CN113469963B - Pulmonary artery image segmentation method and device - Google Patents


Info

Publication number
CN113469963B
Authority
CN
China
Prior art keywords
segmentation
image
pulmonary artery
edge
domains
Prior art date
Legal status
Active
Application number
CN202110706289.9A
Other languages
Chinese (zh)
Other versions
CN113469963A (en)
Inventor
孙岩峰
韦人
邹彤
于荣震
张欢
王瑜
王少康
陈宽
Current Assignee
Infervision Medical Technology Co Ltd
Original Assignee
Infervision Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Infervision Medical Technology Co Ltd
Priority to CN202110706289.9A
Publication of CN113469963A
Application granted
Publication of CN113469963B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 20/00 Machine learning
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
              • G06T 7/0012 Biomedical image inspection
            • G06T 7/10 Segmentation; Edge detection
              • G06T 7/11 Region-based segmentation
              • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10072 Tomographic images
                • G06T 2207/10081 Computed x-ray tomography [CT]
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20084 Artificial neural networks [ANN]
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30004 Biomedical image processing
                • G06T 2207/30061 Lung
                • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a pulmonary artery image segmentation method and apparatus. The method includes: inputting image data into a segmentation model with a self-attention distillation (SAD) mechanism to obtain an initial segmentation image, where the segmentation model is provided with preset edge weights that guide it to segment the edge region of the image data; and screening connected domains in the initial segmentation image to obtain a final segmentation image. By introducing a self-attention distillation mechanism into the segmentation model and applying a higher weight to the edge region, the model is forced to strengthen its segmentation of the edge region, which improves the segmentation accuracy of the edge region.

Description

Pulmonary artery image segmentation method and device
Technical Field
The application relates to the technical field of deep learning, in particular to a pulmonary artery image segmentation method and device.
Background
At present, the determination of pulmonary hypertension is mainly based on the diameters of the pulmonary artery trunk and the aortic trunk in medical images (e.g., chest CT images). When the measured diameter of the pulmonary artery trunk is greater than or equal to 29 mm, or the ratio of the pulmonary artery trunk diameter to the aortic diameter is greater than 1, pulmonary hypertension can be predicted. Since the decision is based on the pulmonary artery trunk diameter and the aortic trunk diameter, it is particularly important to determine the edges of the pulmonary artery and the aorta accurately. In the prior art, segmentation of the pulmonary artery and the aorta is mainly performed on the medical image as a whole, and the result emphasizes the overall segmentation of the pulmonary artery and the aorta, so the edge segmentation accuracy of the pulmonary artery and the aorta is low.
Therefore, how to improve the edge segmentation accuracy of the pulmonary artery and the aorta is a technical problem which needs to be solved urgently.
Disclosure of Invention
In view of this, embodiments of the present application provide a pulmonary artery image segmentation method and apparatus, which can improve the segmentation accuracy of the pulmonary artery and the aorta vessel edge in the pulmonary artery image.
In a first aspect, an embodiment of the present application provides a pulmonary artery image segmentation method, including: inputting image data into a segmentation model with a self-attention distillation SAD mechanism to obtain an initial segmentation image, wherein the segmentation model is provided with preset edge weights which guide the segmentation model to segment the edge region of the image data; and screening connected domains in the initial segmentation image to obtain a final segmentation image.
In some embodiments of the present application, the initial segmentation image comprises a vessel segmentation image, and inputting the image data into a segmentation model having a self-attention-distilling SAD mechanism to obtain the initial segmentation image comprises: inputting image data into a segmentation model with an SAD mechanism, and obtaining spatial edge attention characteristics on spatial dimensions through the SAD mechanism; integrating the spatial edge attention characteristics into a sampling stage of a segmentation model to obtain blood vessel image characteristics of image data; regions in the image data are labeled according to the blood vessel image features to obtain a blood vessel segmentation image.
In some embodiments of the present application, inputting the image data into the segmentation model and obtaining the spatial edge attention features in the spatial dimension through the SAD mechanism comprises: extracting a blood vessel feature set from the image data through a plurality of convolution layers in the segmentation model with the SAD mechanism; and learning a self-attention matrix in the spatial dimension from the blood vessel feature set to obtain the spatial edge attention features.
In some embodiments of the present application, the segmentation model is provided with preset edge weights to guide the segmentation model to segment the edge region of the image data, including: labeling the pulmonary artery and the aorta in the sample image data, and obtaining an edge segmentation image of the pulmonary artery and the aorta through morphological operation; determining the preset edge weight of each pixel point in the edge region according to the edge segmentation image, wherein the preset edge weight of the edge region is higher than the weight of the non-edge region; and adjusting parameters of the segmentation network according to the preset edge weight to obtain a segmentation model.
In some embodiments of the present application, determining the preset edge weight of each pixel point in the edge region according to the edge segmentation image includes: setting the weight of the pixel values of the edge region as a first preset weight; calculating a second preset weight of the edge region through a focal loss function; and summing the first preset weight and the second preset weight to determine the preset edge weight.
In some embodiments of the present application, screening the connected domains in the initial segmentation image to obtain the final segmentation image includes: dividing the connected domains in the initial segmentation image into a plurality of first pulmonary artery connected domains, a plurality of first aorta connected domains and a plurality of first bifurcation point connected domains; discarding the first pulmonary artery connected domains whose connected area is smaller than a first preset area to obtain a plurality of second pulmonary artery connected domains; discarding the first aorta connected domains whose connected area is smaller than a second preset area to obtain a plurality of second aorta connected domains; discarding the first bifurcation point connected domains whose connected area is smaller than a third preset area to obtain a plurality of second bifurcation point connected domains; and determining the final segmentation image based on the midpoints of the second pulmonary artery connected domains, the second aorta connected domains and the second bifurcation point connected domains.
In some embodiments of the present application, determining the final segmentation image based on the midpoints of the second pulmonary artery connected domains, the second aorta connected domains and the second bifurcation point connected domains comprises: calculating the distance between the midpoint of each second pulmonary artery connected domain and the midpoint of each second bifurcation point connected domain, and determining the second pulmonary artery connected domain and the second bifurcation point connected domain corresponding to the minimum distance; calculating the distance between the midpoint of the second bifurcation point connected domain corresponding to the minimum distance and the midpoint of each second aorta connected domain to determine the second aorta connected domain with the minimum distance; and merging and outputting the second pulmonary artery connected domain, the second bifurcation point connected domain and the second aorta connected domain corresponding to the minimum distances to determine the final segmentation image.
In some embodiments of the present application, before inputting the pulmonary artery image into the segmentation model with the self-attention distillation SAD mechanism to obtain the initial segmentation image, the method further includes: selecting a preset number of consecutive layers of image data from a CT image; and extracting the intermediate layer data among the preset number of layers of image data and inputting it, as the image data, into the segmentation model with the SAD mechanism.
In a second aspect, an embodiment of the present application provides a pulmonary artery image segmentation apparatus, including: an acquisition module, configured to input the pulmonary artery image into a segmentation model with a self-attention distillation SAD mechanism to obtain an initial segmentation image, wherein the segmentation model is provided with preset edge weights that guide the segmentation model to segment the edge region of the image data; and a screening module, configured to screen the connected domains in the initial segmentation image to obtain a final segmentation image.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a computer program for executing the pulmonary artery image segmentation method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a processor; a memory for storing processor executable instructions, wherein the processor is adapted to perform the method for pulmonary artery image segmentation according to the first aspect.
The embodiments of the application provide a pulmonary artery image segmentation method and apparatus. A self-attention distillation mechanism is introduced into the segmentation model so that, in the process of extracting image features, the edge features of the image data are extracted in the spatial dimension, which improves the segmentation accuracy of edge regions in the pulmonary artery image. Meanwhile, during training, weights are set for the edge region, and a higher weight value is applied to the edge region, so that the model is forced to strengthen its segmentation of the edge region.
Drawings
Fig. 1 is a flowchart illustrating a pulmonary artery image segmentation method according to an exemplary embodiment of the present disclosure.
FIG. 2 is a schematic diagram of an initial segmented image provided by an exemplary embodiment of the present application.
Fig. 3 is a flowchart illustrating preset edge weights of a pulmonary artery image segmentation method according to an exemplary embodiment of the present application.
Fig. 4 is a schematic flowchart of a method for segmenting a pulmonary artery image according to an exemplary embodiment of the present application, for obtaining a final segmented image.
Fig. 5 is a schematic flowchart of a method for segmenting a pulmonary artery image according to another exemplary embodiment of the present application for obtaining a final segmented image.
Fig. 6 is a flowchart illustrating a pulmonary artery image segmentation method according to another exemplary embodiment of the present application.
Fig. 7 is a schematic structural diagram of a pulmonary artery image segmentation apparatus according to an exemplary embodiment of the present application.
Fig. 8 is a block diagram of an electronic device for pulmonary artery image segmentation provided by an exemplary embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the prediction and judgment of pulmonary hypertension, pulmonary hypertension is reported when the diameter of the pulmonary artery trunk on a plain-scan image is greater than or equal to 33 mm, or when the diameter of the pulmonary artery trunk on an enhanced image is greater than or equal to 29 mm. When the plain-scan and enhanced scenarios are not distinguished, pulmonary hypertension is reported when the diameter of the pulmonary artery trunk is greater than or equal to 29 mm. The ratio of the pulmonary artery caliber to the aortic caliber is also calculated, and suspected pulmonary hypertension is reported if the ratio is greater than 1.
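The decision logic above can be summarized in a short sketch. This is only an illustration of the reported thresholds; the function name, the combination of the diameter and ratio criteria with a logical OR, and the scan-type handling are assumptions rather than part of the claimed method.

```python
def report_pulmonary_hypertension(pa_trunk_mm, aorta_mm, scan_type=None):
    """Illustrative check of the diameter and ratio criteria described above.

    pa_trunk_mm: pulmonary artery trunk diameter in millimetres.
    aorta_mm: aortic trunk diameter in millimetres.
    scan_type: 'plain', 'enhanced', or None when the scenario is not distinguished.
    """
    if scan_type == "plain":
        diameter_flag = pa_trunk_mm >= 33
    else:  # 'enhanced' or undistinguished scenario
        diameter_flag = pa_trunk_mm >= 29
    ratio_flag = (pa_trunk_mm / aorta_mm) > 1
    return diameter_flag or ratio_flag
```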
Therefore, due to the particularity of the pulmonary hypertension prediction scenario, the requirement on the segmentation accuracy of the outer edge of the blood vessels (namely, the pulmonary artery and the aorta) is high.
Fig. 1 is a flowchart illustrating a pulmonary artery image segmentation method according to an exemplary embodiment of the present disclosure. The method of fig. 1 is performed by a computing device, e.g., a server. As shown in fig. 1, the pulmonary artery image segmentation method includes the following steps.
110: the image data is input into a segmentation model with a self-attention-distilling mechanism to obtain an initial segmentation image.
In an embodiment, the segmentation model is provided with a preset edge weight, and the preset edge weight guides the segmentation model to segment the edge region of the image data. Wherein the image data is pulmonary artery image data.
Specifically, the segmentation model may use a general ResUnet segmentation network, and the segmentation network is not specifically limited in the embodiments of the present application. The ResUnet segmentation network includes an encoder corresponding to a feature-extraction downsampling stage and a decoder corresponding to a feature-extraction upsampling stage. A Self-Attention Distillation (SAD) mechanism can be introduced between adjacent stages of the encoder; for example, a SAD module is introduced for each downsampling block, and the higher-level features, which are rich in semantic information, serve as attention to guide the representation learning of the lower-level features.
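A minimal sketch of what such a SAD module can look like between two adjacent encoder blocks is given below, assuming PyTorch feature maps. The attention-map formulation (channel-wise sum of squared activations followed by a spatial softmax) and the MSE distillation loss follow common SAD practice and are assumptions; the patent does not specify the exact form.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    # Collapse channels into a spatial attention map: sum of squared activations,
    # then a softmax over spatial positions so maps from different blocks are comparable.
    b, _, h, w = feat.shape
    amap = feat.pow(2).sum(dim=1, keepdim=True)            # (B, 1, H, W)
    amap = F.softmax(amap.view(b, -1), dim=1).view(b, 1, h, w)
    return amap

def sad_loss(low_feat, high_feat):
    # Self-attention distillation: the attention map of a lower (earlier) encoder block
    # is pushed towards the attention map of the adjacent higher block, whose richer
    # semantics act as the teacher.
    low_map = attention_map(low_feat)
    high_map = attention_map(high_feat)
    high_map = F.interpolate(high_map, size=low_map.shape[-2:],
                             mode="bilinear", align_corners=False)
    return F.mse_loss(low_map, high_map.detach())
```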
The image data may be a chest or abdomen image including pulmonary arteries, such as a Computed Tomography (CT) image, a Digital Radiography (DR) image, or a Magnetic Resonance Imaging (MRI) image, which is not limited in this embodiment.
Preferably, the image data of the embodiments of the present application is CT image data, and the CT image data covers both plain-scan and enhanced scenarios.
The image data input into the segmentation model may be pulmonary artery image data in DICOM format, and a mediastinal window (window width 400, window level 40) may be applied to the input DICOM image.
In one embodiment, after the image data is input into the segmentation model with the self-attention distillation mechanism, the image data is preprocessed to enhance image recognition. The preprocessing may include mapping the intensity values into the interval (0, 255) to highlight the active regions in the image data, such as the pulmonary artery and the aorta. The preprocessing may also include resizing the data to (9, 256, 256) to improve the signal-to-noise ratio of the active regions (e.g., the pulmonary artery and the aorta).
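A sketch of this preprocessing, assuming a NumPy volume in Hounsfield units, is shown below. The function name, the linear mapping into (0, 255), and the use of scipy zoom for resizing to (9, 256, 256) are illustrative assumptions; only the window settings and target shape come from the text above.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume_hu, window_width=400, window_level=40, out_shape=(9, 256, 256)):
    """Apply a mediastinal window, map intensities to (0, 255), and resize."""
    low = window_level - window_width / 2.0     # lower bound of the window, in HU
    high = window_level + window_width / 2.0    # upper bound of the window, in HU
    windowed = np.clip(volume_hu, low, high)
    scaled = (windowed - low) / (high - low) * 255.0            # map into (0, 255)
    factors = [o / s for o, s in zip(out_shape, scaled.shape)]  # per-axis zoom factors
    return zoom(scaled, factors, order=1)                       # resize to (9, 256, 256)
```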
In an example, the input image data may also include a lung segmentation bounding box. When pulmonary artery image data with a lung segmentation bounding box is input, the data can be clipped according to the lung segmentation bounding box, for example, by performing a crop operation according to the lung segmentation bounding box.
The image data is input into a segmentation model with an SAD mechanism, and the segmentation adopts a 2.5D dual-branch scheme; that is, the segmentation model contains two branches, one of which segments the blood vessels (pulmonary artery and aorta) to obtain a blood vessel segmentation image, while the other segments the bifurcation points to obtain a bifurcation point segmentation image. The initial segmentation image may be obtained based on the segmentation model with the SAD mechanism, and it may be a superposition of the blood vessel segmentation image and the bifurcation point segmentation image.
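One possible form of the two output branches is sketched below as a pair of 1x1 convolution heads on a shared decoder feature map. This is a hedged illustration only, and the class counts (background / aorta / pulmonary artery for the vessel branch, background / bifurcation for the other branch) are assumptions rather than details given in the text.

```python
import torch.nn as nn

class DualBranchHead(nn.Module):
    """Two segmentation branches on a shared decoder feature map:
    one labels vessels, the other labels bifurcation points."""
    def __init__(self, in_channels):
        super().__init__()
        self.vessel_head = nn.Conv2d(in_channels, 3, kernel_size=1)       # bg / aorta / pulmonary artery
        self.bifurcation_head = nn.Conv2d(in_channels, 2, kernel_size=1)  # bg / bifurcation point

    def forward(self, decoder_feat):
        return self.vessel_head(decoder_feat), self.bifurcation_head(decoder_feat)
```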
For example, referring to the initial segmentation image shown in fig. 2, the initial segmentation image includes a blood vessel segmentation image including an aorta segmentation region S1 and a pulmonary artery segmentation region S2, and a bifurcation segmentation image including a bifurcation segmentation region S3. Wherein the hatching between S2 and S3 in fig. 2 is the overlapping region of the blood vessel segmentation image and the bifurcation segmentation image after superposition.
In one example, the vessel segmentation image may be obtained based on the segmentation model with the SAD mechanism, where the spatial edge attention feature is obtained by learning a self-attention matrix in the spatial dimension through the self-attention distillation mechanism. The spatial edge attention feature is fused into a sampling stage (such as the down-sampling stage) of the segmentation model to obtain the blood vessel image features of the image data. Regions in the image data are then labeled according to the blood vessel image features to obtain the blood vessel segmentation image.
In an example, the bifurcation point segmentation image may be obtained from bifurcation point image features extracted by the segmentation model with the SAD mechanism. Regions in the image data are labeled according to the bifurcation point image features to obtain the bifurcation point segmentation image.
It should be appreciated that the segmentation model may set a higher preset edge weight during the training phase. Specifically, the sample image data can be labeled directly to obtain a pulmonary artery labeling result and an aorta labeling result. A pulmonary artery edge segmentation image and an aorta edge segmentation image are then obtained through morphological operations; for example, an erosion operation is performed on the sample image data, followed by a dilation operation, to obtain the pulmonary artery edge segmentation image and the aorta edge segmentation image. The preset edge weights corresponding to the pixel points of the edge regions in the pulmonary artery edge segmentation image and the aorta edge segmentation image are determined, where the preset edge weight of the edge region is higher than the weight of the non-edge region. The parameters of the ResUnet segmentation network are adjusted according to the preset edge weights set for the edge region to obtain the segmentation model applied in the embodiments of the present application.
120: and screening connected domains in the initial segmentation image to obtain a final segmentation image.
Specifically, due to the influence of the segmentation model calculation method, the output initial segmented image has many false positive regions caused by erroneous segmentation of the abdominal cavity or other parts in addition to the true positive regions, and therefore the false positive regions need to be removed.
The initial segmentation image includes a plurality of connected domains, such as pulmonary artery connected domains, aorta connected domains, and bifurcation point connected domains. The original connected domains included in the initial segmentation image are divided into: a plurality of first pulmonary artery connected domains, a plurality of first aorta connected domains, and a plurality of first bifurcation point connected domains.
In an example, the first pulmonary artery connected domains, the first aorta connected domains and the first bifurcation point connected domains are re-labeled, and during labeling only the connected domains whose connected area is higher than 10% of the total connected area of their category (or, alternatively, higher than 15% of the total connected area of their category) are retained; this threshold is not specifically limited in the embodiments of the present application.
In one embodiment, the first pulmonary artery connected domains whose connected area is smaller than the first preset area are discarded to obtain a plurality of second pulmonary artery connected domains. The first aorta connected domains whose connected area is smaller than the second preset area are discarded to obtain a plurality of second aorta connected domains. The first bifurcation point connected domains whose connected area is smaller than the third preset area are discarded to obtain a plurality of second bifurcation point connected domains. The first preset area, the second preset area and the third preset area may be flexibly set according to the actual connected area of each connected domain, which is not specifically limited in the embodiments of the present application.
In one embodiment, the distance between the midpoint (or centroid) of each second pulmonary artery connected domain and the midpoint of each second bifurcation point connected domain is calculated, the second pulmonary artery connected domain and the second bifurcation point connected domain corresponding to the minimum distance are retained, and the remaining second pulmonary artery connected domains and second bifurcation point connected domains are discarded. The distance between the midpoint of the second bifurcation point connected domain corresponding to the minimum distance and the midpoint of each second aorta connected domain is then calculated, the second aorta connected domain with the minimum distance to that bifurcation point connected domain is retained, and the remaining second aorta connected domains are discarded. Finally, the second pulmonary artery connected domain, the second bifurcation point connected domain and the second aorta connected domain corresponding to the minimum distances are merged and output to determine the final segmentation image.
It should be noted that, in the embodiments of the present application, the training parameters of the network model are adjusted according to the segmentation loss and the distillation loss to obtain the segmentation model applied in the present application. In the training-stage loss, self-attention guidance of the edge region strengthens the extraction of edge information and features by the segmentation model, which further improves the segmentation effect.
It should be noted that the pulmonary artery images in both the plain-scan and enhanced scenarios may use image data in DICOM format. In addition, the segmentation loss, the distillation loss and the focal loss described in the following embodiments are combined to train the segmentation model applied in the embodiments of the present application, so that the technical solution of the embodiments of the present application can be applied to both plain-scan and enhanced scenarios.
Therefore, by introducing the self-attention distillation mechanism into the segmentation model, the edge features of the image data are extracted in the spatial dimension during feature extraction, which improves the segmentation accuracy of the edge region in the pulmonary artery image. Meanwhile, during training, weights are set for the edge region, and a higher weight value is applied to the edge region, so that the model is forced to strengthen its segmentation of the edge region.
In an embodiment of the present application, inputting image data into a segmentation model having a self-attention-distilling SAD mechanism to obtain an initial segmentation image comprises: inputting image data into a segmentation model with an SAD mechanism, and obtaining spatial edge attention characteristics on spatial dimensions through the SAD mechanism; integrating the spatial edge attention characteristics into a sampling stage of a segmentation model to obtain blood vessel image characteristics of image data; regions in the image data are labeled according to the blood vessel image features to obtain a blood vessel segmentation image.
In an embodiment, the initial segmentation image comprises a vessel segmentation image.
Specifically, a blood vessel feature set in the image data is extracted through a plurality of convolution layers in the segmentation model with the SAD mechanism, where the blood vessel feature set comprises a blood vessel main-body feature set and a blood vessel edge feature set. The blood vessel edge feature set in the blood vessel feature set learns a self-attention matrix in the spatial dimension through the self-attention distillation mechanism to obtain the spatial edge attention feature in the spatial dimension.
The spatial edge attention feature is then fused into the encoder (namely, the down-sampling stage) of the segmentation model, and the blood vessel image features corresponding to the image data are obtained, where the blood vessel image features comprise blood vessel main-body image features and blood vessel edge image features obtained based on the spatial edge attention feature. According to the blood vessel image features, the pixels of the vessel edge region (i.e., the edge of the pulmonary artery and the aorta) and the pixels of the non-edge region (i.e., the blood vessel main-body region) in the image data are binary-labeled, and the labeled image data is taken as the blood vessel segmentation image.
Therefore, the self-attention matrix obtained by the self-attention distillation mechanism in the embodiment of the application improves the segmentation precision of the edge region in the image data.
In an embodiment of the present application, inputting the image data into the segmentation model and obtaining the spatial edge attention features in the spatial dimension through the SAD mechanism includes: extracting a blood vessel feature set from the image data through a plurality of convolution layers in the segmentation model with the SAD mechanism; and learning a self-attention matrix in the spatial dimension from the blood vessel feature set to obtain the spatial edge attention feature.
Specifically, the segmentation model with the SAD mechanism includes a plurality of convolution layers. After the first convolution layer, the processing is treated as 2D image processing; that is, all convolution operations after the first convolution layer take the form of 2D operations.
Original blood vessel features of different scales are extracted from the preprocessed image data through the plurality of convolution layers in the segmentation model, where the original blood vessel features comprise original blood vessel main-body features and original blood vessel edge features. The extracted original blood vessel features of different scales are combined to obtain a blood vessel feature set, which comprises a blood vessel edge feature set. The blood vessel edge feature set in the blood vessel feature set learns a self-attention matrix in the spatial dimension through the self-attention distillation mechanism to obtain the spatial edge attention feature.
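One way to realize a learnable self-attention matrix over spatial positions is sketched below, assuming a PyTorch feature map carrying the vessel edge features. The query/key/value projections, the channel reduction factor and the residual fusion are assumptions for illustration; the patent only states that a self-attention matrix is learned in the spatial dimension.

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Learn an (H*W) x (H*W) attention matrix over spatial positions and use it
    to re-weight the input features (a sketch of the spatial edge attention)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat):
        b, c, h, w = feat.shape
        q = self.query(feat).flatten(2).transpose(1, 2)      # (B, HW, C')
        k = self.key(feat).flatten(2)                        # (B, C', HW)
        attn = torch.softmax(q @ k, dim=-1)                  # (B, HW, HW) spatial self-attention matrix
        v = self.value(feat).flatten(2)                      # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)    # attended features
        return out + feat                                    # residual fusion into the encoder path
```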
Therefore, the self-attention distillation mechanism is introduced into the segmentation model, so that the edge features of the image data are extracted from the spatial dimension in the process of extracting the image features, and the segmentation accuracy of the edge region in the pulmonary artery image is improved.
Fig. 3 is a flowchart illustrating preset edge weights of a pulmonary artery image segmentation method according to an exemplary embodiment of the present application. The method of fig. 3 is performed by a computing device, e.g., a server. As shown in fig. 3, the method for presetting the edge weight includes the following steps.
210: labeling the pulmonary artery and the aorta in the sample image data, and obtaining an edge segmentation image of the pulmonary artery and the aorta through morphological operation.
Specifically, in the process of training the segmentation model with the SAD mechanism, the sample image data can be directly labeled to obtain a pulmonary artery labeling result and an aorta labeling result.
Through morphological operations, pulmonary artery edge segmentation images and aorta edge segmentation images are obtained. The morphological operation may include dilation, erosion, open operation, and close operation, and the morphological operation is not particularly limited in the embodiments of the present application.
For example, the sample image data is first subjected to an erosion operation and then a dilation operation to obtain the pulmonary artery edge segmentation image and the aorta edge segmentation image.
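A small sketch of this kind of morphological edge extraction on a binary label mask is given below, using scipy. Deriving the edge band as the difference between the dilated and eroded masks, and the number of iterations, are assumptions about how the erosion-then-dilation recipe is turned into an edge region; the text above only names the operations.

```python
from scipy.ndimage import binary_erosion, binary_dilation

def edge_region(label_mask, iterations=2):
    """Derive an edge band from a binary vessel label mask (sketch)."""
    mask = label_mask.astype(bool)
    eroded = binary_erosion(mask, iterations=iterations)     # shrink the labelled vessel
    dilated = binary_dilation(mask, iterations=iterations)   # grow the labelled vessel
    return dilated ^ eroded                                  # ring of pixels around the boundary
```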
220: and determining the preset edge weight of each pixel point in the edge region according to the edge segmentation image.
In one embodiment, the default edge weight of the edge region is higher than the weight of the non-edge region.
Specifically, the preset edge weight may be a sum of a first preset weight and a second preset weight. The first preset weight may be a weight value assigned to the pixels of the edge region in the calibrated pulmonary artery edge segmentation image and aorta edge segmentation image, and the first preset weight may be 10, 11, or 12. The second preset weight may be a weight value of the edge region calculated by a focal loss function; the focal loss function gradually applies a high weight to regions that are difficult to segment in the training sample image data (e.g., the edge region), that is, it adjusts the weight value dynamically.
That is to say, in the embodiments of the present application, the statically set first preset weight and the dynamically calculated focal loss weight (i.e., the second preset weight) act jointly to apply a higher loss to the edge region than to other regions, so as to force the model to strengthen the segmentation effect of the edge region.
230: and adjusting parameters of the segmentation network according to the preset edge weight to obtain a segmentation model.
Specifically, the parameters of the ResUnet segmentation network are adjusted according to the preset edge weights set for the edge regions, so that the segmentation model with the SAD mechanism applied in the embodiment of the present application is obtained.
Therefore, the pixel values in the edge region are set to be higher weighted values in the embodiment of the application, so that the influence of the segmentation prediction result of the pixel points in the edge region on the edge loss is improved, and the segmentation precision of the edge region is improved.
In an embodiment of the present application, determining the preset edge weight of each pixel point in the edge region according to the edge segmentation image includes: setting the weight of the pixel values of the edge region as a first preset weight; calculating a second preset weight of the edge region through a focal loss function; and summing the first preset weight and the second preset weight to determine the preset edge weight.
Specifically, the first preset weight may be set by setting a weight of pixel values of an edge region in the pulmonary artery edge segmentation image and the aorta edge segmentation image as the first preset weight, where the first preset weight may be 10, 11, or 12, and the first preset weight is not particularly limited in this embodiment of the application.
The second preset weight may be set dynamically based on the focal loss function. The focal loss function can reduce the weight of non-edge-region samples during training, thereby mining the edge region (i.e., the difficult samples).
The statically set first preset weight and the dynamically set second preset weight calculated based on the focal loss function are added to determine the preset edge weight.
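A per-pixel loss built in this spirit is sketched below, assuming PyTorch logits, an integer label map, and a binary edge mask such as the one produced by the morphological step above. The cross-entropy base loss, the focal modulating factor (1 - p_t)^gamma used as the dynamic weight, gamma = 2, and the default static weight of 10 are assumptions; the text only states that a static edge weight (e.g., 10, 11 or 12) and a focal-loss-derived weight are summed.

```python
import torch
import torch.nn.functional as F

def edge_weighted_loss(logits, target, edge_mask, static_weight=10.0, gamma=2.0):
    """Per-pixel loss whose weight is the sum of a static edge weight and a
    focal-loss style dynamic weight (sketch)."""
    ce = F.cross_entropy(logits, target, reduction="none")   # (B, H, W) per-pixel cross entropy
    pt = torch.exp(-ce)                                      # probability assigned to the true class
    dynamic_weight = (1.0 - pt) ** gamma                     # focal term: hard pixels weigh more
    static = static_weight * edge_mask.float()               # fixed boost on edge pixels only
    return ((static + dynamic_weight) * ce).mean()
```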
Therefore, the embodiment of the application combines the statically set weight with the dynamically set weight to apply higher loss to the edge region than to other regions, so as to force the model to strengthen the segmentation effect of the edge region.
Fig. 4 is a schematic flowchart of a method for segmenting a pulmonary artery image according to an exemplary embodiment of the present application, for obtaining a final segmented image. The embodiment of fig. 4 is an example of the embodiment of fig. 1, and the same parts are not repeated herein, and the differences are mainly described herein. As shown in fig. 4, the method of acquiring the final segmentation image includes the following.
310: the connected domain in the initial segmentation image is divided into a plurality of first pulmonary artery connected domains, a plurality of first aorta connected domains and a plurality of first branch point connected domains.
Specifically, the initial segmentation image includes a plurality of connected domains, and the connected domains are distinguished according to semantic results. And re-labeling the connected domains included in the initial segmentation image to obtain a plurality of first pulmonary artery connected domains, a plurality of first aorta connected domains and a plurality of first branch point connected domains.
320: discarding the connected areas in the first pulmonary artery connected domains with the connected areas smaller than the first preset area to obtain a second pulmonary artery connected domains.
Specifically, the first predetermined area may be 10% of the total area of the first pulmonary artery communication region, and the first predetermined area is not particularly limited in the embodiment of the present application.
In one example, first pulmonary artery communication domains of the plurality of first pulmonary artery communication domains having a communication area less than 10% of a total area of the first pulmonary artery communication domains are discarded, and a plurality of second pulmonary artery communication domains are obtained.
330: discarding the connected areas in the plurality of first aortic connected domains which are smaller than the second preset area to obtain a plurality of second aortic connected domains.
Specifically, the second preset area may be 10% of the total area of the first aorta communicating region, and the second preset area is not particularly limited in the embodiments of the present application.
In one example, first aortic communication regions of the plurality of first aortic communication regions having a communication area less than 10% of a total area of the first aortic communication regions are discarded, resulting in a plurality of second aortic communication regions.
340: discarding the communication areas of the plurality of first branch point communication domains, which are smaller than a third preset area, so as to obtain a plurality of second branch point communication domains;
specifically, the third preset area may be 10% of the total area of the first branch point connected domain, and the third preset area is not specifically limited in the embodiment of the present application.
In one example, the first branch point connected domains of the plurality of first branch point connected domains, which have a connected area less than 10% of the total area of the first branch point connected domains, are discarded, and a plurality of second branch point connected domains are obtained.
It should be noted that, in the embodiment of the present application, the execution sequence of the steps 320 to 340 is not specifically limited, for example, the steps 320 to 340 may be performed simultaneously, or may be performed in a certain sequence.
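The area-based screening in steps 320 to 340 can be sketched as follows for one class at a time, using scipy connected-component labelling. Treating the preset area as 10% of the class's total foreground area follows the examples above; the function name and the return of both a mask and a component list are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import label

def filter_small_components(binary_mask, min_fraction=0.10):
    """Keep only connected domains whose area reaches `min_fraction` of the
    total foreground area of this class (sketch of steps 320-340)."""
    labelled, num = label(binary_mask)
    total = int(binary_mask.sum())
    kept_mask = np.zeros_like(binary_mask, dtype=bool)
    kept_components = []
    for idx in range(1, num + 1):
        component = labelled == idx
        if component.sum() >= min_fraction * total:
            kept_mask |= component
            kept_components.append(component)
    return kept_mask, kept_components
```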
350: and determining a final segmentation image based on the midpoints of the second pulmonary artery communication domains, the second aorta communication domains and the second bifurcation communication domains.
Specifically, the final segmentation image is determined by calculating the distances of the middle points (or centroids) among the screened second pulmonary artery communication domains, the second aorta communication domains and the second bifurcation communication domains.
It should be noted that, for details of the description of step 350, please refer to the description of the embodiment in fig. 4, and details are not repeated herein to avoid repetition.
Therefore, the false positive region with the small connected region area is eliminated by screening the connected regions in the initial segmentation image.
Fig. 5 is a schematic flowchart of a method for segmenting a pulmonary artery image according to another exemplary embodiment of the present application for obtaining a final segmented image. The embodiment of fig. 5 is an example of the embodiment of fig. 4, and the same parts are not repeated herein, and the differences are mainly described herein. As shown in fig. 5, the method of obtaining the final segmentation image includes the following.
410: and performing distance calculation on the middle point of each second pulmonary artery communication domain and the middle point of each second bifurcation point communication domain, and determining the second pulmonary artery communication domain and the second bifurcation point communication domain corresponding to the minimum distance.
Specifically, the midpoint (or centroid) of each of the second pulmonary artery communication domains in the second pulmonary artery communication domains and the midpoint (or centroid) of each of the second bifurcation point communication domains in the second bifurcation point communication domains are subjected to distance calculation, and one second pulmonary artery communication domain and one second bifurcation point communication domain corresponding to the minimum distance are determined.
420: and performing distance calculation on the midpoint of the second bifurcation connected domain corresponding to the minimum distance and the midpoint of each second aorta connected domain to determine the second aorta connected domain with the minimum distance.
Specifically, the midpoint of the second bifurcation connected domain corresponding to the minimum distance determined in step 410 is subjected to distance calculation with the midpoint of each of the plurality of second aortic connected domains, and one second aortic connected domain having the smallest distance from the second bifurcation connected domain corresponding to the minimum distance determined in step 410 is determined.
430: and combining and outputting the second pulmonary artery communication domain with the minimum distance, the second bifurcation point communication domain with the minimum distance and the second aorta communication domain with the minimum distance to determine a final segmentation image.
Specifically, only the second pulmonary artery communication domain, the second bifurcation point communication domain, and the second aorta communication domain of the minimum distances determined in steps 410 and 420 are retained, and the remaining communication domains are deleted. And then merging and outputting the connected domains separated by the re-marking regions, namely merging the second pulmonary artery connected domain, the second bifurcation connected domain and the second aorta connected domain corresponding to the minimum distance, and outputting a merging result, wherein the merging result is the final segmentation image.
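The centroid-distance selection of steps 410 to 430 is sketched below, assuming the component lists produced by the area filtering above (one boolean mask per connected domain). Using scipy's center_of_mass as the midpoint and the Euclidean distance between centroids are assumptions consistent with the "(or centroid)" wording; the function and variable names are illustrative.

```python
import numpy as np
from scipy.ndimage import center_of_mass

def select_final_mask(pa_components, bifurcation_components, aorta_components):
    """Pick the closest pulmonary artery / bifurcation pair, then the aorta
    component closest to that bifurcation, and merge the three (sketch)."""
    def centroid(component):
        return np.array(center_of_mass(component))

    # Step 410: closest pulmonary artery / bifurcation point pair.
    pa_best, bif_best = min(
        ((pa, bif) for pa in pa_components for bif in bifurcation_components),
        key=lambda pair: np.linalg.norm(centroid(pair[0]) - centroid(pair[1])))
    # Step 420: aorta component closest to the chosen bifurcation point.
    aorta_best = min(aorta_components,
                     key=lambda a: np.linalg.norm(centroid(a) - centroid(bif_best)))
    # Step 430: merge the three retained connected domains into the final mask.
    return pa_best | bif_best | aorta_best
```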
Therefore, in the embodiments of the present application, the plurality of connected domains in the initial segmentation image are screened, false positives caused by erroneous segmentation of the abdomen or other parts are deleted, and the segmentation accuracy of model training is improved.
Fig. 6 is a flowchart illustrating a pulmonary artery image segmentation method according to another exemplary embodiment of the present application. FIG. 6 is an example of the embodiment of FIG. 1, and the same parts are not repeated herein, and the differences are mainly described herein. As shown in fig. 6, the pulmonary artery image segmentation method includes the following steps.
510: and selecting continuous image data with preset layers in a CT image.
Specifically, the number of preset layers may be 7, 8, or 9, and the number of preset layers is not specifically limited in the embodiment of the present application. For example, image data of 9 consecutive slices in a CT image is selected.
Preferably, in order to facilitate the segmentation model to learn the effective features (e.g., pulmonary artery features, aorta features, etc.) of the upper and lower associated layers, the application embodiment sets the preset number of layers to 9.
520: the intermediate layer data among the preset number of layers of image data is extracted and input as image data into a segmentation model with an SAD mechanism.
Specifically, the intermediate layer of image data among the consecutive preset number of layers of image data is extracted as the training data input into the segmentation model with the SAD mechanism. For example, when the preset number of layers is set to 9, the 5th layer of image data is extracted as the training data input into the segmentation model with the SAD mechanism.
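A trivial sketch of this slice selection is shown below for a NumPy CT volume. Returning both the 9-slice block and its middle slice is an assumption about how the neighbouring layers serve as context, since the text only states that the intermediate layer is extracted as the model input.

```python
def middle_slice_sample(ct_volume, start, num_slices=9):
    """Take `num_slices` consecutive slices starting at `start` and return the
    block together with its middle slice (e.g. the 5th of 9)."""
    block = ct_volume[start:start + num_slices]   # (num_slices, H, W)
    middle = block[num_slices // 2]               # intermediate layer used as input
    return block, middle
```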
530: the image data is input into a segmentation model with a self-attention-distilling SAD mechanism to obtain an initial segmentation image.
540: and screening connected domains in the initial segmentation image to obtain a final segmentation image.
Therefore, the image data of the middle layer is extracted for training, computing resources are reduced, and time cost is saved.
Fig. 7 is a schematic structural diagram of a pulmonary artery image segmentation apparatus according to an exemplary embodiment of the present application. As shown in fig. 7, the image segmentation apparatus 600 includes: a selecting module 610, an extracting module 620, an obtaining module 630 and a screening module 640.
The obtaining module 630 is configured to input the pulmonary artery image into a segmentation model with a self-attention-distillation SAD mechanism to obtain an initial segmentation image, where the segmentation model is provided with preset edge weights to guide the segmentation model to segment an edge region of the image data; the screening module 640 is configured to screen connected components in the initial segmented image to obtain a final segmented image.
The embodiment of the application provides a pulmonary artery image segmentation apparatus. A self-attention distillation mechanism is introduced into the segmentation model so that, in the process of extracting image features, the edge features of the image data are extracted in the spatial dimension, which improves the segmentation accuracy of edge regions in the pulmonary artery image. Meanwhile, during training, weights are set for the edge region, and a higher weight value is applied to the edge region, so that the model is forced to strengthen its segmentation of the edge region.
According to an embodiment of the present application, the initial segmentation image includes a blood vessel segmentation image, and the obtaining module 630 is configured to input image data into a segmentation model with an SAD mechanism, and obtain spatial edge attention characteristics in a spatial dimension through the SAD mechanism; integrating the spatial edge attention characteristics into a sampling stage of a segmentation model to obtain blood vessel image characteristics of image data; regions in the image data are labeled according to the blood vessel image features to obtain a blood vessel segmentation image.
According to an embodiment of the present application, the obtaining module 630 is configured to extract a blood vessel feature set from the image data through a plurality of convolution layers in the segmentation model with the SAD mechanism, and to learn a self-attention matrix in the spatial dimension from the blood vessel feature set to obtain the spatial edge attention feature.
According to an embodiment of the present application, the obtaining module 630 is configured to label the pulmonary artery and the aorta in the sample image data, and obtain an edge segmentation image of the pulmonary artery and the aorta through morphological operations; determining the preset edge weight of each pixel point in the edge region according to the edge segmentation image, wherein the preset edge weight of the edge region is higher than the weight of the non-edge region; and adjusting parameters of the segmentation network according to the preset edge weight to obtain a segmentation model.
According to an embodiment of the present application, the obtaining module 630 is further configured to set the weight of the pixel values of the edge region as a first preset weight; calculate a second preset weight of the edge region through a focal loss function; and sum the first preset weight and the second preset weight to determine the preset edge weight.
According to an embodiment of the present application, the screening module 640 is configured to divide the connected domains in the initial segmentation image into a plurality of first pulmonary artery connected domains, a plurality of first aorta connected domains, and a plurality of first bifurcation point connected domains; discard the first pulmonary artery connected domains whose connected area is smaller than a first preset area to obtain a plurality of second pulmonary artery connected domains; discard the first aorta connected domains whose connected area is smaller than a second preset area to obtain a plurality of second aorta connected domains; discard the first bifurcation point connected domains whose connected area is smaller than a third preset area to obtain a plurality of second bifurcation point connected domains; and determine the final segmentation image based on the midpoints of the second pulmonary artery connected domains, the second aorta connected domains and the second bifurcation point connected domains.
According to an embodiment of the present application, the screening module 640 is further configured to determine the final segmentation image based on the midpoints of the second pulmonary artery connected domains, the second aorta connected domains and the second bifurcation point connected domains by: calculating the distance between the midpoint of each second pulmonary artery connected domain and the midpoint of each second bifurcation point connected domain, and determining the second pulmonary artery connected domain and the second bifurcation point connected domain corresponding to the minimum distance; calculating the distance between the midpoint of the second bifurcation point connected domain corresponding to the minimum distance and the midpoint of each second aorta connected domain to determine the second aorta connected domain with the minimum distance; and merging and outputting the second pulmonary artery connected domain, the second bifurcation point connected domain and the second aorta connected domain corresponding to the minimum distances to determine the final segmentation image.
According to an embodiment of the present application, the selecting module 610 is configured to select image data of a predetermined number of consecutive layers in a CT image; the extracting module 620 is configured to extract the middle layer data of the preset number of layers of image data as image data to be input into the segmentation model with the SAD mechanism.
It should be understood that, for the specific working processes and functions of the selecting module 610, the extracting module 620, the obtaining module 630 and the screening module 640 in the foregoing embodiments, reference may be made to the description in the pulmonary artery image segmentation method provided in the foregoing embodiments of fig. 1 to 5, and in order to avoid repetition, details are not described here again.
Fig. 8 is a block diagram of an electronic device 700 for pulmonary artery image segmentation provided by an exemplary embodiment of the present application.
Referring to fig. 8, electronic device 700 includes a processing component 710 that further includes one or more processors, and memory resources, represented by memory 720, for storing instructions, such as applications, that are executable by processing component 710. The application programs stored in memory 720 may include one or more modules that each correspond to a set of instructions. Furthermore, the processing component 710 is configured to execute instructions to perform the above-described pulmonary artery image segmentation method.
The electronic device 700 may also include a power supply component configured to perform power management of the electronic device 700, a wired or wireless network interface configured to connect the electronic device 700 to a network, and an input-output (I/O) interface. The electronic device 700 may operate based on an operating system stored in the memory 720, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A non-transitory computer readable storage medium having instructions stored thereon that, when executed by a processor of the electronic device 700, enable the electronic device 700 to perform a pulmonary artery image segmentation method, comprising: inputting image data into a segmentation model with a self-attention distillation SAD mechanism to obtain an initial segmentation image, wherein the segmentation model is provided with preset edge weights which guide the segmentation model to segment the edge region of the image data; and screening connected domains in the initial segmentation image to obtain a final segmentation image.
All the above optional technical solutions can be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portion thereof that substantially contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in the description of the present application, the terms "first", "second", "third", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modifications, equivalents and the like that are within the spirit and principle of the present application should be included in the scope of the present application.

Claims (10)

1. A pulmonary artery image segmentation method, characterized by comprising the following steps:
inputting image data into a segmentation model with a self-attention distillation (SAD) mechanism to obtain an initial segmentation image, wherein the segmentation model is provided with preset edge weights that guide the segmentation model to segment edge regions of the image data; and
screening connected domains in the initial segmentation image to obtain a final segmentation image,
wherein the screening connected domains in the initial segmentation image to obtain the final segmentation image comprises:
dividing the connected domains in the initial segmentation image into a plurality of first pulmonary artery connected domains, a plurality of first aorta connected domains and a plurality of first bifurcation point connected domains; discarding connected domains among the plurality of first pulmonary artery connected domains whose area is smaller than a first preset area to obtain a plurality of second pulmonary artery connected domains; discarding connected domains among the plurality of first aorta connected domains whose area is smaller than a second preset area to obtain a plurality of second aorta connected domains; discarding connected domains among the plurality of first bifurcation point connected domains whose area is smaller than a third preset area to obtain a plurality of second bifurcation point connected domains; and determining the final segmentation image based on the midpoints of the plurality of second pulmonary artery connected domains, the midpoints of the plurality of second aorta connected domains, and the midpoints of the plurality of second bifurcation point connected domains.
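By way of illustration only, the per-class screening of this claim could look like the sketch below; the label values (1 = pulmonary artery, 2 = aorta, 3 = bifurcation point) and the three preset area thresholds are assumptions made here for readability, not values taken from the application.

```python
import numpy as np
from scipy import ndimage

# Assumed label values and first/second/third preset areas -- not from the application.
PRESET_AREAS = {1: 500, 2: 800, 3: 50}

def split_and_filter(initial_mask: np.ndarray) -> dict:
    """Split the initial segmentation into per-class connected domains and drop small ones."""
    kept = {}
    for label, min_area in PRESET_AREAS.items():
        components, n = ndimage.label(initial_mask == label)
        regions = []
        for i in range(1, n + 1):
            voxels = np.argwhere(components == i)
            if len(voxels) >= min_area:                    # discard domains below the preset area
                regions.append({"mask": components == i,
                                "midpoint": voxels.mean(axis=0)})
        kept[label] = regions                              # the "second" connected domains of this class
    return kept
```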
2. The pulmonary artery image segmentation method according to claim 1, wherein the initial segmentation image includes a blood vessel segmentation image,
wherein the inputting the image data into the segmentation model with the SAD mechanism to obtain the initial segmentation image comprises:
inputting the image data into the segmentation model with the SAD mechanism, and obtaining spatial edge attention features in the spatial dimension through the SAD mechanism;
integrating the spatial edge attention features into a sampling stage of the segmentation model to obtain blood vessel image features of the image data; and
marking regions in the image data according to the blood vessel image features to obtain the blood vessel segmentation image.
3. The pulmonary artery image segmentation method according to claim 2, wherein the inputting the image data into the segmentation model with the SAD mechanism, and the obtaining the spatial edge attention features in the spatial dimension through the SAD mechanism comprise:
extracting a blood vessel feature set from the image data through a plurality of convolution layers in the segmentation model with the SAD mechanism; and
learning an attention matrix in the spatial dimension from the blood vessel feature set to obtain the spatial edge attention features.
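As an illustration of how a spatial attention matrix might be learned from intermediate convolutional features, the sketch below follows a common self-attention distillation recipe (channel-energy attention maps, with shallow layers distilling from deeper ones); this formulation and the way the attention is folded back into the sampling path are assumptions, not the exact mechanism disclosed here.

```python
import torch
import torch.nn.functional as F

def spatial_attention_map(feature: torch.Tensor) -> torch.Tensor:
    """Collapse a (B, C, H, W) feature map into a (B, 1, H, W) spatial attention map."""
    energy = feature.pow(2).sum(dim=1, keepdim=True)               # channel-wise activation energy
    return F.softmax(energy.flatten(2), dim=-1).view_as(energy)    # normalise over spatial positions

def sad_loss(shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
    """Self-attention distillation: the shallow layer's attention mimics the deeper layer's."""
    target = spatial_attention_map(deep)
    target = F.interpolate(target, size=shallow.shape[-2:], mode="bilinear", align_corners=False)
    return F.mse_loss(spatial_attention_map(shallow), target)

def integrate_attention(feature: torch.Tensor) -> torch.Tensor:
    """One possible way to fold the spatial edge attention into the sampling stage."""
    return feature * (1.0 + spatial_attention_map(feature))        # re-weight features by attention
```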
4. The pulmonary artery image segmentation method according to claim 1, wherein providing the segmentation model with the preset edge weights that guide the segmentation model to segment the edge region of the image data comprises:
labeling the pulmonary artery and the aorta in sample image data, and acquiring edge segmentation images of the pulmonary artery and the aorta through morphological operations;
determining the preset edge weight of each pixel point in an edge region according to the edge segmentation images, wherein the preset edge weight of the edge region is higher than the weight of a non-edge region; and
adjusting parameters of a segmentation network according to the preset edge weights to obtain the segmentation model.
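A hedged sketch of how such an edge-weight map could be derived from a labelled sample via morphological operations follows; the one-pixel structuring element and the concrete weight values are assumptions, not figures from the application.

```python
import numpy as np
from scipy import ndimage

def edge_weight_map(label_mask: np.ndarray,
                    edge_weight: float = 5.0,
                    base_weight: float = 1.0) -> np.ndarray:
    """Give boundary pixels of the labelled vessels a higher loss weight than the interior."""
    vessels = label_mask > 0
    # Morphological gradient (dilation minus erosion) yields the edge band.
    edge = ndimage.binary_dilation(vessels) & ~ndimage.binary_erosion(vessels)
    weights = np.full(label_mask.shape, base_weight, dtype=np.float32)
    weights[edge] = edge_weight                    # preset edge weight > non-edge weight
    return weights
```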
5. The pulmonary artery image segmentation method according to claim 4, wherein the determining the preset edge weight of each pixel point in the edge region according to the edge segmentation images comprises:
setting the weight of each pixel in the edge region to a first preset weight;
calculating a second preset weight of the edge region through a focal loss function; and
adding the first preset weight and the second preset weight to determine the preset edge weight.
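The combination of the two weights might be realised as in the sketch below; `gamma`, the first preset weight, and the name `prob_correct` (the model's predicted probability for the ground-truth class at each pixel) are assumptions introduced here for illustration.

```python
import torch

def preset_edge_weight(prob_correct: torch.Tensor, edge_mask: torch.Tensor,
                       first_weight: float = 2.0, gamma: float = 2.0) -> torch.Tensor:
    """Per-pixel weight on the edge region = first preset weight + focal-loss term (1 - p_t)^gamma."""
    second_weight = (1.0 - prob_correct).pow(gamma)    # focal-loss style modulation
    weight = torch.ones_like(prob_correct)
    weight[edge_mask] = first_weight + second_weight[edge_mask]
    return weight
```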
6. The pulmonary artery image segmentation method according to claim 1, wherein the determining the final segmentation image based on the midpoints of the plurality of second pulmonary artery connected domains, the midpoints of the plurality of second aorta connected domains, and the midpoints of the plurality of second bifurcation point connected domains comprises:
calculating the distance between the midpoint of each second pulmonary artery connected domain and the midpoint of each second bifurcation point connected domain, and determining the second pulmonary artery connected domain and the second bifurcation point connected domain corresponding to the minimum distance;
calculating the distance between the midpoint of the second bifurcation point connected domain corresponding to the minimum distance and the midpoint of each second aorta connected domain, and determining the second aorta connected domain with the minimum distance; and
merging and outputting the second pulmonary artery connected domain, the second bifurcation point connected domain and the second aorta connected domain corresponding to the minimum distances to determine the final segmentation image.
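Using the per-class region dictionaries of the earlier screening sketch (each region carrying a boolean `mask` and a `midpoint`), this midpoint matching could be sketched as below; plain Euclidean distances between midpoints are assumed.

```python
import numpy as np

def match_by_midpoints(pulmonary: list, bifurcations: list, aortas: list) -> np.ndarray:
    """Keep the pulmonary-artery, bifurcation-point and aorta domains that lie closest together."""
    # Pulmonary-artery / bifurcation pair with the minimum midpoint distance.
    best_pa, best_bif = min(
        ((pa, bif) for pa in pulmonary for bif in bifurcations),
        key=lambda pair: np.linalg.norm(pair[0]["midpoint"] - pair[1]["midpoint"]))
    # Aorta domain closest to that bifurcation point.
    best_ao = min(aortas,
                  key=lambda ao: np.linalg.norm(ao["midpoint"] - best_bif["midpoint"]))
    return best_pa["mask"] | best_bif["mask"] | best_ao["mask"]   # merged final segmentation mask
```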
7. The pulmonary artery image segmentation method according to claim 1, further comprising, before the inputting the image data into the segmentation model with the SAD mechanism to obtain the initial segmentation image:
selecting a preset number of consecutive layers of image data from a CT image; and
extracting the intermediate layer data from the preset number of layers of image data as the image data to be input into the segmentation model with the SAD mechanism.
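A small sketch of this pre-processing step, assuming the CT image is stored as a (layers, H, W) array and a window of five consecutive layers; the window size is an assumed value, not one stated in the claim.

```python
import numpy as np

def middle_layer_input(ct_volume: np.ndarray, start_index: int, num_layers: int = 5) -> np.ndarray:
    """Select `num_layers` consecutive CT layers and return the middle one as the model input."""
    window = ct_volume[start_index:start_index + num_layers]   # continuous image data with preset layers
    return window[num_layers // 2]                             # intermediate layer data
```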
8. A pulmonary artery image segmentation apparatus, comprising:
an acquisition module, configured to input image data into a segmentation model with a self-attention distillation (SAD) mechanism to obtain an initial segmentation image, wherein the segmentation model is provided with preset edge weights that guide the segmentation model to segment an edge region of the image data; and
a screening module, configured to screen connected domains in the initial segmentation image to obtain a final segmentation image,
wherein the screening module is further configured to divide the connected domains in the initial segmentation image into a plurality of first pulmonary artery connected domains, a plurality of first aorta connected domains and a plurality of first bifurcation point connected domains; discard connected domains among the plurality of first pulmonary artery connected domains whose area is smaller than a first preset area to obtain a plurality of second pulmonary artery connected domains; discard connected domains among the plurality of first aorta connected domains whose area is smaller than a second preset area to obtain a plurality of second aorta connected domains; discard connected domains among the plurality of first bifurcation point connected domains whose area is smaller than a third preset area to obtain a plurality of second bifurcation point connected domains; and determine the final segmentation image based on the midpoints of the plurality of second pulmonary artery connected domains, the midpoints of the plurality of second aorta connected domains, and the midpoints of the plurality of second bifurcation point connected domains.
9. A computer-readable storage medium storing a computer program for executing the pulmonary artery image segmentation method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor,
wherein the processor is configured to perform the pulmonary artery image segmentation method according to any one of claims 1 to 7.
CN202110706289.9A 2021-06-24 2021-06-24 Pulmonary artery image segmentation method and device Active CN113469963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110706289.9A CN113469963B (en) 2021-06-24 2021-06-24 Pulmonary artery image segmentation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110706289.9A CN113469963B (en) 2021-06-24 2021-06-24 Pulmonary artery image segmentation method and device

Publications (2)

Publication Number Publication Date
CN113469963A CN113469963A (en) 2021-10-01
CN113469963B true CN113469963B (en) 2022-04-19

Family

ID=77872750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110706289.9A Active CN113469963B (en) 2021-06-24 2021-06-24 Pulmonary artery image segmentation method and device

Country Status (1)

Country Link
CN (1) CN113469963B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114550171B (en) * 2022-04-22 2022-07-12 珠海横琴圣澳云智科技有限公司 Cell instance segmentation model construction method, cell instance segmentation method and device
CN114820571B (en) * 2022-05-21 2023-05-30 东北林业大学 Quantitative analysis method for pneumonia fibrosis based on DLPE algorithm

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472730A (en) * 2019-08-07 2019-11-19 交叉信息核心技术研究院(西安)有限公司 A kind of distillation training method and the scalable dynamic prediction method certainly of convolutional neural networks
CN111091573B (en) * 2019-12-20 2021-07-20 广州柏视医疗科技有限公司 CT image pulmonary vessel segmentation method and system based on deep learning
CN111126258B (en) * 2019-12-23 2023-06-23 深圳市华尊科技股份有限公司 Image recognition method and related device
CN112862845B (en) * 2021-02-26 2023-08-22 长沙慧联智能科技有限公司 Lane line reconstruction method and device based on confidence evaluation

Also Published As

Publication number Publication date
CN113469963A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
US10867384B2 (en) System and method for automatically detecting a target object from a 3D image
CN111784671B (en) Pathological image focus region detection method based on multi-scale deep learning
CN110706246B (en) Blood vessel image segmentation method and device, electronic equipment and storage medium
CN110298844B (en) X-ray radiography image blood vessel segmentation and identification method and device
CN113436166A (en) Intracranial aneurysm detection method and system based on magnetic resonance angiography data
CN113469963B (en) Pulmonary artery image segmentation method and device
CN112465834B (en) Blood vessel segmentation method and device
JP2008529641A (en) Method for automatic extraction of pulmonary artery tree from 3D medical images
US11810293B2 (en) Information processing device, information processing method, and computer program
EP3722996A2 (en) Systems and methods for processing 3d anatomical volumes based on localization of 2d slices thereof
CN112991346A (en) Training method and training system for learning network for medical image analysis
US8306354B2 (en) Image processing apparatus, method, and program
US8160336B2 (en) Reducing false positives for automatic computerized detection of objects
US20190138694A1 (en) Automatic characterization of agatston score from coronary computed tomography
CN112801999A (en) Method and device for determining heart coronary artery dominant type
CN108765399B (en) Lesion site recognition device, computer device, and readable storage medium
CN112801964B (en) Multi-label intelligent detection method, device, equipment and medium for lung CT image
CN115829980A (en) Image recognition method, device, equipment and storage medium for fundus picture
Wen et al. A novel lesion segmentation algorithm based on U-net network for tuberculosis CT image
CN113129297B (en) Diameter automatic measurement method and system based on multi-phase tumor image
KR101126223B1 (en) Liver segmentation method using MR images
CN112862785B (en) CTA image data identification method, device and storage medium
CN114998582A (en) Coronary artery blood vessel segmentation method, device and storage medium
JP7019104B2 (en) Threshold learning method
CN112785580A (en) Method and device for determining blood vessel flow velocity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant