EP4315237A1 - Systems and methods for automatic extraction of blood vessels - Google Patents

Systems and methods for automatic extraction of blood vessels

Info

Publication number
EP4315237A1
EP4315237A1 (application EP22710859.4A)
Authority
EP
European Patent Office
Prior art keywords
segmentation
topological
blood vessels
neural network
block
Prior art date
Legal status
Pending
Application number
EP22710859.4A
Other languages
German (de)
English (en)
Inventor
Ariel Birenbaum
Ofer Barasofsky
Guy Alexandroni
Irina SHEVLEV
Current Assignee
Covidien LP
Original Assignee
Covidien LP
Priority date
Filing date
Publication date
Application filed by Covidien LP filed Critical Covidien LP
Publication of EP4315237A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/162 Segmentation; Edge detection involving graph-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20072 Graph-based image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular

Definitions

  • This disclosure relates to systems and methods to create 3D anatomic tree structures, which may be used to generate a 3D mesh model of blood vessels of a portion of a patient’s body.
  • the disclosure is directed to systems and methods of automatically creating 3D tree structures of the vasculature of a portion of a patient’s body, e.g., the lungs, using a neural network trained by manually and/or semi-automatically created 3D vascular tree structures.
  • the disclosure features a system including a processor and a memory.
  • the memory has stored thereon a neural network and instructions, which, when executed by the processor, cause the processor to: cause the neural network to segment blood vessels in volumetric images of a portion of a body, yielding segmented blood vessels.
  • the instructions when executed by the processor, further cause the processor to detect roots of the segmented blood vessels and detect endpoints of the blood vessels.
  • the instructions when executed by the processor, further cause the processor to determine the shortest path from each endpoint to each of the roots, and combine the shortest paths to the roots into directed graphs.
  • Implementations of the system may include one or more of the following features.
  • the neural network may use a 3D U-Net style architecture.
  • the instructions, when executed by the processor, may cause the processor to receive annotated volumetric images in which blood vessels are identified and train the neural network with the annotated volumetric images.
  • the instructions, when executed by the processor, may cause the processor to segment blood vessels in the volumetric images using a classical image segmentation method, yielding the annotated volumetric images in which blood vessels are identified.
  • the classical image segmentation method may include an edge-based method, a region-based method, or a thresholding method.
  • the neural network may include a segmentation layer and the instructions, when executed by the processor, may cause the processor to train the segmentation layer with a dice loss.
  • the dice loss may be a weighted dice loss.
  • the neural network may include a topological layer and the instructions, when executed by the processor, may cause the processor to train the topological layer with a topological loss.
  • the neural network may include a classification layer and the instructions, when executed by the processor, may cause the processor to train the classification layer with a cross-entropy loss, a consistency loss, or both a cross-entropy loss and a consistency loss.
  • the neural network may include an encoder that processes the volumetric images and outputs an encoder output, a first decoder coupled to the output of the encoder and that generates a segmentation probability map based on the encoder output, and a second decoder coupled to the output of the encoder and that generates a topological embedding vector, a distance map, and a classification probability map (e.g., an artery and vein probability map) based on the encoder output.
  • the encoder, the first decoder, and the second decoder may each include recurrent convolutional neural networks and squeeze and excite blocks coupled to the recurrent convolutional neural networks, respectively.
  • the second decoder may include a convolution function and a sigmoid activation function that process the topological embedding vector and output the classification probability map.
  • the second decoder may include a convolution function and a rectified linear unit that process the topological embedding vector and output the distance map.
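  • As a rough sketch of the two-decoder output heads described in the preceding bullets (channel counts, module names, and wiring are assumptions; the patent does not disclose code):

```python
import torch.nn as nn

class VesselHeads(nn.Module):
    """Output heads for the two decoders described above: the first
    decoder yields a segmentation probability map (sigmoid); the second
    yields a topological embedding, from which a classification
    probability map (conv + sigmoid) and a distance map (conv + ReLU)
    are derived. Channel counts are illustrative assumptions."""

    def __init__(self, in_ch=32, emb_ch=16):
        super().__init__()
        self.seg_head = nn.Sequential(nn.Conv3d(in_ch, 1, 1), nn.Sigmoid())
        self.topo_head = nn.Conv3d(in_ch, emb_ch, 1)   # topological embedding
        self.cls_head = nn.Sequential(nn.Conv3d(emb_ch, 1, 1), nn.Sigmoid())
        self.dist_head = nn.Sequential(nn.Conv3d(emb_ch, 1, 1), nn.ReLU())

    def forward(self, dec1_out, dec2_out):
        seg = self.seg_head(dec1_out)     # segmentation probability map
        emb = self.topo_head(dec2_out)    # topological embedding vectors
        return seg, emb, self.cls_head(emb), self.dist_head(emb)
```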
  • the portion of the body may be an organ, neck, upper body, or lower body.
  • the organ may be a brain, lung, kidney, liver, stomach, intestine, prostate, rectum, or colon.
  • the disclosure features a method.
  • the method includes receiving a three-dimensional (3D) image data set of a portion of the body and segmenting the 3D image data set to identify blood vessels in the 3D image data set using a neural network model.
  • the method also includes classifying the blood vessels using the neural network model, detecting starting points of the processed blood vessels, and detecting endpoints of the processed blood vessels.
  • the method also includes, for each endpoint, calculating optimal paths from possible starting points to the endpoint, selecting the best starting point from the possible starting points, and setting a class of the path from the best starting point to the endpoint.
  • the method also includes merging paths of the same starting point into a tree structure.
  • Implementations of the method may include one or more of the following features.
  • the blood vessels may be arteries or veins. Detecting starting points and ending points may be performed using a neural network model.
  • the method may include training a topological layer of the neural network model using a topological loss.
  • the method may include training a segmentation layer of the neural network model using dice loss.
  • the method may include weighting the dice loss. Weighting the dice loss may include applying a weight of 0 to the dice loss for unannotated peripheral blood vessels and applying a weight of 1 for annotated peripheral blood vessels.
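  • As an illustration, a minimal sketch of such a weighted dice loss in PyTorch, assuming binary per-voxel targets and a per-voxel weight volume that is 0 for unannotated peripheral vessels and 1 elsewhere (the exact formulation is not given in the source):

```python
import torch

def weighted_dice_loss(pred, target, weight, eps=1e-6):
    """Dice loss with a per-voxel weight volume.

    pred, target, weight: tensors of shape (B, 1, D, H, W).
    Voxels with weight 0 (e.g., unannotated peripheral vessels)
    contribute nothing to either the intersection or the union.
    """
    pred, target = pred * weight, target * weight
    inter = (pred * target).sum(dim=(1, 2, 3, 4))
    union = pred.sum(dim=(1, 2, 3, 4)) + target.sum(dim=(1, 2, 3, 4))
    dice = (2 * inter + eps) / (union + eps)
    return 1 - dice.mean()
```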
  • the method may include computing Euclidean distances of topological embedding vectors, computing topological distances of the topological embedding vectors, and training the neural network model to match the Euclidean distance of topological embedding vectors to corresponding topological distances of the topological embedding vectors.
  • Computing the topological distances of the topological embedding vectors may include computing the topological distances of the topological embedding vectors based on total topological loss. The purpose of the topological loss is to increase the distance between feature spaces of the arteries and veins in the neural network space.
  • the total topological loss may be the sum of topological losses for pairs of points divided by the number of the pairs of points; the topological loss for the pair of points may be a value of an L1 smooth loss function of the pair of points, if the pair of points are in the same class; and the topological loss for the pair of points may be the maximum of 0 or 1/K multiplied by the difference between the constant K and an absolute value of the difference between network topological layer values corresponding to the pair of points, if the pair of points are not in the same class.
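  • Written out, the preceding bullet corresponds to the following per-pair and total losses (notation assumed: x_i is the network topological layer value at point p_i, D the Euclidean distance between embeddings, T the topological distance):

```latex
\ell(p_1, p_2) =
\begin{cases}
  \operatorname{SmoothL1}\!\bigl(D(x_1, x_2) - T(p_1, p_2)\bigr), & p_1, p_2 \text{ in the same class},\\
  \max\!\Bigl(0,\ \tfrac{1}{K}\bigl(K - \lvert x_1 - x_2 \rvert\bigr)\Bigr), & p_1, p_2 \text{ in different classes},
\end{cases}
\qquad
\mathcal{L}_{\mathrm{topo}} = \frac{1}{n} \sum_{(p_1, p_2)} \ell(p_1, p_2)
```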
  • the image data set may be a computed tomography (CT) data set.
  • the method may include generating a 3D mesh model from the tree structure.
  • the method may include displaying the 3D mesh model in a user interface.
  • the method may include presenting a user interface enabling a user to select a starting point, an endpoint, and a path of the blood vessel.
  • the disclosure features a method of generating directed graphs of blood vessels.
  • the method includes receiving a three-dimensional (3D) image data set of a portion of a body and processing the 3D image data set with a neural network to generate a segmentation probability map of blood vessels in the 3D image data set.
  • the method also includes closing at least one hole of the blood vessel in the segmentation probability map.
  • the method also includes detecting starting points of the blood vessels and detecting endpoints of the blood vessels.
  • the method also includes, for each endpoint, tracking the shortest path from the endpoint to each of the starting points, yielding probable paths and selecting the most probable path from the probable paths.
  • the method also includes merging paths having a common starting point to one directed graph and solving for at least one overlap between directed graphs.
  • Implementations of the method may include one or more of the following features.
  • the portion of the body may be a lung and detecting starting points of the blood vessels may include detecting starting points of the blood vessels at or near the heart.
  • the method may include filtering the segmentation probability map with a first threshold, yielding a first original segmentation, adding voxels to the first original segmentation, yielding a first extended segmentation, dilating the first extended segmentation, and removing voxels with low attenuation values from the first extended segmentation, yielding an updated segmentation.
  • the method may include calculating a skeleton of the updated segmentation and adding the skeleton to the first original segmentation.
  • the first threshold may be between about 0.1 and 0.4.
  • the method may include filtering the segmentation probability map with a second threshold, yielding a second original segmentation, calculating local attenuation value statistics based on the 3D image data set, adding voxels that have neighboring voxels of the second original segmentation with the same attenuation value statistics, yielding a second extended segmentation, and combining the first and second extended segmentations, yielding the updated segmentation.
  • the portion of the body may be a brain, lung, kidney, liver, stomach, intestine, prostate, rectum, or colon.
  • FIG. 1 is a block diagram of a system for generating 3D models of arteries and veins in accordance with the disclosure
  • FIG. 2 is a schematic diagram that illustrates challenges addressed by aspects of the disclosure
  • FIG. 3 is a block diagram that illustrates an example of a neural network according to aspects of the disclosure
  • FIG. 4 is a flowchart that illustrates a method including blood vessel analysis according to aspects of the disclosure
  • FIGS. 5A-5C are diagrams that illustrate aspects of the method of performing blood vessel analysis of FIG. 4;
  • FIG. 6 is a diagram that illustrates examples of the input to and the output from the system of FIG. 1;
  • FIGS. 7A and 7B are a block diagram that illustrates an example of the encoder of the neural network of FIG. 3;
  • FIGS. 8A-8C are a block diagram that illustrates an example of the decoders of the neural network of FIG. 3;
  • FIGS. 9A and 9B are schematic diagrams that illustrate topological embedding, which may be performed by the neural network of FIG. 3;
  • FIGS. 10 and 11 are schematic diagrams that illustrate challenges addressed by aspects of the disclosure.
  • FIG. 12 is a diagram that illustrates examples of the outputs of the neural network of FIG. 3;
  • FIG. 13 is a flowchart that illustrates a method of detecting roots for arteries
  • FIG. 14 is a three-dimensional graph that illustrates a result of automatic root detection
  • FIGS. 15A-15C are computed tomography (CT) images that illustrate automatic root detection for arteries;
  • FIG. 16 is a flowchart that illustrates a method of detecting roots for veins
  • FIGS. 17A-17C are CT images that illustrate automatic root detection for veins
  • FIGS. 18A-22 are diagrams that illustrate a method for closing holes in a segmentation
  • FIG. 23 is a flowchart that illustrates a method of closing holes in a segmentation
  • FIG. 24 is a flow diagram that illustrates the method of FIG. 23;
  • FIGS. 25A-25C are flowcharts that illustrate other methods of closing holes in a segmentation
  • FIG. 26A is a diagram of a portion of a 3D vasculature model that illustrates the results of estimating blood vessel radius without filtering
  • FIG. 26B is a diagram of a portion of a 3D vasculature model that illustrates the results of estimating blood vessel radius with filtering
  • FIG. 27 is a diagram that illustrates the merging of paths into a tree
  • FIG. 28 is a diagram that illustrates a blood vessel map in which endpoints of blood vessels are detected
  • FIG. 29 is a diagram that illustrates a 3D graph of connected components of untracked skeleton voxels used to detect endpoints
  • FIG. 30 is a diagram that illustrates a blood vessel skeleton graph used to detect endpoints
  • FIGS. 31-33 are annotated images that illustrate examples of blood vessels misclassified by the neural network
  • FIG. 34 is a schematic diagram that illustrates neighboring voxels and statistics associated with the voxels;
  • FIG. 35 is a flow diagram that illustrates an example of a method of generating 3D models of blood vessels in accordance with the disclosure;
  • FIGS. 36A-36D are diagrams that illustrate a method of creating a blood vessel graph or tree structure
  • FIG. 37 is a flowchart that illustrates a method of processing overlaps between blood vessels
  • FIG. 38A is a diagram that illustrates an example of a 3D vasculature model
  • FIGS. 38B-38D are diagrams of examples of overlap maps that illustrate the processing of overlaps
  • FIG. 39A is a schematic diagram that illustrates an example of blood vessel intersection
  • FIG. 39B is a 3D vasculature model that illustrates an example of blood vessel intersection
  • FIG. 40A is a schematic diagram that illustrates an example of erroneous classification of endpoints
  • FIG. 40B is a 3D vasculature model that illustrates an example of erroneous classification of endpoints
  • FIG. 41A is a schematic diagram that illustrates an example of false segmentation
  • FIG. 41B is a 3D vasculature model that illustrates an example of false segmentation
  • FIG. 42 is a schematic diagram of a computer system capable of executing the methods described herein.
  • This disclosure is directed to improved techniques and methods of automatically extracting blood vessels, e.g., blood vessels of the lungs, from a 3D image data set.
  • These techniques and methods may form part of an algorithm pipeline for generating a 3D model of pulmonary blood vessels using deep learning techniques.
  • the algorithm pipeline may include annotating a 3D image data set, e.g., CT images, segmenting (e.g., via a semantic segmentation method) and classifying the 3D image data set, finding roots (e.g., via a root detection method), closing segmentation holes, finding endpoints (e.g., via an end-point detection method), generating directed graphs, and creating the 3D models based on the directed graphs.
  • Generating directed graphs may include selecting roots, creating shortest paths, merging paths into trees, and/or performing an overlap analysis on the trees. Given a patient CT volume, the methods of this disclosure automatically create 3D anatomical trees of the lungs’ vasculature.
  • the methods include segmentation, which separates the CT images into separate objects.
  • the purpose of the segmentation is to separate the objects that make up the airways and the vasculature (e.g., the luminal structures) from the surrounding lung tissue.
  • the methods also include generating directed graph structures that model the patient’s vasculature. The arteries and veins are separated into different graph structures.
  • the model is later used to generate a 3D object that can be rendered and manipulated in a planning application. This allows clinicians to plan procedures based on which blood vessels should be resected (e.g., in surgery), and which blood vessels should be avoided (e.g., in surgery or ablation procedures).
  • the methods of this disclosure may rely on manually- and/or semi-automatically-created tree structures, which enable creation of ground-truth 3D models used for neural network training and for evaluations identifying structures within 3D image data and 3D models derived therefrom.
  • the improved identification of structures allows for additional analysis of the images and 3D models and enables accurate surgical or treatment planning.
  • the methods of the disclosure may be applied to planning lung cancer ablation therapy, segmentectomy, or lobectomy.
  • the pulmonary vasculature enters the left atrium of the heart. There are usually four pulmonary veins. The pulmonary trunk exits the right ventricle of the heart.
  • a method of this disclosure creates 3D models of the pulmonary vasculature starting from the heart to the periphery of the lungs based on segmented blood vessels.
  • the 3D model models the arteries and veins of the vasculature trees in a tree data structure, such as a directed graph, to enable, among other functions, highlighting of a subsection of the vasculature tree and visualizing of the vasculature tree to a particular generation.
  • the methods of this disclosure may minimize or eliminate the need for manual editing of the 3D model.
  • One challenge associated with visualization is that segmentation of the mediastinum region results in a low contrast between pulmonary blood vessels and the surrounding anatomy. Classification may also be a challenge because arteries and veins may touch each other at some points and have no contrast between them. For classification in the lung region, the challenges include large anatomical variation and arteries and veins that touch at some points with little to no contrast between them.
  • the systems and methods of this disclosure may utilize some anatomical information to improve the deep neural network models.
  • the anatomical information may include connectivity information. For example, every blood vessel can be traced from the periphery to the heart.
  • the anatomical information may also include central region information with low anatomical variation from the heart to the hilum, i.e., the entrance to a lung.
  • the anatomical information may also include peripheral region information, such as airways that often accompany arteries.
  • the methods of this disclosure may be performed by an automatic system framework 100 for three-dimensional (3D) modeling of blood vessels, e.g., pulmonary blood vessels, as illustrated in FIG. 1.
  • the automatic system framework 100 may include a lung segmentation module 110 that acquires or receives an image data set, e.g., a CT image data set, of the lungs, and performs lung segmentation.
  • the lung segmentation module 110 may acquire or receive the image data set from an imaging device. Alternatively, or additionally, the lung segmentation module 110 may read an image data set of the lungs from a memory storing the image data set.
  • the lung segmentation module 110 semantically segments the lungs using an existing deep neural network model.
  • the imaging device may incorporate any imaging modality suitable for capturing and segmenting two-dimensional images of the lungs. While the disclosure refers to lungs, aspects of this disclosure may be applied to other vascularized portions of a patient’s body such as organs, the lower body, the upper body, limbs, or tissue volumes.
  • the automatic system framework 100 may also include a blood vessel analysis module 120, which performs blood vessel analysis based on the segmented lung generated by the lung segmentation module 110.
  • the blood vessel analysis module 120 includes a deep neural network 122, a roots detection module 124, and a blood vessel graphs creation module 126.
  • the blood vessel analysis includes identifying blood vessels in the image data set and processing the identified blood vessels with a deep neural network model of the deep neural network 122.
  • the deep neural network 122 may be based on a deep convolutional network architecture.
  • the deep neural network 122 may also be implemented by a recurrent unit.
  • An add-on module implementing a channel attention mechanism may be added to the deep convolutional network architecture to improve performance.
  • the channel attention mechanism may be squeeze and excitation networks.
  • the automatic system framework 100 may also include a 3D mesh generation module 130 that generates a 3D mesh based on the blood vessel graphs or tree structures generated by the blood vessel graphs creation module 126.
  • the automatic system framework 100 may be implemented by applications or instructions stored in memory of a computer system, e.g., system 4200 of FIG. 42, and executed by a processor of the computer system.
  • the segmentation algorithm performed by the lung segmentation module 110 may face a variety of challenges in segmenting images.
  • the challenges include identifying the outline of vessels in the mediastinum 1202, identifying the outline of arteries “touching” veins 1204, excluding unwanted vessels 1206, 1208, e.g., the aorta, excluding airway walls 1210, and avoiding “loops,” “holes,” discontinuities, and leakages.
  • FIG. 3 shows an example of a neural network that may be used in the systems and methods of this disclosure to address at least the challenges described above.
  • the deep neural network includes an encoder 304, decoders 306a, 306b, a segmentation layer 310, a topological layer 320, a distance map layer 330, and a classification layer 340.
  • the encoder 304 encodes images, such as CT images 302, to generate encoded data.
  • the decoders 306a, 306b decode the encoded data.
  • the segmentation layer 310 segments the decoded data from the decoder 306a using dice loss 312, an example of which is described herein, to generate a segmentation map 315.
  • the segmentation map 315 includes, for each voxel, a probability that the voxel is a blood vessel.
  • the topological layer 320 determines the topological distances between points in the decoded data from the decoder 306b using consistency loss 344 and/or topological loss 322, an example of which is also described herein, to obtain topological embedding vectors 325.
  • the distance map layer 330 determines the Euclidean distances between points in the topological embedding vectors from the topological layer 320 using smooth L1 loss 332, an example of which is also described herein, to obtain a distance map 335.
  • the classification layer 340 generates a classification map 345, which includes, for each voxel, a probability that the voxel is an artery or vein.
  • the classification map 345 is illustrated in FIG. 3 as being overlayed on a CT image.
  • the classification layer 340 generates a classification map 345 based on the topological embedding vectors from the topological layer 320 using cross-entropy loss 342 and consistency loss 344.
  • Each topological embedding vector represents a voxel.
  • the topological embedding vectors indicate the topological distances between pairs of points corresponding to pairs of voxels, yielding graph-like structures from which blood vessel trees can be generated.
  • the consistency loss 344 addresses the situation where the result of the classification layer 340 and the result of the topological layer 320 are inconsistent.
  • the classification layer 340 may indicate that two points belong to the same blood vessel, while the topological layer 320 may indicate that the two points belong to different blood vessels.
  • the classification layer 340 may indicate that two points belong to different blood vessels, while the topological layer 320 may indicate that the two points belong to the same blood vessel.
  • the consistency loss 344 smooths the inconsistencies between the classification layer 340 and the topological layer 320.
  • FIG. 4 illustrates an example of a method that implements the first and second phases.
  • a neural network is trained with dice loss, topological loss, and annotated CT images, in which blood vessels are manually and/or semi-automatically segmented and classified using, for example, suitable annotation tools described herein.
  • the training of the neural network may be performed on hardware separate from the system 4200 of FIG. 42.
  • the neural network may be trained using dedicated hardware with sufficient processing power, for example, a system that includes multiple powerful Graphical Processing Units or a comparable system in the cloud.
  • the neural network may be trained with annotated CT images that have been segmented using a classical image segmentation technique to segment blood vessels in the CT images.
  • the classical image segmentation technique may include, for example, an edge-based technique, a region-based technique, or a thresholding technique.
  • the blood vessels in unannotated CT images are segmented using the trained neural network yielding a segmentation map. Since the segmentation map may contain false negative regions, which may be referred to as “holes,” the segmentation map is processed to close the holes at block 403. After the holes in the segmentation map are closed, the roots or starting points and the endpoints of the segmented blood vessels are automatically detected.
  • the roots may include the arteries origin, the left lung veins origin, and the right lung veins origin.
  • the blood vessel origins are located at the heart. Accordingly, as shown in FIGS. 4 and 5A, at block 404, roots 504 of blood vessels 501 are detected, and at block 406, endpoints 506 of peripheral vessels 502, from which tracking starts, are detected.
  • the optimal or shortest path, e.g., shortest path 508, from each detected endpoint, e.g., endpoint 506, to each detected root, e.g., root 504, is tracked using an optimal or shortest path algorithm.
  • the shortest path algorithm may be Dijkstra’s algorithm.
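  • For illustration, a minimal sketch of such shortest-path tracking over the segmented voxels, using a hand-rolled Dijkstra on a 26-connected voxel grid (the plain Euclidean step cost is an assumption; a real implementation would also penalize class alternation and unlikely curves, as described below):

```python
import heapq

def dijkstra_path(seg, start, goal):
    """Shortest path between two voxels restricted to segmented voxels.

    seg: 3D boolean array (True = blood vessel voxel).
    start, goal: (z, y, x) index tuples.
    """
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == goal:
            break
        if d > dist.get(v, float("inf")):
            continue  # stale heap entry
        for dz, dy, dx in offsets:
            w = (v[0] + dz, v[1] + dy, v[2] + dx)
            if not all(0 <= w[i] < seg.shape[i] for i in range(3)) or not seg[w]:
                continue
            nd = d + (dz * dz + dy * dy + dx * dx) ** 0.5
            if nd < dist.get(w, float("inf")):
                dist[w], prev[w] = nd, v
                heapq.heappush(heap, (nd, w))
    if goal not in prev and goal != start:
        return None  # goal unreachable within the segmentation
    path, v = [goal], goal  # walk predecessors back to the start
    while v != start:
        v = prev[v]
        path.append(v)
    return path[::-1]
```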
  • the most probable path is selected. For example, the shortest path 510 shown in FIG. 5B has a better score than the shortest path 512 because the path 510 has minimal class alternation.
  • the best root is selected from the possible roots and the class of the path from the best root to the endpoint is selected.
  • any path that contains unlikely curves may be rejected in advance of performing the shortest path algorithm.
  • the reconstruction of the paths and the selection of the root may be split into two rounds. In a first round, if, for a given endpoint, there is no path with high certainty, the given endpoint is left untracked.
  • the certainty of the path may be determined based on relevant factors including topological distance and angles above a threshold. For example, a path with a high topological distance and angles above a threshold may be determined to be a path with high uncertainty.
  • the algorithm may revisit previously rejected endpoints and may select the root to which the path does not create significant overlap with vessels of a type opposite the type of the root, e.g., in the case of attempting to connect to the artery root and the path overlaps with a vein.
  • the method 400 determines whether there are more endpoints to process. If there are more endpoints to process, blocks 408 and 410 are repeated. If there are no more endpoints to process, the method 400 proceeds to block 414. As shown in FIG. 5C, at block 414, the most probable paths 510a-510c leading to the same root are united or merged into a single directed graph or tree.
  • the process of merging paths into a tree may include estimating the radius along the path. In order to estimate the radius more accurately, a monotonic condition may be incorporated into the estimation of the radius.
  • the monotonic condition may include, as an input, a distance boundary, which may be defined as a volume with a distance from the boundaries of the segmentation such that the volume has a maximum value on the center line.
  • in FIG. 26A, the circle 2601 highlights a narrowing of the blood vessel, which reflects an inaccurate estimate of the radius of the blood vessel.
  • FIG. 26B shows a blood vessel model after filtering the estimated radius of the blood vessel with a monotonic condition to account for inaccuracies in the estimate of the radius of the blood vessel.
  • the portion of the blood vessel highlighted by the circle 2602 reflects a more accurate estimate of the radius of the blood vessel.
  • FIG. 27 is a diagram that illustrates the merging of paths into a tree according to one aspect of the disclosure.
  • the merging of paths into a tree may include, for each path, determining whether the tree is empty. If the tree is empty, the current path 2710 is treated as the initial tree. If the tree is not empty, the method starts from a root point, goes along the current path 2710, and calculates the distance 2730 between the center line 2712 of the current path 2710 and the center line 2722 of the tree 2720. If the calculated distance 2730 is greater than a threshold, the current path 2710 is split, resulting in a child path.
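  • A sketch of this merge rule (the array layout and the threshold value are assumptions): walk the new path's centerline against the existing tree and split where the centerline distance exceeds the threshold.

```python
import numpy as np

def merge_path_into_tree(tree_points, path_points, threshold=2.0):
    """Split a path where its centerline diverges from the tree.

    tree_points, path_points: (N, 3) arrays of centerline coordinates,
    path_points ordered from the root outward. Returns the index at
    which the path branches off; everything after it is a child path.
    The threshold is an illustrative assumption.
    """
    if len(tree_points) == 0:
        return 0  # empty tree: the whole path becomes the initial tree
    for i, p in enumerate(path_points):
        # distance from this path point to the nearest tree point
        d = np.linalg.norm(tree_points - p, axis=1).min()
        if d > threshold:
            return i  # path splits here into a child branch
    return len(path_points)  # path fully overlaps the tree
```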
  • overlaps between directed graphs are solved at block 416.
  • the directed graphs may be used to create a 3D model.
  • the 3D model 604 illustrated in FIG. 6 may be created based on the directed graphs.
  • the systems and methods of this disclosure may receive as input volumetric images, e.g., CT images 602, and may output a 3D model, e.g., the 3D model 604.
  • FIGS. 7A and 7B show an example of the encoder 304 of the deep neural network of FIG. 3.
  • CT volume images 702a are input to a recurrent convolutional neural network (RCNN) 720.
  • the RCNN 720 may include two recurrent blocks 722.
  • Each recurrent block 722 includes a 3D convolutional block 724, a group normalization block 726, and a rectified linear unit (ReLU) block 728, the output of which is input to the 3D convolutional block 724.
  • the RCNN 720 outputs a convolutional block 704a.
  • the convolutional block 704a is then input to a squeeze and excite (S&E) block 730.
  • the S&E block 730 improves the convolutional channel interdependencies of the RCNNs 720 with minimal computational cost.
  • the S&E block 730 includes an inception block 731, a global pooling block 732, a first fully connected layer 733, a ReLU block 734, a second fully connected layer 735, a sigmoid activation block 736, and a scale block 737.
  • the global pooling block 732 squeezes each channel of a convolutional block to a single numeric value.
  • the first fully connected layer 733 and the ReLU block 734 add nonlinearity.
  • the second fully connected layer 735 and the sigmoid activation block 736 give each channel a smooth gating function.
  • the scale block 737 weights each feature map of the convolutional block based on the results of the processing by the global pooling block 732, the first fully connected layer 733, the ReLU block 734, the second fully connected layer 735, and the sigmoid activation block 736.
  • the S&E block 730 outputs a convolutional block 706a.
  • the processed convolutional block 706a is then input to a maximum pooling block 708.
  • the maximum pooling block 708 reduces the dimensionality of the convolutional block 706a.
  • the maximum pooling block 708 outputs a convolutional block 702b.
  • the convolutional block 702b is input to an RCNN 720, which outputs a convolutional block 704b.
  • the convolutional block 704b is then input to an S&E block 730, which outputs a convolutional block 706b.
  • the convolutional block 706b is then input to the maximum pooling block 708, which outputs a convolutional block 702c.
  • the convolutional block 702c is input to a recurrent convolutional neural network 720, which outputs a convolutional block 704c.
  • the convolutional block 704c is then input to an S&E block 730, which outputs a convolutional block 706c.
  • the convolutional block 706c is then input to a maximum pooling block 708, which outputs a convolutional block 702d.
  • the convolutional block 702d is input to a recurrent convolutional neural network 720, which outputs a convolutional block 704d.
  • the convolutional block 704d is then input to an S&E block 730, which outputs a convolutional block 706d.
  • the convolutional blocks 706a-706d are then assembled (e.g., concatenated) into an output convolutional block 710.
  • FIGS. 8A-8C show a block diagram that illustrates an example of the decoders 306a, 306b of the deep neural network of FIG. 3.
  • the convolutional block 706d shown in FIG. 7A is input to an upconvert block 820, which outputs a convolutional block 804a.
  • the convolutional block 804a is then concatenated 825 with the convolutional block 702c shown in FIG. 7A, yielding a convolutional block 806a.
  • the convolutional block 806a is input to an RCNN 720, which outputs a convolutional block 808a.
  • the convolutional block 808a is then input to an S&E block 730, which outputs a convolutional block 802a.
  • the convolutional block 802a is input to an upconvert block 820, which outputs a convolutional block 804b.
  • the convolutional block 804b is then concatenated 825 with the convolutional block 702b shown in FIG. 7A, yielding a convolutional block 806b.
  • the convolutional block 806b is input to an RCNN 720, which outputs a convolutional block 808b.
  • the convolutional block 808b is then input to an S&E block 730, which outputs a convolutional block 802b.
  • the convolutional block 802b is input to an upconvert block 820, which outputs a convolutional block 804c.
  • the convolutional block 804c is then concatenated 825 with the convolutional block 706a shown in FIG. 7A, yielding a convolutional block 806c.
  • the convolutional block 806c is input to an RCNN 720, which outputs a convolutional block 808c.
  • the convolutional block 808c is then input to an S&E block 730, which outputs a convolutional block 802c.
  • the convolutional block 802c is input to a convolution block 830a.
  • the first decoder 306a includes the convolution block 830a shown in FIG. 8A and a sigmoid function, which output the segmentation layer 812.
  • the second decoder 306b includes two convolution blocks 830b, 830c, which receive as input the output from the convolution block 830a shown in FIG. 8A.
  • the second decoder 306b extracts the sigmoid portion of the output from the convolutional block 830b, yielding the classification layer 816.
  • the second decoder 306b extracts the ReLU portion of the output from the convolution block 830c, yielding the distance map layer 818.
  • the upconvert block 820 may include an upsample block 822, a 3D convolution block 824, a group normalization block 826, and a ReLU block 828.
  • the deep neural network may be based on a U-Net style architecture with per voxel outputs.
  • the deep neural network may include an input that receives 3D volume images, e.g., CT volume images, and multiple outputs that provide segmentation probability, classification probability, e.g., artery probability, and a topological embedding vector.
  • the topological embedding vector uses blood vessel connectivity information to improve accuracy.
  • the deep neural network may utilize a large patch size, which improves accuracy in the mediastinum region because of the large context and enables the deep neural network to use connectivity information in a large volume.
  • the deep neural network outputs a topological embedding vector for each voxel.
  • the deep neural network is trained to match the Euclidean distance of topological embedding vectors to corresponding topological distances.
  • the deep neural network may be trained to match the Euclidean distance of topological embedding vectors to corresponding topological distances as follows: In one implementation, loss terms may be added in the training of the deep neural network to correlate classification differences with topological distances.
  • the topological loss 322 of the neural network of FIG. 3 may be used to increase the class distance in the feature space.
  • the topological loss 322 may be determined according to the following example of topological loss calculations.
  • the deep neural network may be trained such that D(x1, x2) ≈ T(p1, p2) for each pair of points p1, p2 where D(p1, p2) ≤ a (910), where xi is the network topological layer value at point pi and a may be, for example, about 15 mm.
  • the topological loss associated with each pair of skeleton points p1, p2 in a patch may be computed as an L1 smooth term if the points are in the same class, and as max(0, (K - |x1 - x2|)/K) if they are in different classes.
  • the total topological loss may then be computed as the sum of the per-pair topological losses divided by the total number of pairs n, where K is a constant which, for example, may be equal to or greater than 3 (3 times the maximum value of the topological loss).
  • K may be any constant value suitable for acting as an infinity measure, i.e., a number which is too large to be a topological loss.
  • K may be 4 or 5.
  • Increasing the constant K increases the distance in the feature space between arteries and veins.
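  • Under the formulation above, the topological loss might be sketched in PyTorch as follows (the sampling of skeleton-point pairs and the value of K are assumptions):

```python
import torch
import torch.nn.functional as F

def topological_loss(emb1, emb2, topo_dist, same_class, K=3.0):
    """Per-pair topological loss as described above.

    emb1, emb2: (n, C) embedding vectors at sampled skeleton points.
    topo_dist: (n,) topological distances along the vessel tree.
    same_class: (n,) boolean, True if both points share a class.
    """
    d = torch.norm(emb1 - emb2, dim=1)        # Euclidean embedding distance
    same = F.smooth_l1_loss(d, topo_dist, reduction="none")
    diff = torch.clamp((K - d) / K, min=0.0)  # push different classes >= K apart
    return torch.where(same_class, same, diff).mean()
```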
  • there may be a classification inconsistency in which one portion of a blood vessel is classified as a vein and another portion of the same blood vessel is classified as an artery. For example, as illustrated in FIG. 10, first and second portions 1002, 1004 of a blood vessel 1001 may be classified as a vein, whereas a third portion 1012 of the blood vessel 1001 between the first and second portions 1002, 1004 of the blood vessel 1001 may be classified as an artery.
  • an unsupervised “smooth” loss, such as the smooth L1 loss 332, may be used to address this classification inconsistency.
  • the smooth L1 loss 332 may be determined according to the following example of smooth L1 loss calculations.
  • S_p may be defined as the result of the network segmentation layer for point p
  • M_p may be defined as the result of the network distance map layer for point p
  • C_p may be defined as the result of the network classification layer for point p
  • T(p1, p2) may be defined as the network topological distance between points p1 and p2
  • D(p1, p2) may be defined as the Euclidean distance between points p1 and p2.
  • the threshold values for the conditions above may be other threshold values suitable for obtaining accurate classification results for given volumetric image data.
  • the total smooth L1 loss may then be computed by averaging the per-pair terms over the total number n of pairs of points p1, p2, where MSE is the mean squared error.
  • a vein and an artery may continue beyond an annotated portion 1102 of the vein and an annotated portion 1112 of the artery, respectively, leaving an unannotated portion 1104 of the vein and an unannotated portion 1114 of the artery.
  • the deep neural network may correctly classify some voxels as blood vessels, but this may have a negative impact on the training of the deep neural network.
  • One solution to these issues may be to use a pre-trained segmentation model of peripheral blood vessels. Another solution may be to weight the segmentation dice loss term.
  • the deep neural network 122 may be supervised, which means that the deep neural network 122 is optimized based on a training set of examples.
  • the inputs to the deep neural network 122 are CT images 602 and the outputs from the deep neural network 122 are volumetric data, which include, in order of output, a topological embedding vector 325, a segmentation map 315, a classification map 345, and a distance map 335.
  • ground truth information is used to evaluate the outputs from the deep neural network 122 and update the weights of the deep neural network 122 based on the evaluation.
  • the evaluation uses losses as metrics.
  • the losses may include one or more of the following losses: topological loss, consistency loss, dice loss, cross-entropy loss, and smooth L1 loss.
  • the deep neural network’s quality may depend on the availability of large, annotated data sets.
  • the peripheral information improves accuracy because of the blood vessel connectivity information.
  • the number of branches may increase exponentially.
  • the systems and methods of this disclosure may provide efficient annotation tools to segment and classify blood vessel branches in medical image data sets, e.g., 3D medical image data sets.
  • the annotation tools may be manual and/or semi-automatic tools.
  • the annotation tools may include a pretrained neural network that segments the blood vessels, and a shortest path algorithm that creates blood vessels between two points, e.g., a root and an endpoint, which are manually selected by the user, e.g., a clinician with experience in reading medical images.
  • Each tree model may be decomposed into a set of cylinder-shaped segments. An oblique view is displayed, where the radius is marked accurately. The segment’s cylinder is then added to a tree and displayed to a user. After manually and/or semi-automatically segmenting the blood vessels, a 3D model of the vasculature, e.g., the vasculature of the lungs, may be updated. The annotated 3D medical image data set may then be used to train the neural network of this disclosure to automatically segment and classify other 3D medical image data sets.
  • the accuracy of the neural network may be evaluated by comparing the results of the neural network to the annotated 3D models.
  • the accuracy criteria for the evaluation method may be based on centerline points.
  • a hit may refer to a ground-truth centerline inside the methods’ segmentation.
  • a correct classification may refer to the method assigning the artery or vein label correctly.
  • a total correct may refer to an instance where there is both a hit and a correct classification. There may also be instances in which there is a miss and instances in which there is an incorrect classification.
  • the neural network accuracy was evaluated at different depths of the blood vessel trees.
  • the neural network errors include: a classification error in another CT slice, in which there is a change from artery to vein along the blood vessel; and a segmentation hole in still another CT slice; approximately 91% of the voxels are accurate.
  • the remaining blood vessel analysis may be designed to be robust to these errors so that branches are not cut off from the 3D model.
  • FIG. 13 is a flowchart of a method for automatically detecting roots of arteries.
  • a binary volume with artery segmentation, a distance boundary volume, and a lung mask are received.
  • the distance boundary volume is a volume with a distance from the boundaries of the segmentation such that the volume has a maximum value on the center line.
  • the lung mask is a binary volume that is true on each voxel that is inside the lungs.
  • blood vessels are classified as arteries.
  • the skeleton is calculated from the artery classification.
  • a graph of arteries is created.
  • FIG. 14 shows an example of a graph of arteries 1400.
  • the graph of arteries 1400 may be created using a suitable software library.
  • endpoints are extracted from the graph of arteries 1400. Endpoints may be identified as voxels which have only one neighbor.
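  • A minimal sketch of this endpoint rule on a binary 3D skeleton, assuming 26-connectivity:

```python
import numpy as np
from scipy import ndimage

def skeleton_endpoints(skel):
    """Voxels of a binary 3D skeleton that have exactly one neighbor.

    skel: 3D boolean array. Returns an (N, 3) array of endpoint
    coordinates, per the one-neighbor rule described above.
    """
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0  # exclude the voxel itself from the count
    neighbor_count = ndimage.convolve(skel.astype(int), kernel,
                                      mode="constant", cval=0)
    return np.argwhere(skel & (neighbor_count == 1))
```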
  • the endpoints which are outside of the lung mask are filtered out.
  • the distance boundary is sampled in the coordinates of the endpoints, and, at block 1316, sampled points whose distance-boundary radius is lower than a threshold are filtered out.
  • the endpoint that has the longest path to the nearest bifurcation is selected as a root of the arteries, e.g., the endpoint 1402. In some aspects, the most anterior point of the artery is selected as the root.
  • FIGS. 15A-15C show examples of CT images in which artery roots 1502 are detected.
  • FIG. 16 is a flowchart of a method for automatically detecting roots of veins.
  • the binary volume with artery segmentation, the distance boundary volume, the lung mask, and an artery root that was detected previously are received.
  • the method of FIG. 16 assumes that the maximum number of vein roots for each lung is two.
  • blood vessels are classified as veins.
  • a skeleton is calculated from the blood vessels classified as veins.
  • a graph of veins is created.
  • the graph of veins may be created using a suitable network software library.
  • connected components are extracted from the graph and sorted. Then, for each connected component, blocks 1612-1618 are performed. Blocks 1612-1618 may be performed on connected components in order starting with the largest connected component and ending with the smallest connected component.
  • the method 1600 determines whether a connected component has voxels that are outside of the lung and voxels that are inside the lung. If a connected component has voxels that are outside of the lung and voxels that are inside the lung, a voxel with only two neighbors that has a largest radius from the distance boundary is extracted at block 1614.
  • the lung to which the current candidate to the root belongs is determined.
  • the method 1600 determines whether the number of roots for the determined lung is less than 2. If the number of roots for the determined lung is less than 2, the current root is added at block 1620. Then, at block 1622, the method 1600 determines whether there is another connected component to be processed. If there is another connected component to be processed, the method 1600 returns to block 1612. Otherwise, the method 1600 ends at block 1624.
  • the blood vessel segmentation of the medical images may include “holes.”
  • methods including, for example, the methods illustrated in FIGS. 18A-25C, may be employed to close or fill holes in the segmentation of blood vessels. If the holes are not closed, possible errors include missing blood vessels and incorrect labeling during graph creation. These errors may lead to negative effects on the user and patient.
  • because the 3D model may be used for surgery planning, if the 3D model is missing a vessel, the clinician may not be aware of the missing vessel during surgery and may cut through it, which may lead to significant bleeding.
  • a method of closing holes may include roughly finding a center point 1804 of the heart 1802, as illustrated in FIG. 18A. This may be accomplished, for example, by averaging the positions of all the roots for arteries and veins. As illustrated in FIGS. 18B-18D, the root positions 1822a and 1822b of veins 1812a and 1812b, and the root position 1821 of artery 1811 are averaged to find the center point 1804 of the heart 1802.
  • the method of closing holes may also include roughly finding left and right hilum center points.
  • FIG. 19A illustrates the center point 1902a of the right hilum. For example, this may be performed by finding the intersection points x, y, z 1915 of the lung mask with a line 1912 such that y, z are a predetermined distance (e.g., ~40 mm) 1914 from the heart 1901 and the maximum segmentation voxels are on the x-axis.
  • the method of closing holes may also include finding the artery and vein skeleton points closest to the hilum.
  • Finding the artery and vein skeleton points may include creating a skeleton for the largest components connected to the hilum of each class (e.g., artery and vein) and finding the skeleton point closest to the hilum for each class such that the radius of the closest skeleton point is roughly half the radius of the root.
  • a skeleton 2011 is created for the largest connected components of the artery 1811 and the skeleton point 2021 closest to the hilum for the artery 1811 is found such that the radius of the closest skeleton point is roughly half the radius of the root.
  • a vein skeleton point may be found.
  • the method of closing holes may also include finding candidate points for “holes.” Finding candidate points for “holes” may include, for each large connected component of a class that is not connected to the largest connected component of that class, creating a skeleton and selecting a skeleton point that is closest to the hilum center point if the radius of the skeleton point is greater than a threshold, which may be predetermined. For example, as illustrated in FIG. 20B, a skeleton 2012 is created for a large connected component of the vein 1812 that is not connected to the largest connected component of a vein. Next, the skeleton point 2022, which is closest to the hilum center point and which has a radius greater than a threshold, is selected.
  • the method of closing holes may also include finding an optimal path from a hole to an existing segmentation.
  • Finding an optimal path from a hole to an existing segmentation may include executing a Dijkstra algorithm from a candidate point 2102 to the closest hilum skeleton point of the same class.
  • the Dijkstra algorithm may be performed according to one or more of the following conditions:
  • a Dijkstra algorithm is performed from a candidate point 2102 to the closest hilum skeleton point 2108 of the same class. While there is an existing same-class segmentation, i.e., segmented vein 2104, the algorithm goes over the “white” voxels 2105 because this saves 3x distance, and the algorithm goes over the segmentation of the opposite class, i.e., segmented artery 2106. The algorithm stops at the existing segmented artery 2106.
  • FIG. 21B illustrates an example of an adjacency matrix 2110 for a Dijkstra algorithm showing a shortest path 2115 from a starting candidate point 2112 to an ending point 2118, e.g., a closest hilum skeleton point 2108.
  • the method of closing holes may also include creating a segmentation from the shortest path.
  • Creating a segmentation from the shortest path may include, for each “white” point on the shortest path, estimating radius based on “white” neighborhood voxels and coloring a cone in the volume. For example, as illustrated in FIG. 22, for each “white” point 2206 on the shortest path 2202, the radius 2204 is estimated based on “white” neighborhood voxels and a cone 2208 is drawn or colored in the volume defined by the estimated radius 2204.
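  • A sketch of this coloring step; drawing a ball of the locally estimated radius at each path point is an illustrative reading of "coloring a cone", and the data layout is an assumption:

```python
import numpy as np

def color_cone(volume, path, radii):
    """Fill a cone along a path by drawing, at each path point, a ball
    of the locally estimated radius.

    volume: 3D boolean array updated in place; path: list of (z, y, x)
    points; radii: per-point radius estimates from the "white"
    neighborhood voxels, as described above.
    """
    zz, yy, xx = np.indices(volume.shape)
    for (z, y, x), r in zip(path, radii):
        ball = (zz - z) ** 2 + (yy - y) ** 2 + (xx - x) ** 2 <= r * r
        volume[ball] = True
    return volume
```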
  • the method 2300 of FIG. 23, which is illustrated by FIG. 24, may be employed to close holes in the segmentation of blood vessels.
  • the method 2300 may be separately applied to artery segmentation and vein segmentation.
  • a three-dimensional (3D) image data set of a portion of the body, e.g., the lungs, is acquired or received.
  • the 3D image data set is segmented to identify blood vessels.
  • the method 2300 determines whether a hole is detected in a blood vessel. If a hole is detected in a blood vessel, at block 2308 and as illustrated in step 2402 of FIG. 24, the segmentation, e.g., the segmentation probability map, is filtered with a lower threshold, e.g., a threshold of 0.5, to obtain an extended segmentation, and voxels are added to the blood vessel of the extended segmentation.
  • voxels with low attenuation values, i.e., low Hounsfield values, are then removed from the extended segmentation. For example, a region of air with a low attenuation value may have been added; this region of air is removed.
  • the segmentation is updated yielding an updated segmentation 2409 and the new skeleton 2411 of the updated segmentation 2409 is calculated.
  • the new skeleton 2411 is used because new regions, e.g., new region 2415, are added to the original segmentation 2401 that are not blood vessels.
  • By skeletonization, all irrelevant voxels are removed and a connection through the holes, e.g., hole 2420, is created.
  • the new skeleton 2411 is added to the original segmentation 2401.
  • more holes may be collected. Additional methods may be employed to handle long holes and holes with different class labels at their ends.
  • the segmentation may be extended using a low threshold on the segmentation probability map. As shown in FIG. 25A, this may involve filtering the segmentation probability map with a low threshold at block 2506 and applying a morphological dilation on the segmentation probability map at block 2508 after acquiring a CT image data set of the lungs at block 2502 and segmenting the CT image data set to obtain the segmentation probability map at block 2504. The result of this method is the extended segmentation from low thresholding 2510.
  • the segmentation may use local Hounsfield Units (HU) statistics from the CT volume. This may involve filtering the segmentation probability map with a threshold of, for example, 0.5 to extract the original segmentation, calculating the local HU statistic according to the CT volume, and extending the segmentation by adding voxels which have segmented neighbors with the same HU statistics.
  • FIG. 25B illustrates an example of a method of extending segmentation from HU statistics.
  • a CT image data set of the lungs is acquired and, at block 2514, the CT image data set is segmented to obtain a segmentation probability map.
  • the segmentation probability map is filtered with a threshold to extract the original segmentation.
  • the threshold may be 0.5 or another threshold, e.g., 0.3, 0.4, or 0.6, suitable for extracting the original segmentation.
  • the local statistic of HU values is calculated from the CT volume.
  • a voxel that has, in its neighborhood, segmented voxels with the same local statistic is added to obtain an extended segmentation.
  • a morphology closing is applied to the extended segmentation to obtain an extended segmentation from HU statistics 2524.
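The HU-statistics extension might be sketched as follows. The local statistic is taken here to be a local mean HU over a 3x3x3 neighborhood, and the tolerance `hu_tol` is an assumed parameter; the disclosure does not fix either choice.

```python
import numpy as np
from scipy import ndimage

def extend_by_hu_statistics(seg_prob, ct_hu, thresh=0.5, hu_tol=40.0):
    """Sketch: a non-segmented voxel is added when it has a segmented
    neighbor and its HU value matches the local mean HU of the
    segmentation within a tolerance."""
    original = seg_prob >= thresh
    # Local mean HU of segmented voxels over a small neighborhood.
    k = np.ones((3, 3, 3))
    seg_sum = ndimage.convolve(np.where(original, ct_hu, 0.0), k)
    seg_cnt = ndimage.convolve(original.astype(float), k)
    local_mean = np.divide(seg_sum, seg_cnt, out=np.zeros_like(seg_sum),
                           where=seg_cnt > 0)
    candidate = ~original & (seg_cnt > 0)     # has segmented neighbors
    extended = original | (candidate & (np.abs(ct_hu - local_mean) <= hu_tol))
    # Morphological closing smooths the extended segmentation.
    return ndimage.binary_closing(extended)
```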
  • the extended segmentation from low thresholding 2510 and the extended segmentation from HU statistics 2524 may be combined. Then, the resulting combination may be skeletonized so that connections are created without adding voxels that are not blood vessel voxels.
  • FIG. 25C illustrates an example of a method of combining both extended segmentations to ultimately add a new skeleton to the original segmentation.
  • the extended segmentation from low thresholding 2510 and the extended segmentation from HU statistics 2524 are combined.
  • voxels with low intensity are removed from the combined extended segmentations.
  • the combined extended segmentations are skeletonized. Then, at block 2538, a new skeleton is added to the original segmentation.
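This combination step can be expressed compactly. The sketch assumes `ext_low` and `ext_hu` are the extended segmentations from low thresholding and from HU statistics (e.g., produced along the lines of the sketches above), and the low-intensity cutoff `min_hu` is an assumption.

```python
from skimage.morphology import skeletonize

def combine_and_add_skeleton(ext_low, ext_hu, ct_hu, original, min_hu=-900):
    """Sketch of FIG. 25C: combine both extended segmentations, remove
    low-intensity voxels, skeletonize, and add the new skeleton to the
    original segmentation."""
    combined = ext_low | ext_hu               # combine both extensions
    combined &= ct_hu > min_hu                # remove low-intensity voxels
    new_skeleton = skeletonize(combined) > 0  # connect without fattening
    return original | new_skeleton            # add skeleton to the original
```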
  • the methods of this disclosure detect coordinates of peripheral points or endpoints 2811, 2812 in the blood vessel segmentation 2801, 2802, as illustrated in the blood vessel map 2800 of FIG. 28.
  • a first batch of endpoints is collected using region growing on the segmentation skeleton or centerline.
  • a second batch of endpoints is collected after initial trees are reconstructed, using connected component analysis on the centerline pieces left untracked by the initial trees.
  • the endpoint detection method may include generating a skeleton 2901 or centerline from the blood vessel segmentation, locating the points on the skeleton closest to the detected roots 2905, and performing region growing from each detected root 2905.
  • Region growing may include iteratively going over the skeleton voxels not yet tracked. If the region growing reaches a voxel without any neighbors that have not yet been visited, the voxel is marked as an endpoint 2910.
  • the region growing may be terminated before reaching the final endpoint based on conditions such as minimal vessel radius estimation or based on some “vesselness” score such as the Frangi filter.
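A sketch of this first-batch endpoint detection, using breadth-first region growing over 26-connected skeleton voxels; a voxel with no unvisited skeleton neighbors is marked as an endpoint. The early-termination conditions (minimal radius estimation, vesselness score) are omitted for brevity.

```python
import numpy as np
from collections import deque

def detect_endpoints(skeleton, roots):
    """`skeleton` is a binary 3D array; `roots` are (z, y, x) seeds on it."""
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    visited = np.zeros_like(skeleton, dtype=bool)
    endpoints = []
    queue = deque(roots)
    for r in roots:
        visited[r] = True
    while queue:
        z, y, x = queue.popleft()
        grew = False
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if (0 <= n[0] < skeleton.shape[0] and 0 <= n[1] < skeleton.shape[1]
                    and 0 <= n[2] < skeleton.shape[2]
                    and skeleton[n] and not visited[n]):
                visited[n] = True
                queue.append(n)
                grew = True
        if not grew:
            endpoints.append((z, y, x))   # no unvisited neighbors: endpoint
    return endpoints
```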
  • the endpoint detection method may include generating a segmentation volume of untracked vessels by subtracting the segmentation generated from the initial trees, i.e., trees generated from the first batch of endpoints, from the original segmentation.
  • the endpoint detection method may further include finding skeleton voxels that are inside the untracked vessels, e.g., the skeleton voxels 3005 shown in the 3D graph of FIG. 30.
  • the neural network may misclassify a vein 3102, which is shown as being manually annotated by a clinician, as an artery 3111.
  • the neural network may misclassify an artery 3101, which is shown as being manually annotated by a clinician, as a vein 3112.
  • the neural network may falsely classify a section of a vessel as an artery 3111 (which may be referred to as discontinuous vessel classification).
  • the methods of the disclosure find the optimal paths connecting each endpoint at the lung periphery with the endpoint’s compatible root in the heart.
  • the optimal path is the path that traverses the true vessel path to the heart.
  • Generating an optimal path may include a preprocessing stage.
  • the preprocessing stage may include building a graph in which the voxels segmented as a blood vessel are the vertices of the graph and the edges are based on a neighborhood definition between voxels.
  • the preprocessing stage may also include weighting the edges as a function of estimated segmentation probabilities, vessel probabilities (e.g., artery or vein probabilities), and/or distance to a segmentation centerline at the connected vertices, which are voxels.
  • FIG. 34 illustrates two neighboring voxels, voxel 3401 and voxel 3402, and the estimated values of the voxels.
  • voxel 3401 has an estimated segmentation probability of 0.8, an artery probability of 0.2, and a distance of 1.5mm to a segmentation centerline; and
  • voxel 3402 has an estimated segmentation probability of 0.9, an artery probability of 0.1, and a distance of 1.0mm to a segmentation centerline.
  • the edge connecting from voxel 3401 to voxel 3402 may be weighted according to the segmentation and artery probability values of the voxels.
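A possible weighting function, using the FIG. 34 values as a worked example. The specific combination of terms is an assumption; the disclosure states only that the weights are a function of these quantities.

```python
def edge_weight(seg_prob, artery_prob, dist_to_centerline, to_artery_root=True):
    """Sketch of a per-edge weight computed from the neighboring voxel's
    values. Lower weights make a voxel more attractive to the shortest
    path. Toward an artery root the class penalty is the vein probability
    (1 - artery probability); toward a vein root it is the artery probability."""
    class_prob = artery_prob if to_artery_root else 1.0 - artery_prob
    return (1.0 - seg_prob) + (1.0 - class_prob) + dist_to_centerline

# FIG. 34 example: the edge into voxel 3402 uses that voxel's values.
w = edge_weight(seg_prob=0.9, artery_prob=0.1, dist_to_centerline=1.0,
                to_artery_root=False)
print(w)  # (1 - 0.9) + (1 - 0.9) + 1.0 = 1.2
```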
  • the optimal path may be a shortest path generated based on possible scores.
  • the edges may be weighted based on one or more of: the segmentation probability of the neighboring voxel, the distance to the centerline of the neighboring voxel, the classification probability, and the distance between the center of the current voxel and the center of the neighboring voxel.
  • the classification probability is used differently depending on whether the shortest path algorithm is started from an artery root or a vein root. This means that two sets of weights are generated: one set for connecting endpoints to the artery roots and another set for connecting endpoints to the vein roots.
  • a weight may be the vein probability of the neighboring voxel, i.e., 1 minus the artery probability.
  • a weight may be the artery probability of the neighboring voxel.
  • the distance to the centerline is a volume in which each voxel stores its distance from the centerline of the segmentation, such that the values are maximal at the boundaries of the segmentation.
  • the weight based on the distance to the centerline may make the shortest path go through the center of the blood vessel, which may result in improved estimation of the radius of the blood vessel and improved merging of different paths.
  • the weight based on the distance to the centerline may also help the path stay on the same vessel while passing through the intersection of an artery and a vein.
  • the results of the shortest path algorithm give instructions on how to go from each endpoint to each root.
  • Each path may be given one or more scores. For example, a shortest path (SP) score may be the sum of the weights of all the edges traversed by the path.
  • Each path may be given another score based on the topological distance of the path.
  • the neural network estimates a topological embedding for each point along the path.
  • the topological embedding may be used to compute the distance (e.g., the L2 norm) between the embedding of each point and the embedding of a point X steps further away along the path.
  • each path may be given a topological score that is the maximum topological distance value computed along the path.
  • the SP score and the topological score may be combined to evaluate each path.
  • the SP score and the topological score may be combined by, for example, multiplying the SP score and the topological score.
  • To select a root for each endpoint, the combined scores of the paths connecting the endpoint to all possible roots (e.g., artery roots or vein roots relevant for the endpoint) are compared, and the path with the lowest score is selected.
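The scoring and root selection might look like the following sketch, where `embeddings` are the per-point topological embeddings produced by the neural network, the step size `step` stands in for the X of the description, and the multiplicative combination follows the example given above.

```python
import numpy as np

def topological_score(embeddings, step=10):
    """Max L2 distance between each point's embedding and the embedding
    `step` points further along the path. `step` is illustrative."""
    e = np.asarray(embeddings)              # shape: (num_points, embed_dim)
    if len(e) <= step:
        return 0.0
    return float(np.linalg.norm(e[step:] - e[:-step], axis=1).max())

def select_root(paths):
    """`paths` maps root id -> (sp_score, embeddings); pick the root whose
    path has the lowest combined (SP * topological) score."""
    combined = {root: sp * topological_score(emb)
                for root, (sp, emb) in paths.items()}
    return min(combined, key=combined.get)
```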
  • FIGS. 35 and 36A-36D show an example of a method of creating a blood vessel graph or tree structure, which may be used to generate a 3D mesh model. These steps may be implemented in executable code such as software and firmware or in hardware utilizing the components and systems of FIG. 42.
  • the roots or starting points 3611, 3621 and the endpoints 3612, 3613, 3622 of the segmented blood vessels 3610, 3620 are detected, as illustrated in FIG. 36A.
  • optimal paths 3631-3633 from possible starting points 3611, 3621 to an endpoint are calculated, as illustrated in FIG. 36B.
  • the best starting point 3611 is selected from the possible starting points, and, at block 3512, the class of the optimal path 3631 from the best starting point 3611 to the endpoint 3622 is selected, as shown in FIG. 36C.
  • the method 3500 determines whether there is another endpoint to process. If there are other endpoints to process, the processes of blocks 3508-3512 are repeated for the other endpoints, e.g., endpoints 3612, 3613. If there are no other endpoints to process, the paths of the same starting point are merged into a directed graph or tree structure at block 3516, as illustrated in FIG. 36D. Then, at block 3518, the method 3500 ends.
  • vessels may be classified and tracked back to the heart independently. However, there may be overlaps between artery and vein branches. Thus, overlaps between blood vessel trees are solved at block 416 of the method 400 of FIG. 4. The overlaps may be solved in a post process. The post process may include detecting overlaps, classifying each overlap to a specific overlap type, and correcting the vessel classification, if needed. Solving overlaps between directed graphs or trees may be performed according to the method 3700 of FIG. 37.
  • an overlap volume map 3810 of FIG. 38B is generated from initial trees 3800 of FIG. 38A.
  • a connected component analysis is performed on the overlap volume map 3810 in which each connected component 3822, 3832 is given a unique ID at block 3704.
  • the overlap volume map 3810 is sampled along a path 3806 from the endpoint 3804 to the root 3808.
  • statistics about the paths which go through each overlap are collected.
  • the statistics may include one or more of the following: average vessel radius; overlap length (which may be normalized by the average radius); the number of paths having an endpoint near the overlap; the number of paths going through the overlap, which may be separated into subsets of paths classified as arteries and paths classified as veins; and, for each subset, collected scores, which may include one or more of a topological embedding score, a shortest path score, and a sample of the network classification.
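The overlap detection and statistics collection could be sketched as follows, assuming boolean artery and vein segmentations, paths given as lists of voxel indices, and a precomputed per-voxel radius map; the statistics gathered here are a subset of those listed above.

```python
import numpy as np
from scipy import ndimage

def overlap_statistics(artery_seg, vein_seg, paths, radius_map):
    """Build the overlap volume map, give each connected overlap a unique
    ID, and collect per-overlap statistics by sampling each path."""
    overlap = artery_seg & vein_seg              # overlap volume map
    labels, num = ndimage.label(overlap)         # unique ID per component
    stats = {i: {"n_paths": 0, "radii": []} for i in range(1, num + 1)}
    for path in paths:                           # path: list of (z, y, x)
        seen = set()
        for v in path:                           # sample map along the path
            lid = labels[v]
            if lid > 0:
                stats[lid]["radii"].append(radius_map[v])
                seen.add(lid)
        for lid in seen:
            stats[lid]["n_paths"] += 1
    for s in stats.values():
        s["avg_radius"] = float(np.mean(s["radii"])) if s["radii"] else 0.0
    return stats
```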
  • FIGS. 39A-41B are diagrams that illustrate examples of blood vessel overlap types.
  • the overlap types may include a valid intersection (at points 3911, 3912) between an actual artery 3901 and an actual vein 3902.
  • the overlap types may include erroneous classification of endpoints (for example, at points 4001 and 4002).
  • the overlap types may include false segmentation, which may be caused by, for example, lesions, atelectasis, or fissures adjacent to an overlap.
  • FIGS. 41A and 41B illustrate portions 4101 and 4102, respectively, of false segmentation.
  • if the overlap is a valid intersection, the method 3700 ends at block 3712 and no correction is performed. If the overlap type is an erroneous classification, the method 3700 determines whether the overlap is a single leaf from one classification type at block 3714. If the overlap is a single leaf from one classification type, the classification of the overlap is changed to the type with no leaf at block 3716.
  • at block 3718, the method 3700 determines whether the overlap is a long overlap, as illustrated, for example, by the overlap 3824 of FIG. 38D.
  • the method 3700 determines whether there are leaves from both types at block 3722 and determines whether the radius of the blood vessel is less than a threshold at block 3724. If there are leaves from both types and the radius of the blood vessel is less than a threshold, paths before the overlap are trimmed at block 3726. Otherwise, if there are no leaves from both types or the radius of the blood vessel is not less than a threshold, the method 3700 ends at block 3712 without performing correction.
  • the overlaps between blood vessel trees may be solved by using a graph cut algorithm.
  • the graph cut algorithm assumes there are no connections between a root of an artery tree and a root of a vein tree.
  • the graph cut algorithm separates the trees that are overlapping according to scores or weights placed on the edges of the segmented arteries and veins.
  • the graph cut algorithm may then cut the trees according to the minimum cut of the weights such that there are no paths that connect the segmented arteries to the segmented veins.
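A sketch of the graph cut separation using networkx's s-t minimum cut. The graph construction, the meaning of the edge weights, and the use of a single artery root and a single vein root as the cut terminals are assumptions for illustration.

```python
import networkx as nx

def split_trees_by_min_cut(graph, artery_root, vein_root):
    """Separate overlapping artery/vein trees with an s-t minimum cut.
    `graph` is an nx.Graph whose edge attribute 'weight' scores how
    strongly two neighboring voxels belong to the same vessel."""
    # networkx's flow-based minimum_cut reads the 'capacity' attribute.
    for u, v, data in graph.edges(data=True):
        data["capacity"] = data["weight"]
    cut_value, (artery_side, vein_side) = nx.minimum_cut(
        graph, artery_root, vein_root)
    return artery_side, vein_side   # voxel sets with no connecting path
```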
  • the overlaps between blood vessel trees may be solved by an overlap split algorithm that uses the topological embedding map values.
  • Overlap path segments, e.g., short overlap path segments, may be identified, and a score from the topological embedding map values may be associated with the identified overlap path segments.
  • the overlap split algorithm may determine whether an overlap path segment belongs to an artery or a vein and split the overlap path segment accordingly.
  • the overlap split algorithm may consider various metrics associated with an overlap path segment including the length of the overlap path and scores from the topological embedding map values. For example, the overlap split algorithm may analyze the change in topological embedding map values across an edge of an overlap path segment. The overlap split algorithm may determine that two portions of the overlap path segment belong to the same blood vessel if the change in topological embedding map values across the edge of the overlap path segment is small. On the other hand, the overlap split algorithm may determine that two portions of the overlap path segment belong to different blood vessels if the change in topological embedding map values across the edge of the overlap path segment is large.
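A sketch of this embedding-based split: walk the overlap path segment and cut wherever the change in topological embedding between consecutive points exceeds a threshold. The threshold `jump_thresh` is an assumed parameter.

```python
import numpy as np

def split_overlap_segment(points, embeddings, jump_thresh=1.0):
    """Split `points` (an overlap path segment) into pieces wherever the
    change in embedding across an edge is large; a small change keeps
    consecutive points in the same blood vessel."""
    e = np.asarray(embeddings)                      # (num_points, embed_dim)
    jumps = np.linalg.norm(np.diff(e, axis=0), axis=1)
    pieces, start = [], 0
    for i, jump in enumerate(jumps):
        if jump > jump_thresh:                      # large change: new vessel
            pieces.append(points[start:i + 1])
            start = i + 1
    pieces.append(points[start:])
    return pieces
```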
  • in the case of treating lung cancer by performing endobronchial ablation or lung surgery, a physician must understand the patient’s anatomy, including the blood vessels in the vicinity of the lesion or leading to the lesion. For example, when performing a lobectomy, a surgeon is interested in blood vessels that enter and leave the specific lobe. Physicians may look at a CT scan prior to the therapeutic procedure and use the CT scan to visualize the vasculature structures and anatomical variations in order to plan the therapeutic procedure.
  • the systems and methods of the disclosure automatically generate a patient-specific 3D model of the vasculature of a portion of a patient’s body.
  • the patient-specific 3D anatomical model of the vasculature enables a clinician to select the best surgical approach and to prevent complications.
  • the patient-specific 3D anatomical model of the vasculature may be used for procedure planning, intraoperative safety, and training of clinicians, such as surgeons.
  • a 3D mesh is generated based on the blood vessel graph or tree structure.
  • a 3D mesh is the structural build of a 3D model consisting of polygons. 3D meshes use reference points, which may be voxels identified in the graph, to define shapes with height, width, and depth. A variety of different methods and algorithms may be used for creating a 3D mesh, including, for example, the marching cubes algorithm.
  • the result of the 3D mesh generation is a 3D model.
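A sketch of mesh generation from a voxelized tree using the marching cubes implementation in scikit-image; the `voxel_spacing` parameter, which maps voxel indices to physical units, is an assumed input.

```python
import numpy as np
from skimage.measure import marching_cubes

def mesh_from_tree_volume(tree_volume, voxel_spacing=(1.0, 1.0, 1.0)):
    """Turn a voxelized blood vessel tree (binary 3D array) into a
    triangle mesh with the marching cubes algorithm."""
    verts, faces, normals, _ = marching_cubes(
        tree_volume.astype(np.float32), level=0.5, spacing=voxel_spacing)
    return verts, faces, normals   # vertices in physical coordinates
```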
  • FIG. 6 is a diagram that illustrates a full pulmonary blood vessel model 604 that may be generated from CT images 602 of the lungs according to the systems and methods of the disclosure. In one example, the accuracy of the 3D model may be 96.5% in the central region and 95.5% in the lobe+5cm region.
  • FIG. 42 is a schematic diagram of a system 4200 configured for use with the methods of this disclosure.
  • System 4200 may include a workstation 4201.
  • workstation 4201 may be coupled with an imaging device 4215 such as a CT scanner or an MRI, directly or indirectly, e.g., by wireless communication.
  • Workstation 4201 may include a memory 4202, a processor 4204, a display 4206 and an input device 4210.
  • Processor 4204 may include one or more hardware processors.
  • Workstation 4201 may optionally include an output module 4212 and a network interface 4208.
  • Memory 4202 may store an application 4218 and image data 4214.
  • Application 4218 may include instructions executable by processor 4204 for executing the methods of this disclosure.
  • Application 4218 may further include a user interface 4216.
  • Image data 4214 may include image data sets such as CT image data sets and others useable herein.
  • Processor 4204 may be coupled with memory 4202, display 4206, input device 4210, output module 4212, network interface 4208 and imaging device 4215, e.g., a CT imaging device.
  • Workstation 4201 may be a stationary computing device, such as a personal computer, or a portable computing device such as a tablet computer. Workstation 4201 may embed a plurality of computer devices.
  • Memory 4202 may include any non-transitory computer-readable storage media for storing data and/or software including instructions that are executable by processor 4204 and which control the operation of workstation 4201 and, in some aspects, may also control the operation of imaging device 4215.
  • memory 4202 may include one or more storage devices such as solid-state storage devices, e.g., flash memory chips.
  • memory 4202 may include one or more mass storage devices connected to processor 4204 through a mass storage controller (not shown) and a communications bus (not shown).
  • although the description refers to CT image data, i.e., a series of slice images that make up a 3D volume, the disclosure is not so limited and may be implemented using a variety of imaging techniques, including magnetic resonance imaging (MRI), fluoroscopy, X-ray, ultrasound, positron emission tomography (PET), and other imaging techniques that generate 3D image volumes, without departing from the scope of the disclosure.
  • a variety of different algorithms may be employed to segment the CT image data set including connected component, region growing, thresholding, clustering, watershed segmentation, edge detection, and others.
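As an illustration of two of the listed alternatives, thresholding followed by connected component filtering can be sketched as follows; the HU band and the minimum component size are assumptions.

```python
import numpy as np
from scipy import ndimage

def threshold_segmentation(ct_hu, min_hu=-200, max_hu=600, min_voxels=50):
    """Threshold the CT volume to a rough intensity band, then keep only
    connected components above a minimum size."""
    mask = (ct_hu >= min_hu) & (ct_hu <= max_hu)
    labels, num = ndimage.label(mask)                 # connected components
    sizes = ndimage.sum(mask, labels, index=range(1, num + 1))
    keep_ids = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.isin(labels, keep_ids)                  # filtered binary mask
```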
  • the methods and systems described herein may be embodied on one or more applications operable on a computer system for a variety of diagnostic and therapeutic purposes. As an initial matter, these systems and methods may be embodied on one or more educational or teaching applications. Further the methods and systems may be incorporated into a procedure planning system where structures, blood vessels, and other features found in the CT image data set are identified and a surgical or interventional path is planned to enable biopsy or therapy to be delivered at a desired location. Still further, these methods may be employed to model blood flow paths following surgery to ensure that tissues that are not to be resected or removed will still be sufficiently supplied with blood following the procedure.
  • computer-readable media can be any available media that can be accessed by the processor. That is, computer readable storage media may include non-transitory, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
  • computer-readable storage media may include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD, Blu-Ray or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information, and which may be accessed by the workstation.
  • the application may, when executed by the processor, cause the display to present a user interface.
  • the user interface may be configured to present to the user a variety of images and models as described herein.
  • the user interface may be further configured to display and mark aspects of the images and 3D models in different colors depending on their purpose, function, importance, etc.
  • the network interface may be configured to connect to a network such as a local area network (LAN) consisting of a wired network and/or a wireless network, a wide area network (WAN), a wireless mobile network, a Bluetooth network, and/or the Internet.
  • the network interface may be used to connect between the workstation and the imaging device.
  • the network interface may be also used to receive the image data.
  • the input device may be any device by which a user may interact with the workstation, such as, for example, a mouse, keyboard, foot pedal, touch screen, and/or voice interface.
  • the output module may include any connectivity port or bus, such as, for example, parallel ports, serial ports, universal serial busses (USB), or any other similar connectivity port known to those skilled in the art.
  • the systems and methods of the disclosure can form part of a platform of surgical planning applications for other organs or portions of the body, including surgery, e.g., minimally-invasive cancer surgery, in the liver, the stomach, the intestines, the colon, the rectum, the prostate, the brain, the neck, the upper body, or the lower body; kidney transplant surgery; and vascular bypass and aneurysm repair. Such a platform requires using artificial intelligence (AI) to segment and classify many anatomical parts.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

Disclosed are systems and methods of image processing, including a processor in communication with a display and a computer-readable recording medium having instructions that are executed by the processor to read a three-dimensional (3D) image data set from the computer-readable recording medium and automatically generate a blood vessel tree structure from patient images of the image data set using a neural network. Manually and/or semi-automatically generated 3D models of blood vessels are used to train the neural network. The systems and methods involve segmenting and classifying blood vessels in the 3D image data set using the trained neural network, closing holes, detecting roots and endpoints in the segmentation, finding the shortest paths between the roots and the endpoints, selecting the most probable paths, combining the most probable paths into directed graphs, solving overlaps between the directed graphs, and creating 3D models of blood vessels from the directed graphs.
EP22710859.4A 2021-03-26 2022-02-25 Systèmes et procédés d'extraction automatique de vaisseaux sanguins Pending EP4315237A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163166244P 2021-03-26 2021-03-26
US202163224786P 2021-07-22 2021-07-22
US202163251616P 2021-10-02 2021-10-02
PCT/US2022/018004 WO2022203814A1 (fr) 2021-03-26 2022-02-25 Systèmes et procédés d'extraction automatique de vaisseaux sanguins

Publications (1)

Publication Number Publication Date
EP4315237A1 2024-02-07

Family

ID=80780966

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22710859.4A Pending EP4315237A1 (fr) 2021-03-26 2022-02-25 Systèmes et procédés d'extraction automatique de vaisseaux sanguins

Country Status (2)

Country Link
EP (1) EP4315237A1 (fr)
WO (1) WO2022203814A1 (fr)

Also Published As

Publication number Publication date
WO2022203814A1 (fr) 2022-09-29

Similar Documents

Publication Publication Date Title
US10079071B1 (en) Method and system for whole body bone removal and vascular visualization in medical image data
US10115039B2 (en) Method and system for machine learning based classification of vascular branches
Aykac et al. Segmentation and analysis of the human airway tree from three-dimensional X-ray CT images
Li et al. Optimal surface segmentation in volumetric images-a graph-theoretic approach
US7822461B2 (en) System and method for endoscopic path planning
US9679389B2 (en) Method and system for blood vessel segmentation and classification
US9129417B2 (en) Method and system for coronary artery centerline extraction
US8150113B2 (en) Method for lung lesion location identification
US9471989B2 (en) Vascular anatomy modeling derived from 3-dimensional medical image processing
Zhou et al. Automatic segmentation and recognition of anatomical lung structures from high-resolution chest CT images
JP4914517B2 (ja) 構造物検出装置および方法ならびにプログラム
US20070092864A1 (en) Treatment planning methods, devices and systems
Zheng et al. Multi-part modeling and segmentation of left atrium in C-arm CT for image-guided ablation of atrial fibrillation
Beichel et al. Liver segment approximation in CT data for surgical resection planning
KR20190084380A (ko) 2차원 x-선 조영영상의 혈관 구조 추출 방법, 이를 수행하기 위한 기록매체 및 장치
US8050470B2 (en) Branch extension method for airway segmentation
Alirr et al. Survey on liver tumour resection planning system: steps, techniques, and parameters
CN115908297A (zh) 基于拓扑知识的医学影像中血管分割建模方法
Wang et al. Naviairway: a bronchiole-sensitive deep learning-based airway segmentation pipeline for planning of navigation bronchoscopy
Wang et al. Airway segmentation for low-contrast CT images from combined PET/CT scanners based on airway modelling and seed prediction
Ukil et al. Automatic lung lobe segmentation in X-ray CT images by 3D watershed transform using anatomic information from the segmented airway tree
EP4315237A1 (fr) Systèmes et procédés d'extraction automatique de vaisseaux sanguins
CN117083631A (zh) 用于自动血管提取的系统和方法
US11380060B2 (en) System and method for linking a segmentation graph to volumetric data
Novikov et al. Automated anatomy-based tracking of systemic arteries in arbitrary field-of-view CTA scans

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20231023

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR