US20220415510A1 - Method and system for disease quantification modeling of anatomical tree structure - Google Patents
- Publication number
- US20220415510A1 (application US17/894,363 / US202217894363A)
- Authority
- US
- United States
- Prior art keywords
- graph
- nodes
- centerline
- neural network
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G06N3/0445—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G06N3/0454—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30021—Catheter; Guide wire
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Definitions
- the disclosure relates to medical image processing and analysis and considers the learning problem of disease quantification for anatomical tree structures (e.g., vessels, airway trees or the like), using labeled data (such as ground truth values) available for training.
- Fractional Flow Reserve (FFR) is a reliable index for the assessment of cardiac ischemia.
- FFR can be measured by a pressure wire. Pressure wire measurement is invasive, and because of that invasiveness only one or a few values are typically measured in the whole tree. Attempts have been made to estimate FFR using learning-based methods. Such learning-based FFR estimation is fundamentally a low-data problem, since ground truth measurements are available at only one or a few locations.
- FIG. 1 shows several scenarios with ground-truth FFR values: one point ( FIG. 1 A ), several isolated points ( FIG. 1 B ), or values along one segment ( FIG. 1 C ).
- the present disclosure is provided to, among other things, overcome the drawbacks of conventional methods for disease quantification modeling of an anatomical tree structure with a learning network. Instead of using simulated FFR as the ground truth for training the FFR model, a goal of certain embodiments of the present disclosure is to train the FFR model directly with the measured invasive FFRs.
- the measured invasive FFRs are the most accurate values to use as ground truth for training the model, compared with values calculated by algorithms.
- a computer implemented method for disease quantification modeling of an anatomical tree structure may include the following steps, performed for each training image to carry out the corresponding training/learning.
- the method may include obtaining a centerline of an anatomical tree structure from the training image.
- the method may also include generating, by a processor, a graph neural network including a plurality of nodes based on a graph. Each node of the graph neural network may correspond to a centerline point and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter.
- the method may include obtaining labeled data of one or more nodes, the number of which may be less than a total number of the nodes in the graph neural network. Still further, the method may include training, by the processor, the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
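The graph generation step above can be sketched as follows. This is a minimal illustration under assumed data structures (integer node indices and an edge list following the centerline); it is not the disclosure's actual implementation.

```python
import numpy as np

def centerline_to_adjacency(n_points, edges):
    """Build a symmetric adjacency matrix from centerline edges.

    n_points: number of sampled centerline points (graph nodes).
    edges: list of (parent, child) index pairs following the centerline.
    """
    A = np.zeros((n_points, n_points))
    for parent, child in edges:
        A[parent, child] = 1.0  # directed edge, e.g., along flow direction
        A[child, parent] = 1.0  # treat as undirected by adding the reverse
    return A

# Toy bifurcating tree: 0 -> 1 -> 2, with a branch 1 -> 3.
A = centerline_to_adjacency(4, [(0, 1), (1, 2), (1, 3)])
```

Directed edges could instead be kept one-way (e.g., only along flow direction) by omitting the reverse entry.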
- a system for disease quantification modeling of an anatomical tree structure may include an interface and a processor.
- the interface may be configured to receive training images containing the anatomical tree structure.
- the processor may be configured to perform the following steps for each training image.
- the processor may be configured to obtain a centerline of the anatomical tree structure.
- the processor may be further configured to generate a graph neural network including a plurality of nodes based on a graph, wherein each node may correspond to a centerline point and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter.
- the processor may be further configured to obtain labeled data of one or more nodes, the number of which may be less than a total number of the nodes in the graph neural network. Moreover, the processor is configured to train the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
- a non-transitory computer readable medium storing instructions that, when executed by a processor, may perform a method for disease quantification modeling of an anatomical tree structure.
- the method may include obtaining a centerline of an anatomical tree structure.
- the method may further include generating a graph neural network including a plurality of nodes based on a graph, wherein each node may correspond to a centerline point and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter.
- the method may further include obtaining labeled data of one or more nodes (e.g., from the training image), the number of which may be less than a total number of the nodes in the graph neural network. Moreover, the method may include training the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
- the method of certain embodiments of the disclosure may propagate information from labeled data points towards the unlabeled data points.
- Certain embodiments of the disclosure may use both the implicit representations (such as feature embedding) and explicit relationships (such as graph) for learning disease models of the whole anatomical tree structure.
- the disclosed method builds a graph where each node corresponds to a point on a centerline of the tree structure. These nodes may be linked via the centerline.
- the input to each node of the graph may be a vector representation (also referred to as feature embedding) of each node.
- the disclosed method then generates and uses a dynamic graph neural network to transfer information (message passing) between the nodes with the ground truth values (for example, invasive FFR value) and the nodes without ground truth values.
- Certain of the disclosed methods and systems have at least the following benefits. Firstly, the disease prediction task is formulated as an interpolation problem on a graph under deep learning architectures that may rely on supervision from only a few ground truth values; nodes are associated with the points on the centerline of the tree, and edges are defined by the centerline.
- anatomical tree structures have variations, but the disclosed graph neural networks can deal with such variations using dynamic graphs for individuals.
- the disclosed dynamic graph neural network may learn how to propagate label information from labeled data points towards the unlabeled data points during the optimization process, so as to obtain a well-trained graph neural network regardless of the deficiency of labeled data points.
- the disclosed system not only considers the points of the centerline independently but also embeds graph structure among all centerline points.
- the disclosed framework can seamlessly integrate the information from the centerline points in the whole tree to make an accurate prediction with only limited labeled data.
- with spatially close neighbor nodes being considered during learning of the graph neural network, global considerations among nodes may be integrated into the training in such a way that relations between one node and surrounding nodes are considered together with hidden information.
- FIG. 1 A is a schematic diagram illustrating scenarios with only one centerline point having ground-truth FFR value
- FIG. 1 B is a schematic diagram illustrating scenarios with several centerline points having ground-truth FFR values
- FIG. 1 C is a schematic diagram illustrating scenarios with multiple centerline points along one segment or path having ground-truth FFR values
- FIG. 2 illustrates an overall framework for FFR (as an example of the disease quantification parameter) prediction with dynamic graph neural network according to an embodiment of the present disclosure
- FIG. 3 illustrates a process of creating graph representation from Computed Tomography (CT) image including stage(a)-stage(d) according to an embodiment of the present disclosure
- FIG. 4 A is a schematic diagram illustrating a graph convolution type of dynamic graph neural network according to an embodiment of the present disclosure
- FIG. 4 B is a schematic diagram illustrating a gate type of dynamic graph neural network according to an embodiment of the present disclosure
- FIG. 5 illustrates a flowchart of an example method for disease quantification modeling according to an embodiment of the present disclosure
- FIG. 6 depicts a schematic configuration diagram of a disease quantification system according to an embodiment of the present disclosure.
- FIG. 7 depicts a block diagram illustrating an exemplary disease quantification system according to an embodiment of the present disclosure.
- anatomical tree structure may refer to vessels, airways, and the like with tree structure.
- the technical term “medical image” may refer to a complete image or an image patch cropped from a complete image in any form including the forms of two dimensional (2D), 2D plus depth, or three dimensional (3D).
- FIGS. 1 A, 1 B, and 1 C show several such scenarios.
- in FIG. 1 A, only one invasive FFR value, equal to 0.7, is measured in addition to an inlet FFR value equal to 1.
- in FIG. 1 B, two invasive FFR values are measured in addition to an inlet FFR value equal to 1.
- FIG. 1 C illustrates a scenario where multiple values are measured along a segment or path, obtained from pull-back curves using an (invasive) pressure wire.
- the present disclosure proposes to optimize a disease quantification model using learning-based methods with only limited FFR value(s) available.
- the framework of an example method according to the present disclosure is illustrated in FIG. 2 , including two phases: a training phase as an offline process and a prediction phase as an online process.
- a database of annotated training data (training images) with ground truth values (as an example of labeled data) is assembled.
- a graph representation algorithm may be adopted to automatically extract features from the sampled centerline points to create the graph structural representation for each training data.
- the dynamic graph neural network may learn to transfer information (message passing) between the nodes with the ground truth values and the nodes without the ground truth values, which will be described below, to obtain a well-trained deep memory graph neural network.
- the prediction phase, shown as online testing in FIG. 2 , is completed online, whereby the disease quantification parameter (e.g., FFR) of the whole tree for unseen data can be calculated using the model learned in the offline training phase.
- the method may perform the steps for each test image to predict disease quantification parameters along the anatomical tree structure.
- the processor may extract a test centerline of the test anatomical tree structure.
- the processor may generate a trained test graph neural network including a plurality of nodes based on the trained model as shown in FIG. 2 .
- the trained test graph neural network may be generated based on a test graph where each test node corresponds to a test centerline point and edges between the nodes of the graph neural network are defined by the test centerline.
- a test neural network unit for each test node may follow a graph-template setting of a neural network unit for each node of the trained graph neural network.
- the term “a graph-template setting of a neural network unit” may refer to a network parameter setting definition with respect to all graphs.
- the network parameters of the neural network unit may be set based on the local graph relationship of the corresponding centerline point using the network parameter setting definition.
- the network parameter setting definition may define which network parameters of the neural network unit are used for a given local graph relationship of the corresponding centerline point.
- a test disease related feature or a 2D/3D image patch may be extracted for each test centerline point. Disease quantification parameters along the test centerline may then be predicted based on the extracted disease related features or 2D/3D image patches by utilizing the trained test graph neural network.
- the process as shown in FIG. 3 may be used to create graph representation from an image (e.g., CT image) as an input, including stages (a)-(d).
- the image may be obtained from a training image database.
- the tree structure in the image may be segmented, and thus the centerline can be extracted from the image. Thereafter, it is possible to sample points from the centerline, as shown in stage (c). Then, at stage (d), features may be extracted from these points as the vertices (nodes) of the graph to create a graph representation.
- the features may be disease related features.
- an initial artery segmentation (stage(a)) with a centerline (stage(b)) is firstly generated, which could be obtained automatically, semi-automatically or manually.
- points on the centerline are sampled (stage (c)) as the vertices (V) of the graph (G).
- disease related features can be extracted, which may include but may not be limited to structural features, intensity features, other derived features, or the like.
- the structural (geometric) features may include any of radius, area, degree of stenosis, volume, length, curvature, etc.
- intensity features may include any intensity-related measurements, such as intensity statistics (minimum, maximum, mean, etc.), gradients, etc.
- the other derived features could be any feature derived based on the tree structures, intensity or even information related to other anatomic structures. For example, if FFR prediction is needed, such features could be pressure drops or resistance estimated using simplified equations.
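As a minimal sketch of assembling a per-node feature embedding from such measurements (the function name and the particular feature subset are illustrative assumptions, not the disclosure's feature set):

```python
import numpy as np

def node_features(radius, intensities):
    """Assemble a per-node feature embedding from structural and
    intensity measurements (an illustrative subset of the features
    named in the disclosure)."""
    area = np.pi * radius ** 2                              # structural feature
    stats = [intensities.min(), intensities.max(), intensities.mean()]
    return np.array([radius, area] + stats)

# One centerline point: radius 1.5 and three intensity samples.
f = node_features(1.5, np.array([100.0, 120.0, 140.0]))
```

Each such vector would become the input X-row for the corresponding graph node.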
- edges can refer to lines linking two or more nodes.
- the edges may also be directed.
- an undirected edge, which shows that there is a relation or association between two nodes, may be treated as two directed edges.
- directed edges can bring more information than undirected edges.
- Undirected edges may reflect an underlying anatomical structure, while directed edges may further show information such as flow direction.
- the information can be propagated from the root of the tree to the terminals, and it can also be propagated in the opposite direction (e.g., from terminals to the root of the tree). Stated differently, the information propagation or passing between nodes of the tree structure can be implemented by considering both directions.
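Bidirectional propagation over a tree can be illustrated with a toy fixed-average update. In the disclosure the update is learned by the graph neural network, so everything below is only a sketch of the direction handling:

```python
def propagate(values, edges, reverse=False):
    """One synchronous round of message passing along directed tree edges.

    Each receiving node averages its own value with the incoming message.
    Purely illustrative: the patent's actual update is learned, not a
    fixed average.
    """
    out = dict(values)
    # Flip (parent, child) pairs to pass messages from terminals to root.
    pairs = [(c, p) for p, c in edges] if reverse else edges
    for src, dst in pairs:
        out[dst] = 0.5 * (out[dst] + values[src])
    return out

# Chain 0 -> 1 -> 2: root-to-terminals, then terminals-to-root.
vals = {0: 1.0, 1: 0.0, 2: 0.0}
down = propagate(vals, [(0, 1), (1, 2)])
up = propagate(down, [(0, 1), (1, 2)], reverse=True)
```

Running both directions lets information at any node (e.g., a labeled inlet) reach every other node in the tree.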
- both the implicit representations (i.e. feature embedding) and explicit relationships (i.e. graph) may be fused for learning disease models of the whole anatomical tree structure.
- structural information may be incorporated into disease quantification problem during implementation of the dynamic graph neural network to deal with variations of various anatomical tree structures using dynamic graphs for individuals.
- each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline.
- the input of each node may be a disease related feature or a cropped 2D/3D image patch for the corresponding centerline point, and an output of each node may be a disease quantification parameter.
- the disease prediction task may be formulated as an interpolation problem on a graph under the deep learning architectures that involve supervision from only a few ground truth values.
- FIG. 4 A illustrates a diagram showing the structure of a Graph Convolution Neural Network (GCN).
- the goal of GCN may include generalization of Convolution Neural Network (CNN) architectures to non-Euclidean domains (for example, graphs).
- the graph convolution may define convolutions directly on the graph, whereby a series of convolutional operations can be performed for each node taking a given node's spatially close neighbor nodes into account with graph convolution layers. In this way, global considerations among nodes may be integrated into the training such that relations between one node and surrounding nodes are considered together with hidden information.
- the GCN can be a function of an input which may include two components: a node representation X and an adjacency matrix A that indicates the edges among the nodes. The structure may be formally expressed as Z = f(X, A), where X ∈ ℝ^(N×C), N is the node number, C is the dimension of the feature embedding, A is the adjacency matrix denoting whether there are edges between nodes, and Z is the output of the GCN.
- the adjacency matrix A can be determined by the centerline. According to some embodiments, the adjacency matrix A may be fixed.
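One common instantiation of such a graph convolution layer is sketched below. The disclosure does not fix a specific propagation rule; the self-loop plus symmetric normalization used here follows a widely used formulation and is an assumption:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer: aggregate each node's neighbors
    (including itself) and apply a shared linear map plus ReLU."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Three chained centerline nodes, 2-dim features, identity weights.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
X = np.ones((3, 2))
Z = gcn_layer(X, A, np.eye(2))
```

Stacking several such layers lets each node's output depend on progressively larger neighborhoods along the centerline.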
- FIG. 4 B shows a gate type of the dynamic graph neural network.
- a gate mechanism like Gated Recurrent Unit (GRU) or Long Short Term Memory (LSTM) can also be incorporated into the propagation step to improve the long-term propagation of information across the graph structure.
- a child node can be a node that is connected to a parent node by an edge with a directionality in the direction of the child node.
- a node in an artery may be a parent node to a child node in an arteriole.
- blood may flow from the node in the artery to the node in the arteriole, and consequently the node in the arteriole may be considered a “child” node, while the node in the artery may be considered a “parent node.”
- the same principle can be applied to other tree structures including veins and lymphatic systems, among others.
- the parent node can selectively incorporate information from each child node for dynamic optimization of parameters of the disease quantification model.
- the gate may govern which information may be conveyed and what weight(s) may be set or adjusted.
- each graph unit (which could be a GRU or LSTM unit) contains input and output gates, a memory cell, and a hidden state. Instead of a single forget gate, each graph unit contains one forget gate for each child node, where the incoming edges indicate the child nodes of the node.
- the message passing could be bottom-up, top-down, or in both directions; see FIG. 4 B for an example of bottom-up message passing.
- the graph unit could be any recurrent neural network (RNN) unit such as LSTM, GRU, convolutional LSTM (CLSTM) unit, convolutional GRU (CGRU) unit, etc.
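A minimal child-sum gated unit with one forget gate per child, in the spirit of a Tree-LSTM, might look as follows. The shared weight matrices W and U and the simplified gating are illustrative assumptions, not the patent's exact parameterization:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tree_unit(x, child_h, child_c, W, U):
    """Child-sum gated unit: shared input/output gates, plus one
    forget gate per child node (computed from that child's hidden
    state), so the parent can selectively keep each child's memory."""
    h_sum = sum(child_h) if child_h else np.zeros_like(x)
    i = sigmoid(W @ x + U @ h_sum)              # input gate
    o = sigmoid(W @ x + U @ h_sum)              # output gate
    u = np.tanh(W @ x + U @ h_sum)              # candidate update
    c = i * u
    for h_k, c_k in zip(child_h, child_c):      # one forget gate per child
        f_k = sigmoid(W @ x + U @ h_k)
        c = c + f_k * c_k
    h = o * np.tanh(c)                          # bottom-up message to parent
    return h, c

# Bottom-up pass over a two-node chain: leaf first, then its parent.
h_leaf, c_leaf = tree_unit(np.ones(2), [], [], np.eye(2), np.eye(2))
h_root, c_root = tree_unit(np.ones(2), [h_leaf], [c_leaf], np.eye(2), np.eye(2))
```

The per-child forget gates are what let a parent node (e.g., in an artery) weigh information from each downstream branch (e.g., arterioles) differently.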
- a flowchart of an implementation of a method for disease quantification modeling of an anatomical tree structure is illustrated in FIG. 5 .
- the method may include obtaining a centerline of an anatomical tree structure (Step S 1 ).
- the method includes, at Step S 2 , generating a graph neural network including a plurality of nodes based on a graph, where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter.
- labeled data (e.g., training data with ground truth) of one or more nodes may be obtained at Step S 3 . Then, the graph neural network may be trained by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
- The trained graph neural network has optimized parameters (biases, weights, etc.), such that an optimal disease quantification model can be obtained.
- The number of the one or more nodes is less than the total number of nodes in the graph neural network.
- In some embodiments, the number of the one or more nodes may be much less than the total number of nodes in the graph neural network.
- Although Step S1 and Step S3 are shown as individual steps in FIG. 5, they can also be integrated into one step.
- For example, the centerline may be extracted while obtaining the labeled data of the one or more nodes, which correspond to the already labeled centerline points.
- the one or more nodes include at least a first node at the inlet of the anatomical tree structure.
- the one or more nodes include the first node at the inlet of the anatomical tree structure and 1-3 additional nodes.
- only one additional node may be sufficient to train the graph neural network with the information passing mechanism.
- For example, a graph neural network with about 1000 nodes or fewer may be well trained based on only one labeled node or several labeled nodes.
- the present disclosure does not intend to limit the number of the nodes, and any number of nodes is possible.
- The disclosed dynamic graph neural network may learn how to propagate label information from labeled data towards the unlabeled data during optimization, so as to obtain a well-trained graph neural network despite the scarcity of labeled data points.
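One way to realize this label propagation, shown here as a hedged sketch rather than the disclosed implementation, is a one-layer linear graph convolution trained by gradient descent, with the loss evaluated only at the labeled nodes; because the propagation matrix mixes neighboring nodes, gradients from the few labeled nodes shape predictions everywhere on the graph. All function names are illustrative.

```python
import numpy as np

def normalized_adjacency(n, edges):
    """Symmetrically normalized adjacency with self-loops, A_hat."""
    A = np.eye(n)
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    d = A.sum(1)
    D = np.diag(1.0 / np.sqrt(d))
    return D @ A @ D

def train_masked(X, y, labeled, A_hat, lr=0.1, steps=500):
    """Linear graph-convolution regressor y_hat = (A_hat @ A_hat @ X) @ w,
    trained with MSE computed over the labeled nodes only. Information from
    unlabeled nodes enters through the two propagation hops."""
    H = A_hat @ A_hat @ X          # two hops of neighborhood mixing
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        r = H[labeled] @ w - y[labeled]               # residual at labeled nodes
        grad = 2.0 * H[labeled].T @ r / len(labeled)  # gradient of masked MSE
        w -= lr * grad
    return H @ w                   # predictions for every node
```

On a four-node path with labels only at the two ends, the trained model still produces a smoothly interpolated value at the two interior (unlabeled) nodes.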
- the system 600 may include a disease quantification model training unit 602 and a disease quantification predicting unit 604 .
- The disease quantification model training unit 602 can acquire training images with ground truth values from a training image database 601 to train a disease quantification model, and as a result, can output the trained disease quantification model to the disease quantification predicting unit 604.
- The disease quantification predicting unit 604 may be communicatively coupled to a medical image database 606 and may then predict result(s) of disease quantification.
- training of the graph neural network may be performed by using gradient based methods, for example.
- The parameters of the graph neural network can be optimized by minimizing the objective function over the set of nodes during offline training. With only limited labeled data measured, the gradients and/or errors of the set of nodes can be transferred to the other nodes of the graph network through a back-propagation approach for message or information passing. Thus, the structural information of the graph may be considered for a robust model.
- the disclosed architecture can, in certain embodiments, seamlessly integrate the information from the centerline points in the whole tree for more accurate prediction with only limited labeled data available.
- The objective function may be the mean squared error of the set of nodes.
- The objective function may be the weighted mean squared error of the set of nodes.
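These masked objectives might be computed as follows; the helper name and signature are illustrative, not from the disclosure.

```python
import numpy as np

def weighted_mse(pred, target, labeled, weights=None):
    """(Weighted) mean squared error over the labeled node set only.
    `weights` optionally gives a per-labeled-node weighting, e.g., to
    emphasize certain measurement locations."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    w = np.ones(len(labeled)) if weights is None else np.asarray(weights, float)
    err = pred[labeled] - target[labeled]
    return float(np.sum(w * err ** 2) / np.sum(w))
```

With unit weights this reduces to the plain mean squared error over the labeled nodes.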
- the objective function may be defined by one skilled in the art as desired without departing from the spirit of the disclosure.
- The disease quantification predicting unit 604 may be communicatively coupled to the training image database 601 via network 605. In this manner, the predicted result of disease quantification obtained by the disease quantification predicting unit 604, upon confirmation by a radiologist or clinician, may be fed back as a training sample to the training image database 601 for future use. In this way, the training image database 601 may be expanded in scale, favoring better prediction results and improved model accuracy.
- the system may include a centerline generation device 700 , an image acquisition device 701 and a disease quantification modeling device 702 , for example.
- the system may include only a disease quantification modeling device 702 .
- the image acquisition device 701 may acquire and output an image by any type of imaging modalities, such as but not limited to CT, digital subtraction angiography (DSA), Magnetic Resonance imaging (MRI), functional MRI, dynamic contrast enhanced MRI, diffusion MRI, spiral CT, cone beam computed tomography (CBCT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), X-ray, optical tomography, fluorescence imaging, ultrasound imaging, radiotherapy portal imaging, and the like.
- The centerline generation device 700 is communicatively connected to the image acquisition device 701 and the disease quantification modeling device 702. According to an embodiment, the centerline generation device 700 may obtain the image directly or indirectly from the image acquisition device 701, perform tree segmentation on the image, and then extract a centerline from the image, as illustrated in stages (a) and (b) of FIG. 3.
- the disease quantification modeling device 702 may be a dedicated computer or a general-purpose computer.
- the disease quantification modeling device 702 may be a hospital-customized computer for performing image acquisition and image processing tasks, for example, or a server in the cloud.
- the disease quantification modeling device 702 may include a communication interface 703 , a processor 706 , a memory 705 , a storage device 704 , a bus 707 , and an input/output device 708 .
- The communication interface 703, the processor 706, the memory 705, the storage device 704, and the input/output device 708 may be connected to and communicate with one another.
- The communication interface 703 may include a network adapter, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adapter (such as optical fiber, USB 3.0, Thunderbolt, or the like), a wireless network adapter (such as a WiFi adapter), or a telecommunication adapter (3G, 4G/LTE, 5G, 6G, and beyond).
- the disease quantification modeling device 702 may be connected to the centerline generation device 700 , the image acquisition device 701 and other components. In some embodiments, the disease quantification modeling device 702 may receive the generated centerline from the centerline generation device 700 and medical image (e.g., a sequence of images of vessel) from the image acquisition device 701 via the communication interface 703 .
- the memory 705 /storage device 704 may be a non-transitory computer-readable medium or machine-readable medium such as read only memory (ROM), random access memory (RAM), a phase change random-access memory (PRAM), a dynamic random-access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM, a static random-access memory (SRAM), an Electrically-Erasable Programmable Read-Only Memory (EEPROM), flash memory, compact disc read-only memory (CD-ROM), digital versatile disk (DVD), magnetic storage device, etc., on which information or instructions which can be accessed and executed by a computer are stored in any format.
- the trained graph neural network and model-related data may be stored in the storage device 704 .
- the memory 705 may store computer-executable instructions, which, when executed by the processor 706 , may perform the method for disease quantification modeling including the steps of: obtaining a centerline of an anatomical tree structure; generating a graph neural network including a plurality of nodes based on a graph where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter; obtaining labeled data of one or more nodes, the number of which is less than a total number of the nodes in the graph neural network; and training the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
- The computer-executable instructions, when executed by the processor 706, may perform the steps of predicting a disease quantification parameter for each test image. Particularly, the computer-executable instructions, when executed by the processor 706, may extract a test centerline of the test anatomical tree structure, generate a trained test graph neural network, extract a test disease related feature or a 2D/3D image patch for each test centerline point corresponding to a test node, and predict the disease quantification parameters along the test centerline based on the extracted disease related features or 2D/3D image patches by utilizing the trained test graph neural network.
- the trained test graph neural network may be based on a test graph where each test node corresponds to a test centerline point and edges between the nodes of the graph neural network are defined by the test centerline.
- a test neural network unit for each test node may follow a graph-template setting of a neural network unit for each node of the trained graph neural network.
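The prediction steps above could be sketched as below, using the same linear propagation scheme as a stand-in for the trained graph neural network; because the weights are shared across graphs (the "graph-template setting"), only the test graph's adjacency changes between trees. `w` denotes weights assumed to come from offline training; the function name is hypothetical.

```python
import numpy as np

def predict_along_centerline(X_test, edges_test, w):
    """Inference sketch for an unseen tree: build the test graph's
    normalized adjacency, propagate the per-point features, and regress
    one disease quantification parameter per test centerline point.
    X_test: (N, F) per-point features; edges_test: test centerline edges;
    w: (F,) weights from offline training."""
    n = len(X_test)
    A = np.eye(n)                          # self-loops
    for u, v in edges_test:
        A[u, v] = A[v, u] = 1.0
    D = np.diag(1.0 / np.sqrt(A.sum(1)))
    A_hat = D @ A @ D                      # symmetric normalization
    return (A_hat @ A_hat @ X_test) @ w    # one value per centerline point
```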
- the processor 706 may be a single-core or multi-core processing device that includes one or more general processing devices, such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), and the like. More specifically, the processor 706 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor that runs a combination of instruction sets. The processor 706 may also be one or more dedicated processing devices such as application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), digital signal processors (DSPs), system-on-chip (SoC), and the like.
- the processor 706 may be communicatively coupled to the memory 705 , and may be configured to obtain a centerline of an anatomical tree structure, to generate a graph neural network including a plurality of nodes based on a graph where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter, to obtain labeled data of one or more nodes, the number of which is less than a total number of the nodes in the graph neural network (for example, the one or more nodes with labeled data may be a subset of the nodes of the graph neural network), and to train the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
- The processor 706 is also configured to train the graph neural network by using gradient-based methods as follows: to optimize the parameters of the graph neural network by minimizing the objective function of the set of nodes; and to transfer the gradients and/or errors of the set of nodes to the other nodes.
- The input/output device 708 may be any input and output device, such as a keyboard, mouse, printer, display, scanner, or touch panel, via which an operator may interact with the computer.
- A prediction result may be output from the input/output device 708 for presentation to a user such as a clinician or patient.
- Various operations or functions are described herein, which may be implemented or defined as software code or instructions. Such content may be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code).
- the software code or instructions may be stored in computer readable storage medium, and when executed, may cause a machine to perform the described functions or operations and include any mechanism for storing information in the form accessible by a machine (e.g., computing device, electronic system, etc.), such as recordable or non-recordable media.
Description
- This application is a continuation of application Ser. No. 16/906,936 filed Jun. 19, 2020, which claims the benefit of priority to U.S. Provisional Application No. 62/863,472, filed on Jun. 19, 2019, the entire contents of both of which are incorporated herein by reference.
- The disclosure relates to medical image processing and analysis and considers the learning problem of disease quantification for anatomical tree structures (e.g., vessels, airway trees or the like), using labeled data (such as ground truth values) available for training.
- Accurate disease quantification of an anatomical tree structure is useful for precise diagnosis. For example, it has been shown that Fractional Flow Reserve (FFR) is a reliable index for the assessment of cardiac ischemia. FFR can be measured by pressure wire. Pressure wire measurement is invasive, and because of the level of invasiveness, only one or several values may be measured in the whole tree. Attempts have been made to estimate FFR using learning-based methods. Such learning-based FFR estimation is fundamentally a low-data problem, since the ground truth measurements are provided at only one, a few, or several locations.
FIG. 1 shows several scenarios with ground-truth FFR values: one point (FIG. 1A), several isolated points (FIG. 1B), or values along one segment (FIG. 1C). With only a small amount of invasive values (measured by pressure wire) available for a training process, it is challenging to provide accurate predictions for the whole coronary artery tree. The existing machine learning based methods rely on simulated FFR values as ground truth for training the model. However, the simulated FFR values are usually calculated by numeric flow simulation based methods, which are time-consuming and too inaccurate for training the machine learning model. Thus, the performance of conventional machine learning-based methods is highly restricted by simulation methods. - The present disclosure is provided to, among other things, overcome the drawbacks in the conventional methods for disease quantification modeling of an anatomical tree structure with a learning network. Instead of using the simulated FFR as the ground truth for training the FFR model, a goal of certain embodiments of the present disclosure is to train the FFR model with the measured invasive FFRs directly. The measured invasive FFRs are the most accurate ground truth values for training the model, compared to other values calculated by algorithms.
- In one aspect, a computer implemented method for disease quantification modeling of an anatomical tree structure is provided. The method may include the following steps, performed for each training image to carry out the corresponding training/learning. The method may include obtaining a centerline of an anatomical tree structure from the training image. The method may also include generating, by a processor, a graph neural network including a plurality of nodes based on a graph. Each node of the graph neural network may correspond to a centerline point, and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter. Further, the method may include obtaining labeled data of one or more nodes, the number of which may be less than a total number of the nodes in the graph neural network. Still further, the method may include training, by the processor, the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
- In another aspect, a system for disease quantification modeling of an anatomical tree structure is provided. The system may include an interface and a processor. The interface may be configured to receive training images containing the anatomical tree structure. The processor may be configured to perform the following steps for each training image. The processor may be configured to obtain a centerline of the anatomical tree structure. The processor may be further configured to generate a graph neural network including a plurality of nodes based on a graph, where each node may correspond to a centerline point and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter. The processor may be further configured to obtain labeled data of one or more nodes, the number of which may be less than a total number of the nodes in the graph neural network. Moreover, the processor is configured to train the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
- In a further aspect, a non-transitory computer readable medium is provided, storing instructions that, when executed by a processor, perform a method for disease quantification modeling of an anatomical tree structure. The method may include obtaining a centerline of an anatomical tree structure. The method may further include generating a graph neural network including a plurality of nodes based on a graph, where each node may correspond to a centerline point and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter. The method may further include obtaining labeled data of one or more nodes (e.g., from the training image), the number of which may be less than a total number of the nodes in the graph neural network. Moreover, the method may include training the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
- Use of graph neural networks has demonstrated that, in some circumstances, predictions may be performed from only one or a few data points (such as nodes). The method of certain embodiments of the disclosure may propagate information from labeled data points towards the unlabeled data points. Certain embodiments of the disclosure may use both the implicit representations (such as feature embeddings) and explicit relationships (such as the graph) for learning disease models of the whole anatomical tree structure. The disclosed method builds a graph where each node corresponds to a point on a centerline of the tree structure. These nodes may be linked via the centerline. The input to each node of the graph may be a vector representation (also referred to as a feature embedding) of that node. The disclosed method then generates and uses a dynamic graph neural network to transfer information (message passing) between the nodes with ground truth values (for example, invasive FFR values) and the nodes without ground truth values. Certain of the disclosed methods and systems have at least the following benefits. Firstly, the disease prediction task is formulated as an interpolation problem on a graph under deep learning architectures that may rely on supervision from only a few ground truth values; nodes are associated with the points on the centerline of the tree, and edges are defined by the centerline. Secondly, anatomical tree structures have variations, but the disclosed graph neural networks can deal with such variations using dynamic graphs for individuals. Thirdly, the disclosed dynamic graph neural network may learn how to propagate label information from labeled data points towards the unlabeled data points during the optimization process, so as to obtain a well-trained graph neural network despite the deficiency of labeled data points.
- Moreover, in contrast to the conventional methods, the disclosed system not only considers the points of the centerline independently but also embeds graph structure among all centerline points. With the information propagation of the nodes in the deep memory graph nets, the disclosed framework can seamlessly integrate the information from the centerline points in the whole tree to make an accurate prediction with only limited labeled data. With the spatially close neighbor nodes being considered during learning of the graph neural network, global considerations among nodes may be integrated into the training in such a way that relations between one node and surrounding nodes are considered together with hidden information.
- It is to be understood that the preceding general description and the following detailed description are exemplary and explanatory only, and are not restrictive of the invention, as claimed. A given embodiment may provide one, two, more, or all the preceding advantages, and/or may provide other advantages as will become apparent to one of ordinary skill in the art upon reading and understanding the present disclosure.
- In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having letter suffixes or different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments, and together with the description and claims, serve to explain the disclosed embodiments. Such embodiments are demonstrative and not intended to be exhaustive or exclusive embodiments of the present method, system, or non-transitory computer readable medium having instructions thereon for implementing the method.
-
FIG. 1A is a schematic diagram illustrating scenarios with only one centerline point having ground-truth FFR value; -
FIG. 1B is a schematic diagram illustrating scenarios with several centerline points having ground-truth FFR values; -
FIG. 1C is a schematic diagram illustrating scenarios with multiple centerline points along one segment or path having ground-truth FFR values; -
FIG. 2 illustrates an overall framework for FFR (as an example of the disease quantification parameter) prediction with dynamic graph neural network according to an embodiment of the present disclosure; -
FIG. 3 illustrates a process of creating graph representation from Computed Tomography (CT) image including stage(a)-stage(d) according to an embodiment of the present disclosure; -
FIG. 4A is a schematic diagram illustrating a graph convolution type of dynamic graph neural network according to an embodiment of the present disclosure; -
FIG. 4B is a schematic diagram illustrating a gate type of dynamic graph neural network according to an embodiment of the present disclosure; -
FIG. 5 illustrates a flowchart of an example method for disease quantification modeling according to an embodiment of present disclosure; -
FIG. 6 depicts a schematic configuration diagram of disease quantification system according to an embodiment of present disclosure; and -
FIG. 7 depicts a block diagram illustrating an exemplary disease quantification system according to an embodiment of present disclosure. - For the purposes of facilitating an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It is nevertheless to be understood that no limitation to the scope of the disclosure is intended. Any alterations and further modifications to the described devices, systems, and methods, and any further application of the principles of the present disclosure are fully contemplated and included within the present disclosure as would normally occur to one skilled in the art to which the disclosure pertains. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one embodiment may be combined with the features, components, and/or steps described with respect to other embodiments of the present disclosure. The order of the method steps is not limited to the order described or shown; according to the disclosure, the order of steps may be varied according to actual requirements without departing from the gist of the disclosure. For the sake of brevity, however, the numerous iterations of these combinations will not be described separately herein.
- Hereinafter, the technical term “anatomical tree structure” may refer to vessels, airways, and the like with tree structure. The technical term “medical image” may refer to a complete image or an image patch cropped from a complete image in any form including the forms of two dimensional (2D), 2D plus depth, or three dimensional (3D). Although FFR, which is a reliable index for assessment of cardiac ischemia, is mentioned above for describing the process, no limitation to the FFR is intended according to the present disclosure. Instead, the present disclosure is applicable to any disease quantification parameter of any anatomical tree structure.
- As described above, because the procedure of measuring FFR by pressure wire is invasive, typically only a few invasive values are measured in the whole tree. In some embodiments, only one FFR value may be measured at the inlet. In alternative embodiments, several other scenarios are possible as shown in
FIGS. 1A, 1B, and 1C. As shown in FIG. 1A, only one invasive FFR value equal to 0.7 is measured in addition to an inlet FFR value equal to 1. As another scenario, FIG. 1B illustrates two invasive FFR values measured in addition to an inlet FFR value equal to 1. A scenario where multiple values are measured along a segment or path, obtained from invasive pressure-wire pull-back curves, is illustrated in FIG. 1C. The present disclosure proposes to optimize the disease quantification model using learning-based methods with only limited FFR value(s) available. - The framework of an example method according to the present disclosure is illustrated in
FIG. 2, including two phases: a training phase as an offline process and a prediction phase as an online process. During the offline training, a database of annotated training data (training images) with ground truth values (as an example of labeled data) is assembled. A graph representation algorithm may be adopted to automatically extract features from the sampled centerline points to create the graph structural representation for each training sample. In particular, the dynamic graph neural network may learn to transfer information (message passing) between the nodes with the ground truth values and the nodes without the ground truth values, which will be described below, to obtain a well-trained deep memory graph neural network. - The prediction phase shown as online testing in
FIG. 2 is completed online, whereby the disease quantification parameter (e.g., FFR) of the whole tree for unseen data can be calculated by using the learned model from the offline training phase. In some embodiments, the method may perform the steps for each test image to predict disease quantification parameters along the anatomical tree structure. Upon receiving a test image containing a test anatomical tree structure ("test anatomical tree structure" is used to differentiate from the anatomical tree structure contained in the training image), the processor may extract a test centerline of the test anatomical tree structure. The processor may generate a trained test graph neural network including a plurality of nodes based on the trained model as shown in FIG. 2. In some embodiments, the trained test graph neural network may be generated based on a test graph where each test node corresponds to a test centerline point and edges between the nodes of the graph neural network are defined by the test centerline. A test neural network unit for each test node may follow a graph-template setting of a neural network unit for each node of the trained graph neural network. The term "a graph-template setting of a neural network unit" may refer to a network parameter setting definition with respect to all graphs. Particularly, the network parameters of the neural network unit may be set based on the local graph relationship of the corresponding centerline point using the network parameter setting definition. The network parameter setting definition may define which network parameters of the neural network unit will be used for which local graph relationship of the corresponding centerline point. A test disease related feature or a 2D/3D image patch may be extracted for each test centerline point.
Disease quantification parameters along the test centerline may then be predicted based on the extracted disease related features or 2D/3D image patches by utilizing the trained test graph neural network. - In some embodiments, the process as shown in
FIG. 3 may be used to create a graph representation from an image (e.g., a CT image) as an input, including stages (a)-(d). At stage (a), the image may be obtained from a training image database. At stage (b), the tree structure in the image may be segmented, and thus a centerline can be extracted for the image. Thereafter, it is possible to sample points from the centerline, as shown in stage (c). Then, at stage (d), features may be extracted from these points as the vertices (nodes) of the graph to create a graph representation. In particular, the features may be disease related features. - For example, an initial artery segmentation (stage (a)) with a centerline (stage (b)) is firstly generated, which could be obtained automatically, semi-automatically, or manually. Secondly, points on the centerline are sampled (stage (c)) as the vertices (V) of the graph (G). For each sampled point on the centerline, disease related features can be extracted, which may include but are not limited to structural features, intensity features, other derived features, or the like. As an example of structural features, geometric features may include any one of radius, area, stenosis, volume, length, curvature, etc. Intensity features may include any one of intensity-related measurements, such as intensity statistics (minimum, maximum, mean, etc.), gradients, etc. The other derived features could be any feature derived based on the tree structure, intensity, or even information related to other anatomic structures. For example, if FFR prediction is needed, such features could be pressure drops or resistance estimated using simplified equations.
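A hypothetical per-point feature extractor along these lines might combine a geometric measurement, a crude curvature proxy, and intensity statistics. The function name and the exact feature set are illustrative assumptions, not the disclosed feature definition.

```python
import numpy as np

def point_features(radius, intensities, position, prev_pos, next_pos):
    """Assemble an illustrative disease-related feature vector for one
    sampled centerline point: a structural feature (lumen radius), a
    discrete curvature proxy from the two neighboring centerline points,
    and intensity statistics (min / max / mean) from a patch around it."""
    intensities = np.asarray(intensities, float)
    # Cosine of the angle between the two centerline directions at this
    # point: near -1 on a straight segment, larger where the vessel bends.
    a, b = prev_pos - position, next_pos - position
    cos_angle = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.array([
        radius,
        cos_angle,
        intensities.min(),
        intensities.max(),
        intensities.mean(),
    ])
```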
- As can be seen from an example of the architecture at stage (d) of
FIG. 3, points on the centerline are linked by edges, which may be undirected. Thus, in the context of certain embodiments of the present disclosure, the technical term "edges" can refer to lines linking two or more nodes. In some embodiments, the edges may also be directed. In particular, an undirected edge, which shows that there is a relation or association between two nodes, may be treated as two directed edges. Generally, directed edges can carry more information than undirected edges. Undirected edges may reflect an underlying anatomical structure, while directed edges may further convey information such as flow direction. - In some embodiments, the information can be propagated from the root of the tree to the terminals, and it can also be propagated in the opposite direction (e.g., from the terminals to the root of the tree). Stated differently, the information propagation or passing between nodes of the tree structure can be implemented by considering both directions.
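The convention of treating an undirected edge as two directed edges, one per propagation direction, can be sketched with a small helper (an illustrative assumption, not part of the disclosure):

```python
def directed_edge_sets(undirected_edges):
    """Expand each undirected edge (u, v) into two directed edges.

    Returns one edge list per propagation direction, e.g. root-to-terminal
    ("forward") and terminal-to-root ("backward") message passing.
    """
    forward = [(u, v) for (u, v) in undirected_edges]
    backward = [(v, u) for (u, v) in undirected_edges]
    return forward, backward
```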
- According to the present disclosure, the tree T is associated with a graph G_T=(V, E), where nodes v_i∈V correspond to the feature vectors or embeddings of the points on the centerline (both with ground truth values and unknown values), and edges e_i∈E correspond to directed or undirected edges between the points. According to the present disclosure, both the implicit representations (i.e., feature embeddings) and explicit relationships (i.e., the graph) may be fused for learning disease models of the whole anatomical tree structure. According to some embodiments, structural information may be incorporated into the disease quantification problem during implementation of the dynamic graph neural network to deal with variations of various anatomical tree structures using dynamic graphs for individuals. According to some embodiments, each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline. The input of each node may be a disease related feature or a cropped 2D/3D image patch for the corresponding centerline point, and an output of each node may be a disease quantification parameter. According to the disclosure, the disease prediction task may be formulated as an interpolation problem on a graph under deep learning architectures that involve supervision from only a few ground truth values.
-
FIG. 4A illustrates a diagram showing the structure of a Graph Convolution Neural Network (GCN). The goal of the GCN may include generalization of Convolution Neural Network (CNN) architectures to non-Euclidean domains (for example, graphs). As shown in FIG. 4A, the graph convolution may define convolutions directly on the graph, whereby a series of convolutional operations can be performed for each node, taking the node's spatially close neighbor nodes into account with graph convolution layers. In this way, global considerations among nodes may be integrated into the training such that relations between one node and surrounding nodes are considered together with hidden information. - The GCN can be a function of an input which may include two components: a nodes representation X and an adjacency matrix that indicates the edges among the nodes, and the structure may be formally expressed as:
-
Z=GCN(X, A), - where X∈R^(N×C) is the nodes representation, N is the number of nodes, C is the dimension of the feature embedding, A is an adjacency matrix denoting whether there are edges between nodes, and Z is the output of the GCN. According to the present disclosure, the adjacency matrix A can be determined by the centerline. According to some embodiments, the adjacency matrix A may be fixed.
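A minimal dense graph convolution layer consistent with Z=GCN(X, A) could be sketched as follows. The symmetric degree normalization with self-loops and the ReLU nonlinearity are common choices assumed here for illustration; the disclosure itself only fixes that A is determined by the centerline:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph convolution: Z = ReLU(D^-1/2 (A + I) D^-1/2 X W).

    X : (N, C) node representations, A : (N, N) adjacency matrix,
    W : (C, H) learnable weights; returns the (N, H) output Z.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
```

Stacking several such layers lets each node aggregate information from increasingly distant centerline neighbors.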
- Other common methods applicable in CNNs can also be used in GCNs, such as skip connections or attention mechanisms.
-
FIG. 4B shows a gated type of the dynamic graph neural network. A gate mechanism such as a Gated Recurrent Unit (GRU) or Long Short-Term Memory (LSTM) can also be incorporated into the propagation step to improve the long-term propagation of information across the graph structure. In a directed tree structure, a child node is a node that is connected to a parent node by an edge whose directionality points toward the child node. Thus, along an arterial system, a node in an artery may be a parent node to a child node in an arteriole. In such a system, blood may flow from the node in the artery to the node in the arteriole, and consequently the node in the arteriole may be considered a "child" node, while the node in the artery may be considered a "parent" node. The same principle can be applied to other tree structures, including veins and lymphatic systems, among others. For example, if the edges of the graph are directional, by using the gate mechanism, the parent node can selectively incorporate information from each child node for dynamic optimization of parameters of the disease quantification model. As an example, the gate may govern which information may be conveyed and which weight(s) may be set or adjusted. More particularly, each graph unit (which could be a GRU or LSTM unit) contains input and output gates, a memory cell and a hidden state. Instead of a single forget gate, each graph unit contains one forget gate for each child node, where the incoming edges indicate the child nodes of the node. The message passing could be bottom-up, top-down, or in both directions; see FIG. 4B for an example of bottom-up message passing. The graph unit could be any recurrent neural network (RNN) unit such as an LSTM, GRU, convolutional LSTM (CLSTM) unit, convolutional GRU (CGRU) unit, etc. - The flowchart of implementation of a method for disease quantification modeling of an anatomical tree structure is illustrated in
FIG. 5. The method may include obtaining a centerline of an anatomical tree structure (Step S1). The method includes, at Step S2, generating a graph neural network including a plurality of nodes based on a graph, where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter. After the graph neural network is generated, labeled data (e.g., training data with ground truth) of one or more nodes may be obtained, at Step S3. Then, at Step S4, the graph neural network may be trained by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes. The trained graph neural network has optimized parameters (biases, weights, etc.) such that the optimal disease quantification model can be obtained. Particularly, the number of the one or more nodes is less than the total number of nodes in the graph neural network. According to an embodiment, the number of the one or more nodes may be much less than the total number of nodes in the graph neural network. Although Step S1 and Step S3 are shown as individual steps in FIG. 5, they can also be integrated into one step. As an example, when a training image labeled with several ground truth values along its centerline is received, the centerline may be extracted while the labeled data of the one or more nodes, which correspond to the already labeled centerline points, is obtained at the same time. - According to embodiments of the disclosure, the one or more nodes include at least a first node at the inlet of the anatomical tree structure. Alternatively, the one or more nodes include the first node at the inlet of the anatomical tree structure and 1-3 additional nodes.
According to an embodiment, only one additional node may be sufficient to train the graph neural network with the information passing mechanism. According to some embodiments, a graph neural network with fewer than, or even about, 1000 nodes may be well trained based on only one labeled node or several labeled nodes. The present disclosure does not intend to limit the number of the nodes, and any number of nodes is possible. As a result, the disclosed dynamic graph neural network may learn how to propagate label information from the labeled data towards the unlabeled data during the optimization, so as to obtain a well-trained graph neural network despite the scarcity of labeled data points.
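The gated graph unit of FIG. 4B, with one forget gate per child node, can be sketched in the style of a Child-Sum Tree-LSTM for bottom-up message passing. This is a hedged illustration: the weight names, shapes, and the child-sum formulation are assumptions rather than the claimed implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tree_lstm_node(x, child_h, child_c, p):
    """One gated graph unit: input/output gates, a memory cell, a hidden
    state, and one forget gate per child node.

    x : (D,) input feature of this node
    child_h, child_c : lists of (H,) hidden/cell states of the child nodes
    p : dict of weights W_* (D, H), U_* (H, H) and biases b_* (H,)
    """
    h_sum = np.sum(child_h, axis=0) if child_h else np.zeros_like(p["b_i"])
    i = sigmoid(x @ p["W_i"] + h_sum @ p["U_i"] + p["b_i"])   # input gate
    o = sigmoid(x @ p["W_o"] + h_sum @ p["U_o"] + p["b_o"])   # output gate
    u = np.tanh(x @ p["W_u"] + h_sum @ p["U_u"] + p["b_u"])   # candidate
    c = i * u
    for h_k, c_k in zip(child_h, child_c):
        # one forget gate per child: selectively keep that child's memory
        f_k = sigmoid(x @ p["W_f"] + h_k @ p["U_f"] + p["b_f"])
        c = c + f_k * c_k
    return o * np.tanh(c), c                                   # (h, c)
```

Applied leaf-to-root, each parent node selectively incorporates the memory cell of every child through its own forget gate, which is the behavior the gated variant describes.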
- The training and prediction phases for disease quantification modeling will be described in detail with reference to
FIG. 6, which illustrates an outline of implementations of disease quantification system 600. As shown, the system 600 may include a disease quantification model training unit 602 and a disease quantification predicting unit 604. The disease quantification model training unit 602 can acquire training images with ground truth values from a training image database 601 to train a disease quantification model, and as a result, can output the trained disease quantification model to the disease quantification predicting unit 604. The disease quantification predicting unit 604 may be communicatively coupled to a medical image database 606, and then may predict result(s) of disease quantification. - According to certain embodiments of the disclosure, training of the graph neural network may be performed by using gradient-based methods, for example. In an implementation, the parameters of the graph neural network can be optimized by minimizing the objective function of the set of nodes during offline training. With only limited labeled data measured, the gradients and/or errors of the set of nodes can be transferred to the other nodes of the graph network through a back propagation approach for message or information passing. Thus, the structural information of the graph may be considered for a robust model. The disclosed architecture can, in certain embodiments, seamlessly integrate the information from the centerline points in the whole tree for more accurate prediction with only limited labeled data available. According to various embodiments, the objective function may be the mean square error of the set of nodes. Alternatively, the objective function may be the weighted mean square error of the set of nodes. In other words, the objective function may be defined by one skilled in the art as desired without departing from the spirit of the disclosure.
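Gradient-based training with supervision at only a few nodes can be sketched end to end with a one-layer linear graph model and a masked mean square error objective. Everything here (the linear model, the row normalization, the learning rate, the function name) is an illustrative assumption; the point shown is that errors computed at the labeled nodes drive, through the centerline-defined adjacency, predictions at all nodes:

```python
import numpy as np

def train_masked_gcn(X, A, y, labeled, lr=0.1, steps=200):
    """Fit a one-layer linear graph model, supervising only labeled nodes.

    X : (N, C) node features, A : (N, N) adjacency from the centerline,
    y : (N,) disease quantification targets (e.g., FFR),
    labeled : (N,) boolean mask, True only where ground truth exists.
    """
    A_hat = A + np.eye(len(A))
    P = (A_hat / A_hat.sum(axis=1)[:, None]) @ X   # row-normalized propagation
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        resid = (P @ w - y) * labeled              # masked MSE: errors only
        grad = 2.0 * P.T @ resid / max(labeled.sum(), 1)
        w -= lr * grad                             # gradient descent step
    return P @ w                                   # predictions at ALL nodes
```

With a single labeled node (e.g., at the inlet), the remaining nodes still receive predictions because they share the propagated representation P and the weights fitted at the labeled node.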
- In some embodiments, the disease
quantification predicting unit 604 may be communicatively coupled to the training image database 601 via network 605. In this manner, the predicted result of disease quantification obtained by the disease quantification predicting unit 604, upon confirmation by a radiologist or clinician, may be fed back as a training sample to the training image database 601 for future use. In this way, the training image database 601 may be augmented in scale in favor of better prediction results as well as improved accuracy of the model. - A block diagram illustrating an exemplary disease quantification system according to an embodiment of the present disclosure is described below with reference to
FIG. 7. In some embodiments, the system may include a centerline generation device 700, an image acquisition device 701 and a disease quantification modeling device 702, for example. In some embodiments, the system may include only a disease quantification modeling device 702. - In some embodiments, the
image acquisition device 701 may acquire and output an image by any type of imaging modality, such as but not limited to CT, digital subtraction angiography (DSA), magnetic resonance imaging (MRI), functional MRI, dynamic contrast enhanced MRI, diffusion MRI, spiral CT, cone beam computed tomography (CBCT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), X-ray, optical tomography, fluorescence imaging, ultrasound imaging, radiotherapy portal imaging, and the like. - In some embodiments, the
centerline generation device 700 is communicatively connected to the image acquisition device 701 and the disease quantification modeling device 702. According to an embodiment, the centerline generation device 700 may obtain the image directly or indirectly from the image acquisition device 701, perform tree segmentation on the image, and then extract a centerline of the image, as illustrated in stages (a) and (b) of FIG. 3. - In some embodiments, the disease
quantification modeling device 702 may be a dedicated computer or a general-purpose computer. The disease quantification modeling device 702 may be a hospital-customized computer for performing image acquisition and image processing tasks, for example, or a server in the cloud. As shown in FIG. 7, the disease quantification modeling device 702 may include a communication interface 703, a processor 706, a memory 705, a storage device 704, a bus 707, and an input/output device 708. For example, the communication interface 703, the processor 706, the memory 705, the storage device 704 and the input/output device 708 may be connected to and communicate with one another. - In some embodiments, the
communication interface 703 may include a network adapter, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adapter (such as optical fiber, USB 3.0, Thunderbolt, or the like), a wireless network adapter (such as a WiFi adapter), or a telecommunication adapter (3G, 4G/LTE, 5G, 6G and beyond). The disease quantification modeling device 702 may be connected to the centerline generation device 700, the image acquisition device 701 and other components. In some embodiments, the disease quantification modeling device 702 may receive the generated centerline from the centerline generation device 700 and a medical image (e.g., a sequence of images of a vessel) from the image acquisition device 701 via the communication interface 703. - In some embodiments, the
memory 705/storage device 704 may be a non-transitory computer-readable medium or machine-readable medium such as read-only memory (ROM), random access memory (RAM), phase change random-access memory (PRAM), dynamic random-access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM, static random-access memory (SRAM), Electrically-Erasable Programmable Read-Only Memory (EEPROM), flash memory, compact disc read-only memory (CD-ROM), digital versatile disk (DVD), magnetic storage device, etc., on which information or instructions accessible and executable by a computer are stored in any format. In some embodiments, the trained graph neural network and model-related data may be stored in the storage device 704. - In some embodiments, the
memory 705 may store computer-executable instructions, which, when executed by the processor 706, may perform the method for disease quantification modeling including the steps of: obtaining a centerline of an anatomical tree structure; generating a graph neural network including a plurality of nodes based on a graph where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter; obtaining labeled data of one or more nodes, the number of which is less than a total number of the nodes in the graph neural network; and training the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes. - The computer-executable instructions, when executed by the
processor 706, may perform the steps of predicting disease quantification parameter for each test image. Particularly, the computer-executable instructions, when executed by theprocessor 706, may extract a test centerline of the test anatomical tree structure, generate a trained test graph neural network, extract a test disease related feature or a 2D/3D image patch for each test centerline point corresponding to test node, and predict the disease quantification parameters along the test centerline based on the extracted disease related features or 2D/3D image patches by utilizing the trained test graph neural network. The trained test graph neural network may be based on a test graph where each test node corresponds to a test centerline point and edges between the nodes of the graph neural network are defined by the test centerline. A test neural network unit for each test node may follow a graph-template setting of a neural network unit for each node of the trained graph neural network. - In some embodiments, the
processor 706 may be a single-core or multi-core processing device that includes one or more general processing devices, such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), and the like. More specifically, the processor 706 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor that runs a combination of instruction sets. The processor 706 may also be one or more dedicated processing devices such as application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), digital signal processors (DSPs), systems-on-chip (SoCs), and the like. - In some embodiments, the
processor 706 may be communicatively coupled to the memory 705, and may be configured to obtain a centerline of an anatomical tree structure, to generate a graph neural network including a plurality of nodes based on a graph where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter, to obtain labeled data of one or more nodes, the number of which is less than a total number of the nodes in the graph neural network (for example, the one or more nodes with labeled data may be a subset of the nodes of the graph neural network), and to train the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes. According to some embodiments, the processor 706 is also configured to train the graph neural network as follows by using gradient-based methods: to optimize the parameters of the graph neural network by minimizing the objective function of the set of nodes; and to transfer the gradients and/or errors of the set of nodes to the other nodes. - The input/
output device 708 may be any input and output device, such as a keyboard, mouse, printer, display, scanner, or touch panel, via which an operator may interface with the computer. In some embodiments, the prediction result may be output from the input/output device 708 for presentation to a user such as a clinician, a patient, etc. - Various operations or functions are described herein, which may be implemented as software code or instructions or defined as software code or instructions. Such content may be source code or differential code ("delta" or "patch" code) that can be executed directly ("object" or "executable" form). The software code or instructions may be stored in a computer-readable storage medium, and when executed, may cause a machine to perform the described functions or operations, and include any mechanism for storing information in a form accessible by a machine (e.g., a computing device, an electronic system, etc.), such as recordable or non-recordable media.
- Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations or alterations based on the present disclosure. The elements in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. Further, the steps of the disclosed methods can be modified in any manner, including by reordering steps or inserting or deleting steps. It is intended, therefore, that the descriptions be considered as examples only, with a true scope being indicated by the following claims and their full scope of equivalents.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/894,363 US20220415510A1 (en) | 2019-06-19 | 2022-08-24 | Method and system for disease quantification modeling of anatomical tree structure |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962863472P | 2019-06-19 | 2019-06-19 | |
US16/906,936 US11462326B2 (en) | 2019-06-19 | 2020-06-19 | Method and system for disease quantification modeling of anatomical tree structure |
US17/894,363 US20220415510A1 (en) | 2019-06-19 | 2022-08-24 | Method and system for disease quantification modeling of anatomical tree structure |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/906,936 Continuation US11462326B2 (en) | 2019-06-19 | 2020-06-19 | Method and system for disease quantification modeling of anatomical tree structure |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220415510A1 true US20220415510A1 (en) | 2022-12-29 |
Family
ID=72675480
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/906,936 Active 2041-04-20 US11462326B2 (en) | 2019-06-19 | 2020-06-19 | Method and system for disease quantification modeling of anatomical tree structure |
US17/894,363 Abandoned US20220415510A1 (en) | 2019-06-19 | 2022-08-24 | Method and system for disease quantification modeling of anatomical tree structure |
Country Status (2)
Country | Link |
---|---|
US (2) | US11462326B2 (en) |
CN (1) | CN111754476A (en) |
Also Published As
Publication number | Publication date |
---|---|
US11462326B2 (en) | 2022-10-04 |
CN111754476A (en) | 2020-10-09 |
US20200402666A1 (en) | 2020-12-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: BEIJING KEYA MEDICAL TECHNOLOGY CO., LTD., CHINA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, XIN;YIN, YOUBING;BAI, JUNJIE;AND OTHERS;SIGNING DATES FROM 20200616 TO 20200617;REEL/FRAME:060886/0933
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION