US20220415510A1 - Method and system for disease quantification modeling of anatomical tree structure

Method and system for disease quantification modeling of anatomical tree structure

Info

Publication number
US20220415510A1
US20220415510A1
Authority
US
United States
Prior art keywords
graph
nodes
centerline
neural network
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/894,363
Inventor
Xin Wang
Youbing YIN
Junjie Bai
Qi Song
Kunlin Cao
Yi Lu
Feng Gao
Current Assignee
Beijing Keya Medical Technology Co Ltd
Keya Medical Technology Co Ltd
Original Assignee
Keya Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Keya Medical Technology Co Ltd filed Critical Keya Medical Technology Co Ltd
Priority to US17/894,363
Assigned to BEIJING KEYA MEDICAL TECHNOLOGY CO., LTD. Assignment of assignors interest (see document for details). Assignors: WANG, XIN; YIN, YOUBING; BAI, JUNJIE; CAO, KUNLIN; GAO, FENG; LU, YI; SONG, QI
Publication of US20220415510A1


Classifications

    • G06T 7/0012: Biomedical image inspection (image analysis; inspection of images, e.g. flaw detection)
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks (also G06N 3/0445)
    • G06N 3/045: Combinations of networks (also G06N 3/0454)
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/082: Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing
    • G16H 50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/50: ICT specially adapted for simulation or modelling of medical disorders
    • G16H 50/70: ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30021: Catheter; Guide wire
    • G06T 2207/30101: Blood vessel; Artery; Vein; Vascular

Definitions

  • the disclosure relates to medical image processing and analysis and considers the learning problem of disease quantification for anatomical tree structures (e.g., vessels, airway trees or the like), using labeled data (such as ground truth values) available for training.
  • Fractional Flow Reserve (FFR) is a reliable index for the assessment of cardiac ischemia.
  • FFR can be measured by a pressure wire. Pressure wire measurement is invasive, and because of that level of invasiveness, only one or several values may be measured in the whole tree. Attempts have been made to estimate FFR using learning-based methods. Such learning-based FFR estimation is fundamentally a low-data problem, since ground truth measurements are provided at only one or a few locations.
  • FIG. 1 shows several scenarios with ground-truth FFR values: one point ( FIG. 1 A ), several isolated points ( FIG. 1 B ), or values along one segment ( FIG. 1 C ).
  • the present disclosure is provided to, among other things, overcome the drawbacks in conventional methods for disease quantification modeling of an anatomical tree structure with a learning network. Instead of using the simulated FFR as the ground truth for training the FFR model, a goal of certain embodiments of the present disclosure is to train the FFR model with the measured invasive FFRs directly.
  • the measured invasive FFRs are the most accurate ground truth values for training the model, compared to other values calculated by algorithms.
  • a computer implemented method for disease quantification modeling of an anatomical tree structure may include the following steps, performed for each training image, to carry out the corresponding training/learning.
  • the method may include obtaining a centerline of an anatomical tree structure from the training image.
  • the method may also include generating, by a processor, a graph neural network including a plurality of nodes based on a graph. Each node of the graph neural network may correspond to a centerline point and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter.
  • the method may include obtaining labeled data of one or more nodes, the number of which may be less than a total number of the nodes in the graph neural network. Still further, the method may include training, by the processor, the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
  • a system for disease quantification modeling of an anatomical tree structure may include an interface and a processor.
  • the interface may be configured to receive training images containing the anatomical tree structure.
  • the processor may be configured to perform the following steps for each training image.
  • the processor may be configured to obtain a centerline of the anatomical tree structure.
  • the processor may be further configured to generate a graph neural network including a plurality of nodes based on a graph, wherein each node may correspond to a centerline point and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter.
  • the processor may be further configured to obtain labeled data of one or more nodes, the number of which may be less than a total number of the nodes in the graph neural network. Moreover, the processor is configured to train the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
  • a non-transitory computer readable medium storing instructions that, when executed by a processor, may perform a method for disease quantification modeling of an anatomical tree structure.
  • the method may include obtaining a centerline of an anatomical tree structure.
  • the method may further include generating a graph neural network including a plurality of nodes based on a graph, wherein each node may correspond to a centerline point and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter.
  • the method may further include obtaining labeled data of one or more nodes (e.g., from the training image), the number of which may be less than a total number of the nodes in the graph neural network. Moreover, the method may include training the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
  • the method of certain embodiments of the disclosure may propagate information from labeled data points towards the unlabeled data points.
  • Certain embodiments of the disclosure may use both the implicit representations (such as feature embedding) and explicit relationships (such as graph) for learning disease models of the whole anatomical tree structure.
  • the disclosed method builds a graph where each node corresponds to a point on a centerline of the tree structure. These nodes may be linked via the centerline.
  • the input to each node of the graph may be a vector representation (also referred to as feature embedding) of each node.
  • the disclosed method then generates and uses a dynamic graph neural network to transfer information (message passing) between the nodes with the ground truth values (for example, invasive FFR value) and the nodes without ground truth values.
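As a minimal illustration of this message-passing idea (plain label propagation by neighbor averaging, not the patent's trained dynamic network; the chain graph and FFR values below are hypothetical), the few measured values can spread along the centerline graph to the unlabeled nodes:

```python
import numpy as np

def propagate_labels(A, labels, mask, iters=200):
    """Spread known values to unlabeled nodes by repeated neighbor averaging.

    A:      (N, N) symmetric adjacency matrix of the centerline graph
    labels: (N,) values, valid only where mask is True
    mask:   (N,) boolean, True at nodes with ground-truth values
    """
    x = np.where(mask, labels, labels[mask].mean())  # initialize unlabeled nodes
    deg = A.sum(axis=1)
    for _ in range(iters):
        x = A @ x / np.maximum(deg, 1)   # average over neighbors
        x[mask] = labels[mask]           # clamp the measured (labeled) nodes
    return x

# 5-point centerline chain: node 0 (inlet, FFR = 1.0) and node 4 (FFR = 0.7) labeled
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
labels = np.array([1.0, 0.0, 0.0, 0.0, 0.7])
mask = np.array([True, False, False, False, True])
ffr = propagate_labels(A, labels, mask)
```

On a chain with clamped endpoints this averaging converges to a linear interpolation between the labeled values; the trained graph neural network replaces this fixed rule with learned propagation.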
  • Certain of the disclosed methods and systems have at least the following benefits. First, the disease prediction task is formulated as an interpolation problem on a graph under deep learning architectures that may rely on supervision from only a few ground truth values; nodes are associated with the points on the centerline of the tree, and edges are defined by the centerline.
  • anatomical tree structures have variations, but the disclosed graph neural networks can deal with such variations using dynamic graphs for individuals.
  • the disclosed dynamic graph neural network may learn how to propagate label information from labeled data points towards the unlabeled data points during the optimization process, so as to obtain a well-trained graph neural network regardless of the deficiency of labeled data points.
  • the disclosed system not only considers the points of the centerline independently but also embeds graph structure among all centerline points.
  • the disclosed framework can seamlessly integrate the information from the centerline points in the whole tree to make an accurate prediction with only limited labeled data.
  • with the spatially close neighbor nodes being considered during learning of the graph neural network, global considerations among nodes may be integrated into the training in such a way that relations between one node and surrounding nodes are considered together with hidden information.
  • FIG. 1 A is a schematic diagram illustrating scenarios with only one centerline point having ground-truth FFR value
  • FIG. 1 B is a schematic diagram illustrating scenarios with several centerline points having ground-truth FFR values
  • FIG. 1 C is a schematic diagram illustrating scenarios with multiple centerline points along one segment or path having ground-truth FFR values
  • FIG. 2 illustrates an overall framework for FFR (as an example of the disease quantification parameter) prediction with dynamic graph neural network according to an embodiment of the present disclosure
  • FIG. 3 illustrates a process of creating graph representation from Computed Tomography (CT) image including stage(a)-stage(d) according to an embodiment of the present disclosure
  • FIG. 4 A is a schematic diagram illustrating a graph convolution type of dynamic graph neural network according to an embodiment of the present disclosure
  • FIG. 4 B is a schematic diagram illustrating a gate type of dynamic graph neural network according to an embodiment of the present disclosure
  • FIG. 5 illustrates a flowchart of an example method for disease quantification modeling according to an embodiment of present disclosure
  • FIG. 6 depicts a schematic configuration diagram of disease quantification system according to an embodiment of present disclosure.
  • FIG. 7 depicts a block diagram illustrating an exemplary disease quantification system according to an embodiment of present disclosure.
  • the technical term “anatomical tree structure” may refer to vessels, airways, and the like having a tree structure.
  • the technical term “medical image” may refer to a complete image or an image patch cropped from a complete image in any form including the forms of two dimensional (2D), 2D plus depth, or three dimensional (3D).
  • FIGS. 1A, 1B, and 1C show several such scenarios.
  • In FIG. 1A, only one invasive FFR value, equal to 0.7, is measured in addition to an inlet FFR value equal to 1.
  • FIG. 1B illustrates a scenario where two invasive FFR values are measured in addition to an inlet FFR value equal to 1.
  • A scenario where multiple values are measured along a segment or path, obtained from invasive pressure-wire pull-back curves, is illustrated in FIG. 1C.
  • the present disclosure proposes to optimize a disease quantification model using learning-based methods with only limited FFR value(s) available.
  • the framework of an example method according to the present disclosure is illustrated in FIG. 2 and includes two phases: a training phase as an offline process and a prediction phase as an online process.
  • a database of annotated training data (training images) with ground truth values (as an example of labeled data) is assembled.
  • a graph representation algorithm may be adopted to automatically extract features from the sampled centerline points to create the graph structural representation for each training data.
  • the dynamic graph neural network may learn to transfer information (message passing) between the nodes with the ground truth values and the nodes without the ground truth values, which will be described below, to obtain a well-trained deep memory graph neural network.
  • the prediction phase, shown as online testing in FIG. 2, is completed online, whereby the disease quantification parameter (e.g., FFR) of the whole tree for unseen data can be calculated using the model learned in the offline training phase.
  • the method may perform the steps for each test image to predict disease quantification parameters along the anatomical tree structure.
  • the processor may extract a test centerline of the test anatomical tree structure.
  • the processor may generate a trained test graph neural network including a plurality of nodes based on the trained model as shown in FIG. 2 .
  • the trained test graph neural network may be generated based on a test graph where each test node corresponds to a test centerline point and edges between the nodes of the graph neural network are defined by the test centerline.
  • a test neural network unit for each test node may follow a graph-template setting of a neural network unit for each node of the trained graph neural network.
  • the term “a graph-template setting of a neural network unit” may refer to a network parameter setting definition with respect to all graphs.
  • the network parameters of the neural network unit may be set based on the local graph relationship of the corresponding centerline point using the network parameter setting definition.
  • the network parameter setting definition may define what network parameters of the neural network unit will be used confronting what local graph relationship of the corresponding centerline point.
  • a test disease related feature or a 2D/3D image patch may be extracted for each test centerline point. Disease quantification parameters along the test centerline may then be predicted based on the extracted disease related features or 2D/3D image patches by utilizing the trained test graph neural network.
  • the process as shown in FIG. 3 may be used to create graph representation from an image (e.g., CT image) as an input, including stages (a)-(d).
  • the image may be obtained from a training image database.
  • the tree structure in the image may be segmented, and a centerline can then be extracted for the image. Thereafter, it is possible to sample points from the centerline, as shown in stage (c). Then, at stage (d), features may be extracted from these points, which serve as the vertices (nodes) of the graph, to create a graph representation.
  • the features may be disease related features.
  • an initial artery segmentation (stage (a)) with a centerline (stage (b)) is first generated; either may be obtained automatically, semi-automatically, or manually.
  • points on the centerline are sampled (stage (c)) as the vertices (V) of the graph (G).
  • disease related features can be extracted, which may include but may not be limited to structural features, intensity features, other derived features, or the like.
  • the structural (geometric) features may include any of radius, area, stenosis, volume, length, curvature, etc.
  • Intensity features may include any one of intensity-related measurements, such as intensity statistic measurements (minimum, maximum, mean, etc.), gradients, etc.
  • the other derived features could be any feature derived based on the tree structures, intensity or even information related to other anatomic structures. For example, if FFR prediction is needed, such features could be pressure drops or resistance estimated using simplified equations.
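The per-point feature extraction above can be sketched as follows, under simplified assumptions (the reference radius is taken as the maximum along the segment, and intensity statistics use a small local window; all function names and values are hypothetical illustrations, not the patent's implementation):

```python
import numpy as np

def point_features(radii, intensities, window=2):
    """Build a feature vector per centerline point: radius, stenosis, area, intensity stats.

    radii:       (N,) lumen radius at each sampled centerline point
    intensities: (N,) image intensity sampled at each point
    """
    r_ref = radii.max()                      # crude healthy-reference radius
    feats = []
    for i in range(len(radii)):
        lo, hi = max(0, i - window), min(len(radii), i + window + 1)
        patch = intensities[lo:hi]           # local intensity window
        feats.append([
            radii[i],
            1.0 - radii[i] / r_ref,          # degree of stenosis in [0, 1)
            np.pi * radii[i] ** 2,           # cross-sectional area
            patch.mean(), patch.min(), patch.max(),
        ])
    return np.asarray(feats)

radii = np.array([2.0, 1.9, 1.0, 1.8, 1.9])        # narrowing at index 2
inten = np.array([300.0, 310.0, 250.0, 305.0, 300.0])
X = point_features(radii, inten)                    # (5, 6) feature matrix
```

Each row of `X` would then serve as the input embedding of the corresponding graph node.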
  • edges can refer to lines linking two or more nodes.
  • the edges may also be directed.
  • an undirected edge, which shows that there is a relation or association between two nodes, may be treated as two directed edges.
  • directed edges can bring more information than undirected edges.
  • Undirected edges may reflect an underlying anatomical structure, while directed edges may further show information such as flow direction.
  • the information can be propagated from the root of the tree to the terminals, and it can also be propagated in the opposite direction (e.g., from terminals to the root of the tree). Stated differently, the information propagation or passing between nodes of the tree structure can be implemented by considering both directions.
  • both the implicit representations (i.e. feature embedding) and explicit relationships (i.e. graph) may be fused for learning disease models of the whole anatomical tree structure.
  • structural information may be incorporated into disease quantification problem during implementation of the dynamic graph neural network to deal with variations of various anatomical tree structures using dynamic graphs for individuals.
  • each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline.
  • the input of each node may be a disease related feature or a cropped 2D/3D image patch for the corresponding centerline point, and an output of each node may be a disease quantification parameter.
  • the disease prediction task may be formulated as an interpolation problem on a graph under the deep learning architectures that involve supervision from only a few ground truth values.
  • FIG. 4 A illustrates a diagram showing the structure of a Graph Convolution Neural Network (GCN).
  • the goal of the GCN may include generalizing Convolutional Neural Network (CNN) architectures to non-Euclidean domains (for example, graphs).
  • the graph convolution may define convolutions directly on the graph, whereby a series of convolutional operations can be performed for each node taking a given node's spatially close neighbor nodes into account with graph convolution layers. In this way, global considerations among nodes may be integrated into the training such that relations between one node and surrounding nodes are considered together with hidden information.
  • the GCN can be a function of an input comprising two components: a node representation matrix X and an adjacency matrix indicating the edges among the nodes. The structure may be formally expressed as Z = f(X, A), where X ∈ R^(N×C), N is the node number, C is the dimension of the feature embedding, A is an adjacency matrix denoting whether there are edges between nodes, and Z is the output of the GCN.
  • the adjacency matrix A can be determined by the centerline. According to some embodiments, the adjacency matrix A may be fixed.
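A minimal one-layer sketch of this idea, in the spirit of common GCN formulations (mean aggregation over each node's neighborhood with self-loops, followed by a learned linear map and ReLU; the weights here are random placeholders, not trained parameters from the disclosure):

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer: average self + neighbor features, then linear + ReLU.

    X: (N, C) node feature embeddings    A: (N, N) adjacency    W: (C, F) weights
    """
    A_hat = A + np.eye(len(A))                    # add self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    H = D_inv * (A_hat @ X)                       # mean over each node's neighborhood
    return np.maximum(H @ W, 0.0)                 # ReLU nonlinearity

rng = np.random.default_rng(0)
N, C, F = 4, 3, 2
A = np.zeros((N, N))
for i in range(N - 1):                            # 4-node centerline chain
    A[i, i + 1] = A[i + 1, i] = 1.0
X = rng.normal(size=(N, C))
Z = gcn_layer(X, A, rng.normal(size=(C, F)))      # (4, 2) output embeddings
```

Stacking several such layers lets information from a labeled node reach nodes several hops away along the centerline.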
  • FIG. 4B shows a gate type of the dynamic graph neural network.
  • a gate mechanism like Gated Recurrent Unit (GRU) or Long Short Term Memory (LSTM) can also be incorporated into the propagation step to improve the long-term propagation of information across the graph structure.
  • a child node can be a node that is connected to a parent node by an edge with a directionality in the direction of the child node.
  • a node in an artery may be a parent node to a child node in an arteriole.
  • blood may flow from the node in the artery to the node in the arteriole, and consequently the node in the arteriole may be considered a “child” node, while the node in the artery may be considered a “parent node.”
  • the same principle can be applied to other tree structures including veins and lymphatic systems, among others.
  • the parent node can selectively incorporate information from each child node for dynamic optimization of parameters of the disease quantification model.
  • the gate may govern which information may be conveyed and what weight(s) may be set or adjusted.
  • each graph unit (could be a GRU or LSTM unit) contains input and output gates, a memory cell and hidden state. Instead of a single forget gate, each graph unit contains one forget gate for each child node, and the incoming edges indicate the child nodes of the node.
  • the message passing could be bottom-up, top-down, or in both directions; see FIG. 4B for an example of bottom-up message passing.
  • the graph unit could be any recurrent neural network (RNN) unit such as LSTM, GRU, convolutional LSTM (CLSTM) unit, convolutional GRU (CGRU) unit, etc.
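A sketch of one such graph unit with a separate forget gate per child, loosely following the Child-Sum Tree-LSTM formulation (the weight shapes and random parameters are illustrative assumptions, not the patent's trained model; bias terms are omitted for brevity):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tree_lstm_node(x, child_h, child_c, Wp):
    """Tree-structured LSTM unit with one forget gate per child node.

    x:       (C,) input feature of this node
    child_h: list of (H,) hidden states of the children (empty for leaves)
    child_c: list of (H,) memory cells of the children
    Wp:      dict of weight matrices 'i', 'o', 'u', 'f', each of shape (H, C + H)
    """
    H = Wp['i'].shape[0]
    h_sum = np.sum(child_h, axis=0) if child_h else np.zeros(H)
    z = np.concatenate([x, h_sum])
    i = sigmoid(Wp['i'] @ z)                 # input gate
    o = sigmoid(Wp['o'] @ z)                 # output gate
    u = np.tanh(Wp['u'] @ z)                 # candidate update
    c = i * u
    for hk, ck in zip(child_h, child_c):     # one forget gate per child
        f_k = sigmoid(Wp['f'] @ np.concatenate([x, hk]))
        c = c + f_k * ck                     # selectively keep each child's memory
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(1)
C, H = 3, 4
Wp = {k: rng.normal(scale=0.1, size=(H, C + H)) for k in 'iouf'}
# Bottom-up pass over a tiny tree: leaves 1 and 2 feed parent 0
h1, c1 = tree_lstm_node(rng.normal(size=C), [], [], Wp)
h2, c2 = tree_lstm_node(rng.normal(size=C), [], [], Wp)
h0, c0 = tree_lstm_node(rng.normal(size=C), [h1, h2], [c1, c2], Wp)
```

The per-child forget gates are what let a parent node selectively incorporate information from each child, as described above.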
  • the flowchart of an implementation of a method for disease quantification modeling of an anatomical tree structure is illustrated in FIG. 5.
  • the method may include obtaining a centerline of an anatomical tree structure (Step S 1 ).
  • the method includes, at Step S 2 , generating a graph neural network including a plurality of nodes based on a graph, where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter.
  • at Step S3, labeled data (e.g., training data with ground truth) of one or more nodes may be obtained.
  • the graph neural network may then be trained by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
  • the trained graph neural network has the optimized parameters (bias, weights, etc.) such that the optimal disease quantification model can be obtained.
  • the node number of the one or more nodes is less than a total number of the nodes in the graph neural network.
  • the node number of the one or more nodes may be much less than a total number of the nodes in the graph neural network.
  • although Step S1 and Step S3 are shown as individual steps in FIG. 5, they can also be integrated into one step.
  • the centerline may be extracted while obtaining labeled data of the one or more nodes, which correspond to the already labeled centerline points.
  • the one or more nodes include at least a first node at the inlet of the anatomical tree structure.
  • the one or more nodes include the first node at the inlet of the anatomical tree structure and 1-3 additional nodes.
  • only one additional node may be sufficient to train the graph neural network with the information passing mechanism.
  • a graph neural network with about 1000 nodes or fewer may be well trained based on only one or several labeled nodes.
  • the present disclosure does not intend to limit the number of the nodes, and any number of nodes is possible.
  • the disclosed dynamic graph neural network may learn how to propagate label information from labeled data towards the unlabeled data during the optimization, so as to obtain a well-trained graph neural network despite the deficiency of labeled data points.
  • the system 600 may include a disease quantification model training unit 602 and a disease quantification predicting unit 604 .
  • the disease quantification model training unit 602 can acquire training images with ground truth values from a training image database 601 to train a disease quantification model and, as a result, can output the trained disease quantification model to the disease quantification predicting unit 604.
  • the disease quantification predicting unit 604 may be communicatively coupled to a medical image database 606 , and then may predict result(s) of disease quantification.
  • training of the graph neural network may be performed by using gradient based methods, for example.
  • the parameters of the graph neural network can be optimized by minimizing the objective function over the set of nodes during offline training. With only limited labeled data measured, the gradients and/or errors of the set of nodes can be transferred to the other nodes of the graph network through a back-propagation approach for message or information passing. Thus, the structural information of the graph may be considered for a robust model.
  • the disclosed architecture can, in certain embodiments, seamlessly integrate the information from the centerline points in the whole tree for more accurate prediction with only limited labeled data available.
  • the objective function may be the mean square error over the set of nodes.
  • the objective function may be the weighted mean square error over the set of nodes.
  • the objective function may be defined by one skilled in the art as desired without departing from the spirit of the disclosure.
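A sketch of such a masked objective, computing the (optionally weighted) mean square error over only the labeled nodes so that training gradients originate from the few measured points (the node values below are hypothetical):

```python
import numpy as np

def masked_mse(pred, target, mask, weights=None):
    """(Weighted) mean square error over the labeled nodes only.

    pred, target: (N,) predictions / ground truth for all graph nodes
    mask:         (N,) boolean, True where a ground-truth label exists
    weights:      optional (N,) per-node weights
    """
    err2 = (pred - target) ** 2
    w = np.ones_like(pred) if weights is None else weights
    # Only labeled entries contribute; unlabeled nodes receive no direct loss,
    # so their gradients arrive via the graph's message passing instead.
    return (w * err2)[mask].sum() / w[mask].sum()

pred = np.array([0.98, 0.90, 0.80, 0.72])
gt = np.array([1.0, 0.0, 0.0, 0.7])       # only nodes 0 and 3 are actually labeled
mask = np.array([True, False, False, True])
loss = masked_mse(pred, gt, mask)
```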
  • the disease quantification predicting unit 604 may be communicatively coupled to the training image database 601 via network 605 . In this manner, the predicted result of disease quantification obtained by the disease quantification predicting unit 604 , upon confirmation by the radiologist or the clinician, may be fed back to the training image database 601 as a training sample for future use. In this way, the training image database 601 may be progressively expanded, leading to better prediction results and improved model accuracy.
  • the system may include a centerline generation device 700 , an image acquisition device 701 and a disease quantification modeling device 702 , for example.
  • the system may include only a disease quantification modeling device 702 .
  • the image acquisition device 701 may acquire and output an image by any type of imaging modalities, such as but not limited to CT, digital subtraction angiography (DSA), Magnetic Resonance imaging (MRI), functional MRI, dynamic contrast enhanced MRI, diffusion MRI, spiral CT, cone beam computed tomography (CBCT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), X-ray, optical tomography, fluorescence imaging, ultrasound imaging, radiotherapy portal imaging, and the like.
  • the centerline generation device 700 is communicatively connected to the image acquisition device 701 and the disease quantification modeling device 702 . According to an embodiment, the centerline generation device 700 may obtain the image directly or indirectly from the image acquisition device 701 , perform tree segmentation on the image, and then extract a centerline of the image, as illustrated in stages (a) and (b) of FIG. 3 .
  • the disease quantification modeling device 702 may be a dedicated computer or a general-purpose computer.
  • the disease quantification modeling device 702 may be a hospital-customized computer for performing image acquisition and image processing tasks, for example, or a server in the cloud.
  • the disease quantification modeling device 702 may include a communication interface 703 , a processor 706 , a memory 705 , a storage device 704 , a bus 707 , and an input/output device 708 .
  • the communication interface 703 , the processor 706 , the memory 705 , the storage device 704 and the input/output device 708 may be connected to and communicate with one another.
  • the communication interface 703 may include a network adapter, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adapter (such as optical fiber, USB 3.0, Thunderbolt, or the like), a wireless network adapter (such as a WiFi adapter), or a telecommunication adapter (3G, 4G/LTE, 5G, 6G, and beyond).
  • the disease quantification modeling device 702 may be connected to the centerline generation device 700 , the image acquisition device 701 and other components. In some embodiments, the disease quantification modeling device 702 may receive the generated centerline from the centerline generation device 700 and medical image (e.g., a sequence of images of vessel) from the image acquisition device 701 via the communication interface 703 .
  • the memory 705 /storage device 704 may be a non-transitory computer-readable medium or machine-readable medium such as read only memory (ROM), random access memory (RAM), a phase change random-access memory (PRAM), a dynamic random-access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM, a static random-access memory (SRAM), an Electrically-Erasable Programmable Read-Only Memory (EEPROM), flash memory, compact disc read-only memory (CD-ROM), digital versatile disk (DVD), magnetic storage device, etc., on which information or instructions which can be accessed and executed by a computer are stored in any format.
  • the trained graph neural network and model-related data may be stored in the storage device 704 .
  • the memory 705 may store computer-executable instructions, which, when executed by the processor 706 , may perform the method for disease quantification modeling including the steps of: obtaining a centerline of an anatomical tree structure; generating a graph neural network including a plurality of nodes based on a graph where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter; obtaining labeled data of one or more nodes, the number of which is less than a total number of the nodes in the graph neural network; and training the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
  • the computer-executable instructions, when executed by the processor 706 , may perform the steps of predicting disease quantification parameters for each test image. Particularly, the computer-executable instructions, when executed by the processor 706 , may extract a test centerline of the test anatomical tree structure, generate a trained test graph neural network, extract a test disease related feature or a 2D/3D image patch for each test centerline point corresponding to a test node, and predict the disease quantification parameters along the test centerline based on the extracted disease related features or 2D/3D image patches by utilizing the trained test graph neural network.
  • the trained test graph neural network may be based on a test graph where each test node corresponds to a test centerline point and edges between the nodes of the graph neural network are defined by the test centerline.
  • a test neural network unit for each test node may follow a graph-template setting of a neural network unit for each node of the trained graph neural network.
  • the processor 706 may be a single-core or multi-core processing device that includes one or more general processing devices, such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), and the like. More specifically, the processor 706 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor that runs a combination of instruction sets. The processor 706 may also be one or more dedicated processing devices such as application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), digital signal processors (DSPs), system-on-chip (SoC), and the like.
  • the processor 706 may be communicatively coupled to the memory 705 , and may be configured to obtain a centerline of an anatomical tree structure, to generate a graph neural network including a plurality of nodes based on a graph where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter, to obtain labeled data of one or more nodes, the number of which is less than a total number of the nodes in the graph neural network (for example, the one or more nodes with labeled data may be a subset of the nodes of the graph neural network), and to train the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
  • the processor 706 may also be configured to train the graph neural network using gradient-based methods as follows: to optimize the parameters of the graph neural network by minimizing the objective function of the set of nodes; and to transfer the gradients and/or errors of the set of nodes to the other nodes.
  • the input/output device 708 may be any input and output device such as keyboard, mouse, printer, display, scanner, touch panel, via which an operator may interface with the computer.
  • prediction result may be output from the input/output device 708 for presentation to a user such as clinician, patient, etc.
  • Various operations or functions are described herein, which may be implemented or defined as software code or instructions. Such content may be source code or differential code (“delta” or “patch” code) that can be executed directly (“object” or “executable” form).
  • the software code or instructions may be stored in computer readable storage medium, and when executed, may cause a machine to perform the described functions or operations and include any mechanism for storing information in the form accessible by a machine (e.g., computing device, electronic system, etc.), such as recordable or non-recordable media.


Abstract

A method and system can be used for disease quantification modeling of an anatomical tree structure. The method may include obtaining a centerline of an anatomical tree structure and generating a graph neural network including a plurality of nodes based on a graph. Each node corresponds to a centerline point and edges are defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter. The method also includes obtaining labeled data of one or more nodes, the number of which is less than a total number of the nodes in the graph neural network. Further, the method includes training the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation of application Ser. No. 16/906,936 filed Jun. 19, 2020, which claims the benefit of priority to U.S. Provisional Application No. 62/863,472, filed on Jun. 19, 2019, the entire contents of both of which are incorporated herein by reference.
  • TECHNICAL FIELD
  • The disclosure relates to medical image processing and analysis and considers the learning problem of disease quantification for anatomical tree structures (e.g., vessels, airway trees or the like), using labeled data (such as ground truth values) available for training.
  • BACKGROUND
  • Accurate disease quantification of anatomical tree structures is useful for precise diagnosis. For example, it has been shown that Fractional Flow Reserve (FFR) is a reliable index for the assessment of cardiac ischemia. FFR can be measured by pressure wire. However, pressure wire measurement is invasive, and only one or several values may be measured in the whole tree because of the level of invasiveness. Attempts have been made to estimate FFR using learning-based methods. Such learning-based FFR estimation is fundamentally a low-data problem, since the ground truth measurements are provided at only one or a few locations. FIG. 1 shows several scenarios with ground-truth FFR values: one point (FIG. 1A), several isolated points (FIG. 1B), or values along one segment (FIG. 1C). With only a small number of invasive values (measured by pressure wire) available for a training process, it is challenging to provide accurate predictions for the whole coronary artery tree. Existing machine-learning-based methods rely on simulated FFR values as ground truth for training the model. However, the simulated FFR values are usually calculated by numeric flow simulation methods, which are time-consuming and too inaccurate for training the machine learning model. Thus, the performance of conventional machine-learning-based methods is highly restricted by the simulation methods.
  • SUMMARY
  • The present disclosure is provided to, among other things, overcome the drawbacks of conventional methods for disease quantification modeling of an anatomical tree structure with a learning network. Instead of using simulated FFR values as the ground truth for training the FFR model, a goal of certain embodiments of the present disclosure is to train the FFR model directly with the measured invasive FFRs. Compared with other values (calculated by algorithms) used as ground truths, the measured invasive FFRs are the most accurate values for training the model.
  • In one aspect, a computer implemented method for disease quantification modeling of an anatomical tree structure is provided. The method may include the following steps for each training image to perform the corresponding training/learning. The method may include obtaining a centerline of an anatomical tree structure from the training image. The method may also include generating, by a processor, a graph neural network including a plurality of nodes based on a graph. Each node of the graph neural network may correspond to a centerline point and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter. Further, the method may include obtaining labeled data of one or more nodes, the number of which may be less than a total number of the nodes in the graph neural network. Still further, the method may include training, by the processor, the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
  • In another aspect, a system for disease quantification modeling of an anatomical tree structure is provided. The system may include an interface and a processor. The interface may be configured to receive training images containing the anatomical tree structure. The processor may be configured to perform the following steps for each training image. The processor may be configured to obtain a centerline of the anatomical tree structure. The processor may be further configured to generate a graph neural network including a plurality of nodes based on a graph. Each node may correspond to a centerline point and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter. The processor may be further configured to obtain labeled data of one or more nodes, the number of which may be less than a total number of the nodes in the graph neural network. Moreover, the processor is configured to train the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
  • In a further aspect, a non-transitory computer readable medium is provided, storing instructions that, when executed by a processor, perform a method for disease quantification modeling of an anatomical tree structure. The method may include obtaining a centerline of an anatomical tree structure. The method may further include generating a graph neural network including a plurality of nodes based on a graph. Each node may correspond to a centerline point and edges between the nodes of the graph neural network may be defined by the centerline, with an input of each node being a disease related feature or an image patch for the corresponding centerline point and an output of each node being a disease quantification parameter. The method may further include obtaining labeled data of one or more nodes (e.g., from the training image), the number of which may be less than a total number of the nodes in the graph neural network. Moreover, the method may include training the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
  • Use of graph neural networks has demonstrated that, in some circumstances, predictions may be performed from only one or a few data points (such as nodes). The method of certain embodiments of the disclosure may propagate information from labeled data points towards the unlabeled data points. Certain embodiments of the disclosure may use both the implicit representations (such as feature embeddings) and explicit relationships (such as the graph) for learning disease models of the whole anatomical tree structure. The disclosed method builds a graph where each node corresponds to a point on a centerline of the tree structure. These nodes may be linked via the centerline. The input to each node of the graph may be a vector representation (also referred to as a feature embedding) of that node. The disclosed method then generates and uses a dynamic graph neural network to transfer information (message passing) between the nodes with ground truth values (for example, invasive FFR values) and the nodes without ground truth values. Certain of the disclosed methods and systems have at least the following benefits. Firstly, the disease prediction task is formulated as an interpolation problem on a graph under deep learning architectures that may rely on supervision from only a few ground truth values, where nodes are associated with the points on the centerline of the tree and edges are defined by the centerline. Secondly, anatomical tree structures have variations, but the disclosed graph neural networks can deal with such variations using dynamic graphs for individuals. Thirdly, the disclosed dynamic graph neural network may learn how to propagate label information from labeled data points towards the unlabeled data points during the optimization process, so as to obtain a well-trained graph neural network despite the deficiency of labeled data points.
  • Moreover, in contrast to the conventional methods, the disclosed system not only considers the points of the centerline independently but also embeds graph structure among all centerline points. With the information propagation of the nodes in the deep memory graph nets, the disclosed framework can seamlessly integrate the information from the centerline points in the whole tree to make an accurate prediction with only limited labeled data. With the spatially close neighbor nodes being considered during learning of the graph neural network, global considerations among nodes may be integrated into the training in such a way that relations between one node and surrounding nodes are considered together with hidden information.
  • It is to be understood that the preceding general description and the following detailed description are exemplary and explanatory only, and are not restrictive of the invention, as claimed. A given embodiment may provide one, two, more, or all the preceding advantages, and/or may provide other advantages as will become apparent to one of ordinary skill in the art upon reading and understanding the present disclosure.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having letter suffixes or different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments, and together with the description and claims, serve to explain the disclosed embodiments. Such embodiments are demonstrative and not intended to be exhaustive or exclusive embodiments of the present method, system, or non-transitory computer readable medium having instructions thereon for implementing the method.
  • FIG. 1A is a schematic diagram illustrating scenarios with only one centerline point having ground-truth FFR value;
  • FIG. 1B is a schematic diagram illustrating scenarios with several centerline points having ground-truth FFR values;
  • FIG. 1C is a schematic diagram illustrating scenarios with multiple centerline points along one segment or path having ground-truth FFR values;
  • FIG. 2 illustrates an overall framework for FFR (as an example of the disease quantification parameter) prediction with dynamic graph neural network according to an embodiment of the present disclosure;
  • FIG. 3 illustrates a process of creating graph representation from Computed Tomography (CT) image including stage(a)-stage(d) according to an embodiment of the present disclosure;
  • FIG. 4A is a schematic diagram illustrating a graph convolution type of dynamic graph neural network according to an embodiment of the present disclosure;
  • FIG. 4B is a schematic diagram illustrating a gate type of dynamic graph neural network according to an embodiment of the present disclosure;
  • FIG. 5 illustrates a flowchart of an example method for disease quantification modeling according to an embodiment of present disclosure;
  • FIG. 6 depicts a schematic configuration diagram of disease quantification system according to an embodiment of present disclosure; and
  • FIG. 7 depicts a block diagram illustrating an exemplary disease quantification system according to an embodiment of present disclosure.
  • DETAILED DESCRIPTION
  • For the purposes of facilitating an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It is nevertheless understood that no limitation of the scope of the disclosure is intended. Any alterations and further modifications to the described devices, systems, and methods, and any further application of the principles of the present disclosure are fully contemplated and included within the present disclosure as would normally occur to one skilled in the art to which the disclosure pertains. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one embodiment may be combined with the features, components, and/or steps described with respect to other embodiments of the present disclosure. The order of the steps of the method is not limited to the order described or shown. According to the disclosure, the order of steps may be varied according to actual requirements without departing from the gist of the disclosure. For the sake of brevity, however, the numerous iterations of these combinations will not be described separately herein.
  • Hereinafter, the technical term “anatomical tree structure” may refer to vessels, airways, and the like with tree structure. The technical term “medical image” may refer to a complete image or an image patch cropped from a complete image in any form including the forms of two dimensional (2D), 2D plus depth, or three dimensional (3D). Although FFR, which is a reliable index for assessment of cardiac ischemia, is mentioned above for describing the process, no limitation to the FFR is intended according to the present disclosure. Instead, the present disclosure is applicable to any disease quantification parameter of any anatomical tree structure.
  • As described above, because the procedure of measuring FFR by pressure wire is invasive, typically only a few invasive values are measured in the whole tree. In some embodiments, only one FFR value may be measured at the inlet. In alternative embodiments, several other scenarios are possible, as shown in FIGS. 1A, 1B, and 1C. As shown in FIG. 1A, only one invasive FFR value equal to 0.7 is measured in addition to an inlet FFR value equal to 1. As another scenario, FIG. 1B illustrates a case where two invasive FFR values are measured in addition to an inlet FFR value equal to 1. A scenario where multiple values are measured along a segment or path, obtained using pull-back curves with a pressure wire (which is invasive), is illustrated in FIG. 1C. The present disclosure proposes to optimize a disease quantification model using learning-based methods with only limited FFR value(s) available.
  • The framework of an example method according to the present disclosure is illustrated in FIG. 2 , including two phases: a training phase as an offline process and a prediction phase as an online process. During the offline training, a database of annotated training data (training images) with ground truth values (as an example of labeled data) is assembled. A graph representation algorithm may be adopted to automatically extract features from the sampled centerline points to create the graph structural representation for each training data. In particular, the dynamic graph neural network may learn to transfer information (message passing) between the nodes with the ground truth values and the nodes without the ground truth values, which will be described below, to obtain a well-trained deep memory graph neural network.
  • The prediction phase, shown as online testing in FIG. 2 , is performed online, whereby the disease quantification parameter (e.g., FFR) of the whole tree for unseen data can be calculated using the model learned in the offline training phase. In some embodiments, the method may perform the following steps for each test image to predict disease quantification parameters along the anatomical tree structure. Upon receiving a test image containing a test anatomical tree structure (the term “test anatomical tree structure” is used to distinguish it from the anatomical tree structure contained in the training image), the processor may extract a test centerline of the test anatomical tree structure. The processor may generate a trained test graph neural network including a plurality of nodes based on the trained model as shown in FIG. 2 . In some embodiments, the trained test graph neural network may be generated based on a test graph where each test node corresponds to a test centerline point and edges between the nodes of the graph neural network are defined by the test centerline. A test neural network unit for each test node may follow a graph-template setting of a neural network unit for each node of the trained graph neural network. The term “graph-template setting of a neural network unit” may refer to a network parameter setting definition with respect to all graphs. Particularly, the network parameters of the neural network unit may be set based on the local graph relationship of the corresponding centerline point using the network parameter setting definition. The network parameter setting definition may define which network parameters of the neural network unit will be used for which local graph relationship of the corresponding centerline point. A test disease related feature or a 2D/3D image patch may be extracted for each test centerline point.
Disease quantification parameters along the test centerline may then be predicted based on the extracted disease related features or 2D/3D image patches by utilizing the trained test graph neural network.
  • In some embodiments, the process shown in FIG. 3 , including stages (a)-(d), may be used to create a graph representation from an image (e.g., a CT image) as an input. At stage (a), the image may be obtained from a training image database. At stage (b), the tree structure in the image may be segmented, and thus a centerline can be extracted from the image. Thereafter, points may be sampled from the centerline, as shown in stage (c). Then, at stage (d), features may be extracted from these points as the vertices (nodes) of the graph to create a graph representation. In particular, the features may be disease related features.
  • For example, an initial artery segmentation (stage (a)) with a centerline (stage (b)) is first generated, which could be obtained automatically, semi-automatically, or manually. Secondly, points on the centerline are sampled (stage (c)) as the vertices (V) of the graph (G). For each sampled point on the centerline, disease related features can be extracted, which may include, but are not limited to, structural features, intensity features, other derived features, or the like. As examples of structural features, geometric features may include radius, area, stenosis, volume, length, curvature, etc. Intensity features may include intensity-related measurements, such as intensity statistic measurements (minimum, maximum, mean, etc.), gradients, etc. The other derived features could be any feature derived based on the tree structure, intensity, or even information related to other anatomic structures. For example, if FFR prediction is needed, such features could be pressure drops or resistances estimated using simplified equations.
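The per-point feature extraction at stage (d) might be sketched as follows, assembling a small feature vector (radius plus intensity statistics) for each sampled centerline point. The particular feature choice and all names are illustrative assumptions; a real system would add the other structural and derived features listed above.

```python
import numpy as np

def node_features(radii, intensities):
    """Assemble a simple per-node feature vector from centerline samples.

    radii:       (N,) lumen radius at each sampled centerline point.
    intensities: (N, K) image intensities sampled around each point.
    Returns an (N, 4) feature matrix: radius, min/max/mean intensity.
    """
    return np.stack([
        radii,                      # structural feature
        intensities.min(axis=1),    # intensity statistics
        intensities.max(axis=1),
        intensities.mean(axis=1),
    ], axis=1)

# Three sampled centerline points with two intensity samples each.
radii = np.array([2.0, 1.8, 1.2])
intens = np.array([[100., 110.], [90., 95.], [80., 82.]])
X = node_features(radii, intens)   # shape (3, 4)
```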
  • As can be seen from an example of the architecture at stage (d) of FIG. 3 , points on the centerline are linked by edges, which may be undirected. Thus, in the context of certain embodiments of the present disclosure, the technical term “edges” can refer to lines linking two or more nodes. In some embodiments, the edges may also be directed. In particular, an undirected edge may be treated as two directed edges, showing that there is a relation or association between the two nodes. Generally, directed edges can carry more information than undirected edges. Undirected edges may reflect an underlying anatomical structure, while directed edges may further convey information such as flow direction.
  • In some embodiments, the information can be propagated from the root of the tree to the terminals, and it can also be propagated in the opposite direction (e.g., from the terminals to the root of the tree). Stated differently, the information propagation or passing between nodes of the tree structure can be implemented by considering both directions.
  • According to the present disclosure, the tree T is associated with a graph GT=(V,E), where nodes vi∈V correspond to the feature vectors or embeddings of the points on the centerline (both with ground truth values and unknown values), and edges ei∈E correspond to directed or undirected edges between the points. According to the present disclosure, both the implicit representations (i.e., feature embeddings) and explicit relationships (i.e., the graph) may be fused for learning disease models of the whole anatomical tree structure. According to some embodiments, structural information may be incorporated into the disease quantification problem during implementation of the dynamic graph neural network to deal with variations among anatomical tree structures, using dynamic graphs for individuals. According to some embodiments, each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline. The input of each node may be a disease related feature or a cropped 2D/3D image patch for the corresponding centerline point, and an output of each node may be a disease quantification parameter. According to the disclosure, the disease prediction task may be formulated as an interpolation problem on a graph under deep learning architectures that involve supervision from only a few ground truth values.
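The graph GT=(V,E) can be represented compactly by an adjacency matrix built from the centerline edges. A minimal sketch, with illustrative names, treating each undirected edge as two directed edges as noted above:

```python
import numpy as np

def adjacency_from_centerline(n_nodes, edges, undirected=True):
    """Build the adjacency matrix A of the graph G_T = (V, E).

    edges: list of (parent, child) index pairs along the centerline.
    With undirected=True each edge is entered in both directions,
    mirroring the note that an undirected edge acts as two directed ones.
    """
    A = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        A[i, j] = 1.0
        if undirected:
            A[j, i] = 1.0
    return A

# A tiny bifurcating tree: root 0 -> 1, then node 1 -> 2 and 1 -> 3.
A = adjacency_from_centerline(4, [(0, 1), (1, 2), (1, 3)])
```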
  • FIG. 4A illustrates a diagram showing the structure of a Graph Convolutional Neural Network (GCN). The goal of a GCN may include generalization of Convolutional Neural Network (CNN) architectures to non-Euclidean domains (for example, graphs). As shown in FIG. 4A, the graph convolution may define convolutions directly on the graph, whereby a series of convolutional operations can be performed for each node, taking the node's spatially close neighbor nodes into account with graph convolution layers. In this way, global considerations among nodes may be integrated into the training, such that relations between one node and surrounding nodes are considered together with hidden information.
  • The GCN can be a function of an input which may include two components: a node representation X and an adjacency matrix indicating the edges among the nodes, which may be formally expressed as:

  • Z = GCN(X, A),
  • where X∈R^(N×C) is the node representation matrix, N is the number of nodes, C is the dimension of the feature embedding, A is an adjacency matrix denoting whether there are edges between nodes, and Z is the output of the GCN. According to the present disclosure, the adjacency matrix A can be determined by the centerline. According to some embodiments, the adjacency matrix A may be fixed.
  • Other common methods applicable in CNNs can also be used in GCNs, such as skip connections or attention mechanisms.
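As one possible concretization of Z = GCN(X, A), the snippet below sketches a single graph convolution layer using symmetric normalization with self-loops in the style of Kipf and Welling. The specific normalization and the ReLU activation are illustrative choices, not mandated by the disclosure, and the function and variable names are assumptions for this sketch.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph convolution layer: Z = ReLU(D^-1/2 (A + I) D^-1/2 X W).

    X : (N, C) node representation, A : (N, N) adjacency matrix,
    W : (C, F) learnable weights.  Adding self-loops (A + I) lets each
    node keep its own features; the symmetric normalization averages
    contributions over spatially close neighbor nodes.
    """
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    Z = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    return np.maximum(Z, 0.0)  # ReLU nonlinearity

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
X = rng.normal(size=(3, 4))   # C = 4 input features per node
W = rng.normal(size=(4, 2))   # F = 2 output features per node
Z = gcn_layer(X, A, W)
print(Z.shape)
```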
  • FIG. 4B shows a gated type of dynamic graph neural network. A gate mechanism such as a Gated Recurrent Unit (GRU) or Long Short Term Memory (LSTM) can also be incorporated into the propagation step to improve the long-term propagation of information across the graph structure. In a directed tree structure, a child node can be a node that is connected to a parent node by an edge whose directionality points toward the child node. Thus, along an arterial system, a node in an artery may be a parent node to a child node in an arteriole. In such a system, blood may flow from the node in the artery to the node in the arteriole, and consequently the node in the arteriole may be considered a "child" node, while the node in the artery may be considered a "parent" node. The same principle can be applied to other tree structures, including veins and lymphatic systems, among others. For example, if the edges of the graph are directional, by using a gate mechanism, the parent node can selectively incorporate information from each child node for dynamic optimization of parameters of the disease quantification model. As an example, the gate may govern which information is conveyed and what weight(s) may be set or adjusted. More particularly, each graph unit (which could be a GRU or LSTM unit) contains input and output gates, a memory cell, and a hidden state. Instead of a single forget gate, each graph unit contains one forget gate for each child node, where the incoming edges indicate the child nodes of the node. The message passing could be bottom-up, top-down, or in both directions; see FIG. 4B for an example of bottom-up message passing. The graph unit could be any recurrent neural network (RNN) unit, such as an LSTM unit, a GRU, a convolutional LSTM (CLSTM) unit, a convolutional GRU (CGRU) unit, etc.
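The "one forget gate per child node" idea can be sketched as a child-sum tree-structured LSTM step for a single node in a bottom-up pass. The parameterization below (the weight-matrix dictionary `P` and all variable names) is an assumed, illustrative layout rather than the disclosed unit; the essential point it demonstrates is that the input and output gates see the sum of the child hidden states, while a separate forget gate is computed for each child, so the unit can selectively keep or drop information from each branch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def child_sum_tree_lstm(x, child_h, child_c, P):
    """One child-sum Tree-LSTM step for a single node (bottom-up pass).

    x        : (D,) input embedding of this centerline point
    child_h  : (K, H) hidden states of the K child nodes
    child_c  : (K, H) memory cells of the K child nodes
    P        : dict of weight matrices (an assumed parameterization)
    """
    h_sum = child_h.sum(axis=0)
    i = sigmoid(P["Wi"] @ x + P["Ui"] @ h_sum)      # input gate
    o = sigmoid(P["Wo"] @ x + P["Uo"] @ h_sum)      # output gate
    u = np.tanh(P["Wu"] @ x + P["Uu"] @ h_sum)      # candidate memory
    f = sigmoid(P["Wf"] @ x + child_h @ P["Uf"].T)  # one forget gate per child, (K, H)
    c = i * u + (f * child_c).sum(axis=0)           # selectively merge child memories
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(1)
D, H, K = 4, 3, 2   # input dim, hidden dim, number of child nodes
P = {k: rng.normal(scale=0.1, size=(H, D) if k.startswith("W") else (H, H))
     for k in ["Wi", "Ui", "Wo", "Uo", "Wu", "Uu", "Wf", "Uf"]}
h, c = child_sum_tree_lstm(rng.normal(size=D),
                           rng.normal(size=(K, H)), rng.normal(size=(K, H)), P)
print(h.shape, c.shape)
```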
  • The flowchart of an implementation of a method for disease quantification modeling of an anatomical tree structure is illustrated in FIG. 5 . The method may include obtaining a centerline of an anatomical tree structure (Step S1). The method includes, at Step S2, generating a graph neural network including a plurality of nodes based on a graph, where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter. After the graph neural network is generated, labeled data (e.g., training data with ground truth) of one or more nodes may be obtained, at Step S3. Then, at Step S4, the graph neural network may be trained by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes. The trained graph neural network has optimized parameters (biases, weights, etc.) such that the optimal disease quantification model can be obtained. Particularly, the number of the one or more nodes is less than the total number of nodes in the graph neural network. According to an embodiment, the number of the one or more nodes may be much less than the total number of nodes in the graph neural network. Although Step S1 and Step S3 are shown as individual steps in FIG. 5 , they can also be integrated into one step. As an example, when a training image labeled with several ground truth values along its centerline is received, the centerline may be extracted while the labeled data of the one or more nodes, which correspond to the already labeled centerline points, is obtained.
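Steps S2-S4 can be sketched end-to-end for a deliberately simplified model: a linear, one-layer graph propagation (the row-normalized rule, the function name, and the toy data below are illustrative assumptions, not the disclosed network). The point the sketch makes is that a loss computed only on the few labeled nodes still constrains weights shared by every node, so information is effectively transferred from the labeled nodes to the unlabeled ones.

```python
import numpy as np

def train_masked(X, A, y, labeled, lr=0.1, steps=200):
    """Steps S2-S4 for a simplified linear graph model.

    A single propagation (S X) w with S = rownorm(A + I) mixes each
    node's features with its neighbors' (Step S2), so minimizing a loss
    masked to the labeled nodes (Step S3) updates weights used at every
    node (Step S4).
    """
    S = A + np.eye(A.shape[0])               # add self-loops
    S = S / S.sum(axis=1, keepdims=True)     # row-normalized propagation
    Xp = S @ X                               # neighbor-aggregated features
    w = np.zeros(X.shape[1])
    m = np.zeros(len(y)); m[labeled] = 1.0   # mask: 1 on labeled nodes only
    for _ in range(steps):
        r = m * (Xp @ w - y)                 # residual on labeled nodes
        w -= lr * 2.0 * Xp.T @ r / m.sum()   # gradient step on masked MSE
    return w, Xp @ w                         # learned weights, predictions at all nodes

# 4-node chain 0-1-2-3; ground truth is known only at nodes 0 and 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0], [0.8, 0.2], [0.2, 0.8], [0.0, 1.0]])
y = np.array([1.0, 0.0, 0.0, 3.0])           # zeros at nodes 1, 2 are ignored by the mask
w, pred = train_masked(X, A, y, labeled=[0, 3])
print(pred)
```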
  • According to embodiments of the disclosure, the one or more nodes include at least a first node at the inlet of the anatomical tree structure. Alternatively, the one or more nodes include the first node at the inlet of the anatomical tree structure and 1-3 additional nodes. According to an embodiment, only one additional node may be sufficient to train the graph neural network with the information passing mechanism. According to some embodiments, a graph neural network with fewer than, or even about, 1000 nodes may be well trained based on only one labeled node or several labeled nodes. The present disclosure does not intend to limit the number of nodes, and any number of nodes is possible. As a result, the disclosed dynamic graph neural network may learn how to propagate label information from the labeled data toward the unlabeled data during the optimization, so as to obtain a well-trained graph neural network despite the scarcity of labeled data points.
  • The training and prediction phases for disease quantification modeling will be described in detail with reference to FIG. 6 , which illustrates an outline of implementations of a disease quantification system 600. As shown, the system 600 may include a disease quantification model training unit 602 and a disease quantification predicting unit 604. The disease quantification model training unit 602 can acquire training images with ground truth values from a training image database 601 to train a disease quantification model, and as a result, can output the trained disease quantification model to the disease quantification predicting unit 604. The disease quantification predicting unit 604 may be communicatively coupled to a medical image database 606, and may then predict result(s) of disease quantification.
  • According to certain embodiments of the disclosure, training of the graph neural network may be performed by using gradient based methods, for example. In an implementation, the parameters of the graph neural network can be optimized by minimizing the objective function over the set of labeled nodes during offline training. With only limited labeled data measured, the gradients and/or errors of the set of nodes can be transferred to the other nodes of the graph network through a back propagation approach for message or information passing. Thus, the structural information of the graph may be considered for a robust model. The disclosed architecture can, in certain embodiments, seamlessly integrate the information from the centerline points in the whole tree for more accurate prediction with only limited labeled data available. According to various embodiments, the objective function may be the mean squared error of the set of nodes. Alternatively, the objective function may be a weighted mean squared error of the set of nodes. In other words, the objective function may be defined by one skilled in the art as desired without departing from the spirit of the disclosure.
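A minimal sketch of the objective functions mentioned above is given below: a weighted mean squared error over the labeled nodes, with the plain mean squared error recovered as the equal-weights special case. The function name and the example weighting values are illustrative assumptions; the disclosure leaves the choice of weighting to the practitioner.

```python
import numpy as np

def weighted_mse(pred, target, weights):
    """Weighted mean squared error over the set of labeled nodes.

    `weights` may, e.g., emphasize nodes near the inlet (an assumed
    weighting scheme); setting all weights equal recovers the plain
    mean squared error.
    """
    w = np.asarray(weights, dtype=float)
    err = (np.asarray(pred) - np.asarray(target)) ** 2
    return float((w * err).sum() / w.sum())

# Two labeled nodes; the first is weighted twice as heavily as the second.
loss = weighted_mse([1.2, 2.5], [1.0, 3.0], [2.0, 1.0])
print(loss)
```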
  • In some embodiments, the disease quantification predicting unit 604 may be communicatively coupled to the training image database 601 via a network 605. In this manner, the predicted result of disease quantification obtained by the disease quantification predicting unit 604, upon confirmation by the radiologist or the clinician, may be fed back as a training sample to the training image database 601 for future use. In this way, the training image database 601 may be augmented in scale, in favor of better prediction results and improved model accuracy.
  • A block diagram illustrating an exemplary disease quantification system according to an embodiment of the present disclosure is described below with reference to FIG. 7 . In some embodiments, the system may include a centerline generation device 700, an image acquisition device 701, and a disease quantification modeling device 702, for example. In other embodiments, the system may include only the disease quantification modeling device 702.
  • In some embodiments, the image acquisition device 701 may acquire and output an image by any type of imaging modality, such as but not limited to CT, digital subtraction angiography (DSA), Magnetic Resonance Imaging (MRI), functional MRI, dynamic contrast enhanced MRI, diffusion MRI, spiral CT, cone beam computed tomography (CBCT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), X-ray, optical tomography, fluorescence imaging, ultrasound imaging, radiotherapy portal imaging, and the like.
  • In some embodiments, the centerline generation device 700 is communicatively connected to the image acquisition device 701 and the disease quantification modeling device 702. According to an embodiment, the centerline generation device 700 may obtain the image directly or indirectly from the image acquisition device 701, conduct tree segmentation on the image, and then extract a centerline of the image, as illustrated in stages (a) and (b) of FIG. 3 .
  • In some embodiments, the disease quantification modeling device 702 may be a dedicated computer or a general-purpose computer. The disease quantification modeling device 702 may be a hospital-customized computer for performing image acquisition and image processing tasks, for example, or a server in the cloud. As shown in FIG. 7 , the disease quantification modeling device 702 may include a communication interface 703, a processor 706, a memory 705, a storage device 704, a bus 707, and an input/output device 708. For example, the communication interface 703, the processor 706, the memory 705, the storage device 704, and the input/output device 708 may be connected to and communicate with one another via the bus 707.
  • In some embodiments, the communication interface 703 may include a network adapter, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adapter (such as optical fiber, USB 3.0, Thunderbolt, or the like), a wireless network adapter (such as a WiFi adapter), or a telecommunication adapter (3G, 4G/LTE, 5G, 6G, and beyond). Through the communication interface 703, the disease quantification modeling device 702 may be connected to the centerline generation device 700, the image acquisition device 701, and other components. In some embodiments, the disease quantification modeling device 702 may receive the generated centerline from the centerline generation device 700 and a medical image (e.g., a sequence of images of a vessel) from the image acquisition device 701 via the communication interface 703.
  • In some embodiments, the memory 705/storage device 704 may be a non-transitory computer-readable medium or machine-readable medium such as read only memory (ROM), random access memory (RAM), a phase change random-access memory (PRAM), a dynamic random-access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM, a static random-access memory (SRAM), an Electrically-Erasable Programmable Read-Only Memory (EEPROM), flash memory, compact disc read-only memory (CD-ROM), digital versatile disk (DVD), magnetic storage device, etc., on which information or instructions which can be accessed and executed by a computer are stored in any format. In some embodiments, the trained graph neural network and model-related data may be stored in the storage device 704.
  • In some embodiments, the memory 705 may store computer-executable instructions, which, when executed by the processor 706, may perform the method for disease quantification modeling including the steps of: obtaining a centerline of an anatomical tree structure; generating a graph neural network including a plurality of nodes based on a graph where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter; obtaining labeled data of one or more nodes, the number of which is less than a total number of the nodes in the graph neural network; and training the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes.
  • The computer-executable instructions, when executed by the processor 706, may perform the steps of predicting disease quantification parameter for each test image. Particularly, the computer-executable instructions, when executed by the processor 706, may extract a test centerline of the test anatomical tree structure, generate a trained test graph neural network, extract a test disease related feature or a 2D/3D image patch for each test centerline point corresponding to test node, and predict the disease quantification parameters along the test centerline based on the extracted disease related features or 2D/3D image patches by utilizing the trained test graph neural network. The trained test graph neural network may be based on a test graph where each test node corresponds to a test centerline point and edges between the nodes of the graph neural network are defined by the test centerline. A test neural network unit for each test node may follow a graph-template setting of a neural network unit for each node of the trained graph neural network.
  • In some embodiments, the processor 706 may be a single-core or multi-core processing device that includes one or more general processing devices, such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), and the like. More specifically, the processor 706 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor that runs a combination of instruction sets. The processor 706 may also be one or more dedicated processing devices such as application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), digital signal processors (DSPs), system-on-chip (SoC), and the like.
  • In some embodiments, the processor 706 may be communicatively coupled to the memory 705, and may be configured to obtain a centerline of an anatomical tree structure, to generate a graph neural network including a plurality of nodes based on a graph where each node corresponds to a centerline point and edges between the nodes of the graph neural network are defined by the centerline, with an input of each node being a disease related feature or a 2D/3D image patch for the corresponding centerline point and an output of each node being a disease quantification parameter, to obtain labeled data of one or more nodes, the number of which is less than a total number of the nodes in the graph neural network (for example, the one or more nodes with labeled data may be a subset of the nodes of the graph neural network), and to train the graph neural network by transferring information between the one or more nodes and other nodes based on the labeled data of the one or more nodes. According to some embodiments, the processor 706 is also configured to train the graph neural network by using gradient based methods as follows: to optimize the parameters of the graph neural network by minimizing the objective function of the set of nodes; and to transfer the gradients and/or errors of the set of nodes to the other nodes.
  • The input/output device 708 may be any input and output device, such as a keyboard, mouse, printer, display, scanner, or touch panel, via which an operator may interface with the computer. In some embodiments, the prediction result may be output from the input/output device 708 for presentation to a user such as a clinician, patient, etc.
  • Various operations or functions are described herein, which may be implemented as software code or instructions or defined as software code or instructions. Such content may be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). The software code or instructions may be stored in a computer readable storage medium, and when executed, may cause a machine to perform the described functions or operations; such a medium may include any mechanism for storing information in a form accessible by a machine (e.g., a computing device, an electronic system, etc.), such as recordable or non-recordable media.
  • Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations, or alterations based on the present disclosure. The elements in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. Further, the steps of the disclosed methods can be modified in any manner, including by reordering steps or inserting or deleting steps. It is intended, therefore, that the descriptions be considered as examples only, with the true scope being indicated by the following claims and their full scope of equivalents.

Claims (20)

What is claimed is:
1. A computer-implemented method for disease quantification, comprising:
receiving an image containing an anatomical tree structure;
extracting, by at least one processor, a centerline of the anatomical tree structure;
extracting, by the at least one processor, disease related features or image patches for a plurality of centerline points along the centerline; and
predicting disease quantification parameters along the centerline, by the at least one processor, by applying a graph neural network to the disease related features or image patches, wherein the graph neural network comprises a plurality of nodes each corresponding to the plurality of centerline points along the centerline.
2. The computer-implemented method of claim 1, wherein the graph neural network comprises a graph convolutional neural network configured as a function of the plurality of nodes and edges among the nodes.
3. The computer-implemented method of claim 2, wherein the graph convolutional neural network is configured to perform a graph convolution operation for each node with at least one neighbor node taken into account.
4. The computer-implemented method of claim 2, wherein each edge is undirected or directed, for propagating information between the nodes linked by the edge.
5. The computer-implemented method of claim 1, wherein the graph neural network comprises a plurality of graph units each corresponding to a node, wherein each graph unit is any one of a Gated Recurrent unit (GRU), a Long Short Term Memory (LSTM) unit, a convolutional LSTM (CLSTM) unit, or a convolutional GRU (CGRU).
6. The computer-implemented method of claim 5, wherein a node has at least one child node, wherein the graph unit corresponding to the node comprises one forget gate for each child node.
7. The computer-implemented method of claim 1, wherein the plurality of centerline points include at least a first point at an inlet of the anatomical tree structure.
8. The computer-implemented method of claim 1, wherein the disease related features comprise at least one of a structural feature, an intensity feature, or a derived feature.
9. The computer-implemented method of claim 1, wherein the graph neural network is trained using labeled data of one or more nodes, a number of which is less than a total number of the plurality of nodes in the graph neural network, wherein gradients or errors of the one or more nodes are transferred to the other nodes of the plurality of nodes.
10. The computer-implemented method of claim 1, wherein the anatomical tree structure is a vessel or an airway.
11. A system for disease quantification, comprising:
an interface configured to receive an image containing an anatomical tree structure;
at least one processor configured to:
receive an image containing an anatomical tree structure;
extract a centerline of the anatomical tree structure;
extract disease related features or image patches for a plurality of centerline points along the centerline; and
predict disease quantification parameters along the centerline, by the at least one processor, by applying a graph neural network to the disease related features or image patches, wherein the graph neural network comprises a plurality of nodes each corresponding to the plurality of centerline points along the centerline.
12. The system of claim 11, wherein the graph neural network comprises a graph convolutional neural network configured as a function of the plurality of nodes and edges among the nodes.
13. The system of claim 12, wherein the graph convolutional neural network is configured to perform a graph convolution operation for each node with at least one neighbor node taken into account.
14. The system of claim 11, wherein the graph neural network comprises a plurality of graph units each corresponding to a node, wherein each graph unit is any one of a Gated Recurrent unit (GRU), a Long Short Term Memory (LSTM) unit, a convolutional LSTM (CLSTM) unit, or a convolutional GRU (CGRU).
15. The system of claim 14, wherein the node has at least one child node, wherein the graph unit corresponding to the node comprises one forget gate for each child node.
16. The system of claim 11, wherein the plurality of centerline points include at least a first point at an inlet of the anatomical tree structure.
17. The system of claim 11, wherein the disease related features comprise at least one of a structural feature, an intensity feature, or a derived feature.
18. The system of claim 11, wherein the anatomical tree structure is a vessel or an airway.
19. A non-transitory computer readable medium, storing instructions that, when executed by a processor, perform a method for disease quantification, the method comprising:
receiving an image containing an anatomical tree structure;
extracting a centerline of the anatomical tree structure;
extracting disease related features or image patches for a plurality of centerline points along the centerline; and
predicting disease quantification parameters along the centerline by applying a graph neural network to the disease related features or image patches, wherein the graph neural network comprises a plurality of nodes each corresponding to the plurality of centerline points along the centerline.
20. The non-transitory computer readable medium of claim 19, wherein the anatomical tree structure is a vessel or an airway.
US17/894,363 2019-06-19 2022-08-24 Method and system for disease quantification modeling of anatomical tree structure Abandoned US20220415510A1 (en)


Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201962863472P 2019-06-19 2019-06-19
US16/906,936 US11462326B2 (en) 2019-06-19 2020-06-19 Method and system for disease quantification modeling of anatomical tree structure
US17/894,363 US20220415510A1 (en) 2019-06-19 2022-08-24 Method and system for disease quantification modeling of anatomical tree structure


Family

ID=72675480



Also Published As

Publication number Publication date
CN111754476A (en) 2020-10-09
US11462326B2 (en) 2022-10-04
US20200402666A1 (en) 2020-12-24

