US20210104040A1 - System and method for automated angiography - Google Patents
System and method for automated angiography
- Publication number
- US20210104040A1 (application US17/124,616)
- Authority
- US
- United States
- Prior art keywords
- cta
- processor
- data
- interest
- image volume
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10076—4D tomography; Time-sequential 3D tomography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20108—Interactive selection of 2D slice in a 3D data set
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Definitions
- the subject matter disclosed herein relates to medical imaging and, in particular, to a system and method for performing automated computed tomography angiography.
- a computer is able to reconstruct images of the portions of a patient's body responsible for the radiation attenuation.
- these images are based upon separate examination of a series of angularly displaced measurements.
- a CT system produces data that represent the distribution of linear attenuation coefficients of the scanned object. The data are then reconstructed to produce an image that is typically displayed on a screen, and may be printed or reproduced on film.
- vasculature and other circulatory system structures may be imaged, typically by administration of a radio-opaque dye prior to imaging.
- Visualization of the CTA data typically is performed in a two-dimensional manner, i.e., slice-by-slice, or in a three-dimensional manner, i.e., volume visualization, which allows the data to be analyzed for vascular pathologies.
- the data may be analyzed for aneurysms, vascular calcification, renal donor assessment, stent placement, vascular blockage, and vascular evaluation for sizing and/or runoff. Once a pathology is located, quantitative assessments of the pathology may be made on the original two-dimensional slices.
- the CTA process may include processes for segmenting structures in the image data, such as the vasculature and/or the bone structures. Such segmentation typically involves identifying which voxels of the image data are associated with a particular structure or structures of interest. Segmented structures may then be viewed outside of the context of the remainder of the image data or may be masked from the remainder of the image data to allow otherwise obstructed structure to be viewed. For example, in CTA, segmentation may be performed to identify all voxels associated with the vasculature, allowing the entire circulatory system in the imaged region to be extracted and viewed. Similarly, all voxels of the bone structures may be identified and masked, or subtracted, from the image data, allowing vasculature and/or other structures which might otherwise be obscured by the relatively opaque bone structures to be observed during subsequent visualization.
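The voxel masking described above (identifying bone voxels and subtracting them so obscured vasculature can be seen) can be shown with a short sketch. This is a simplified illustration, not the patent's implementation; the threshold and array shapes are arbitrary:

```python
import numpy as np

def subtract_bone(volume, bone_threshold=300.0):
    """Mask out voxels at or above an (illustrative) bone-like
    intensity threshold, returning the masked volume and the mask."""
    bone_mask = volume >= bone_threshold
    masked = volume.copy()
    masked[bone_mask] = volume.min()  # replace bone voxels with background
    return masked, bone_mask

# Toy 3D volume with a single bone-like voxel at (0, 0, 0)
vol = np.zeros((2, 2, 2), dtype=np.float32)
vol[0, 0, 0] = 1000.0
masked, mask = subtract_bone(vol)
```

In practice the mask would come from a segmentation algorithm rather than a fixed threshold, but the subtraction step is the same boolean-mask operation.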
- segmentation of vasculature and bone structures may be complicated by a variety of factors. For example, in CTA, overlapping image intensities, close proximity of imaged structures, limited detector resolution, slow imaging volume coverage (i.e., slow scan speed), calcification, complexity of the anatomic regions and sub-regions, imperfect contrast timing, and interventional devices may make the identification and segmentation of bone and vascular structures difficult.
- image visualization specialists are utilized to manually intervene to generate images for radiologists. For example, these image visualization specialists both manually detect and/or remove structures (e.g., vein, artery, etc.) from the reconstructed CT data and reformat (e.g., transform or sample) the image volume to generate two-dimensional (2D) images.
- a method for analyzing computed tomography angiography (CTA) data includes receiving, at a processor, three-dimensional (3D) CTA data. The method also includes automatically, via the processor, detecting objects of interest within the 3D CTA data. The method further includes generating, via the processor, a CTA image volume that only includes the objects of interest.
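The claimed flow (receive 3D CTA data, detect objects of interest, emit an image volume containing only those objects) might be sketched as follows, with a placeholder detector standing in for the trained network described later:

```python
import numpy as np

def analyze_cta(cta_volume, detect_objects):
    """Sketch of the claimed method: `detect_objects` returns a boolean
    mask over the volume; everything outside the mask is zeroed so the
    output volume only includes the objects of interest."""
    mask = detect_objects(cta_volume)
    return np.where(mask, cta_volume, 0.0)

# Placeholder detector: treat bright voxels as the object of interest.
bright = lambda v: v > 100.0
vol = np.array([[[50.0, 400.0], [120.0, 10.0]]])
out = analyze_cta(vol, bright)
```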
- a method for analyzing computed tomography angiography (CTA) data includes receiving, at a processor, four-dimensional (4D) CTA data. The method also includes generating, via the processor, non time-resolved CTA data from the 4D CTA data. The method further includes generating, via the processor, a first set of 4D images including veins only from the 4D CTA data. The method still further includes generating, via the processor, a second set of 4D images including arteries only from the 4D CTA data. The method yet further includes training, via the processor, a convolutional neural network utilizing the non time-resolved CTA data, the first set of 4D images, and the second set of 4D images to generate a trained convolutional neural network.
- a method for analyzing computed tomography angiography (CTA) data includes obtaining, at the processor, past review types utilized by users, image reformat rendering angles relative to computed tomography (CT) system landmarks for a respective past review type selected by the users, and image reformat rendering angles relative to anatomical landmarks for the respective past review type selected by the users.
- the method also includes training, via the processor, the convolutional neural network utilizing the past review types utilized by users, the image reformat rendering angles relative to CT system landmarks for the respective past review type selected by the users, and the image reformat rendering angles relative to anatomical landmarks for the respective past review type selected by the users to generate a trained convolutional neural network.
- FIG. 1 is a block diagram depicting components of a computed tomography (CT) imaging system, in accordance with aspects of the present disclosure;
- FIG. 2 is a flow chart of an embodiment of a method for analyzing computed tomography angiography (CTA) data;
- FIG. 3 is a flow chart of an embodiment of a method for training a neural network with four-dimensional (4D) CTA data for utilization in detecting or removing objects from three-dimensional (3D) CTA data;
- FIG. 4 is a graphical representation of CTA data for a given voxel location over time;
- FIG. 5 is a flow chart of an embodiment of a method for utilizing a trained neural network to detect or remove objects from 3D CTA data;
- FIG. 6 is a flow chart of an embodiment of a method for training a neural network for utilization in reformatting an image volume; and
- FIG. 7 is a flow chart of an embodiment of a method for utilizing a trained neural network to reformat an image volume.
- the present techniques provide for automatically detecting, via processing circuitry (e.g., of a console or computer of a computed tomography (CT) imaging system), an object of interest (e.g., vein, artery, soft tissue, bone, etc.) within CTA data and for reformatting an imaging volume (e.g., only having the object of interest).
- a neural network may be trained on four-dimensional (4D) CTA data to learn how to automatically detect or remove objects from reconstructed 3D CTA data to generate image volumes.
- a neural network may be trained to identify an object of interest and desired orientation of a particular view based on past review types utilized by users and their respective image reformat rendering angles relative to CT system landmarks and/or anatomical landmarks utilized in those past review types.
- the automation of the isolation of an object of interest and the reformatting of CTA data enables analysis and visualization of CTA data on lower tier scanners (e.g., scanners with fewer than 16 detector rows) having slow volume coverage and/or in situations with imperfect contrast timing.
- the disclosed techniques reduce venous contamination due to imperfect contrast timing. Further, this automation reduces both the time and costs associated with utilizing visualization specialists in generating CTA data for analysis.
- imaging system 10 designed to acquire X-ray attenuation data at a variety of views around a patient (or other subject or object of interest) and suitable for automated angiography (i.e., automated object identification and reformatting) is provided in FIG. 1 .
- imaging system 10 includes a source of X-ray radiation 12 positioned adjacent to a collimator 14 .
- the X-ray source 12 may be an X-ray tube, a distributed X-ray source (such as a solid-state or thermionic X-ray source) or any other source of X-ray radiation suitable for the acquisition of medical or other images.
- the collimator 14 permits X-rays 16 to pass into a region in which a patient 18 is positioned.
- the X-rays 16 are collimated to be a cone-shaped beam, i.e., a cone-beam that passes through the imaged volume.
- a portion of the X-ray radiation 20 passes through or around the patient 18 (or other subject of interest) and impacts a detector array, represented generally at reference numeral 22 .
- Detector elements of the array produce electrical signals that represent the intensity of the incident X-rays 20 . These signals are acquired and processed to reconstruct images of the features within the patient 18 .
- Source 12 is controlled by a system controller 24 , which furnishes both power and control signals for CT examination sequences, including acquisition of 2D localizer or scout images used to identify anatomy of interest within the patient for subsequent scan protocols.
- the system controller 24 controls the source 12 via an X-ray controller 26 which may be a component of the system controller 24 .
- the X-ray controller 26 may be configured to provide power and timing signals to the X-ray source 12 .
- the detector 22 is coupled to the system controller 24 , which controls acquisition of the signals generated in the detector 22 .
- the system controller 24 acquires the signals generated by the detector using a data acquisition system 28 .
- the data acquisition system 28 receives data collected by readout electronics of the detector 22 .
- the data acquisition system 28 may receive sampled analog signals from the detector 22 and convert the data to digital signals for subsequent processing by a processor 30 discussed below.
- the analog-to-digital conversion may be performed by circuitry provided on the detector 22 itself.
- the system controller 24 may also execute various signal processing and filtration functions with regard to the acquired image signals, such as for initial adjustment of dynamic ranges, interleaving of digital image data, and so forth.
- system controller 24 is coupled to a rotational subsystem 32 and a linear positioning subsystem 34 .
- the rotational subsystem 32 enables the X-ray source 12 , collimator 14 and the detector 22 to be rotated one or multiple turns around the patient 18 , such as rotated primarily in an x, y-plane about the patient.
- the rotational subsystem 32 might include a gantry upon which the respective X-ray emission and detection components are disposed.
- the system controller 24 may be utilized to operate the gantry.
- the linear positioning subsystem 34 may enable the patient 18 , or more specifically a table supporting the patient, to be displaced within the bore of the CT system 10 , such as in the z-direction relative to rotation of the gantry.
- the table may be linearly moved (in a continuous or step-wise fashion) within the gantry to generate images of particular areas of the patient 18 .
- the system controller 24 controls the movement of the rotational subsystem 32 and/or the linear positioning subsystem 34 via a motor controller 36 .
- system controller 24 commands operation of the imaging system 10 (such as via the operation of the source 12 , detector 22 , and positioning systems described above) to execute examination protocols and to process acquired data.
- the system controller 24 via the systems and controllers noted above, may rotate a gantry supporting the source 12 and detector 22 about a subject of interest so that X-ray attenuation data may be obtained at one or more views relative to the subject.
- system controller 24 may also include signal processing circuitry, associated memory circuitry for storing programs and routines executed by the computer (such as routines for executing image visualization techniques that enable automatic (i.e., without user intervention) detection of objects of interests and reformatting of 2D images from an imaging volume as described herein), as well as configuration parameters, image data, reconstructed images, and so forth.
- the image signals acquired and processed by the system controller 24 are provided to a processing component 30 for reconstruction of images in accordance with the presently disclosed algorithms.
- the processing component 30 may be one or more general or application-specific microprocessors.
- the data collected by the data acquisition system 28 may be transmitted to the processing component 30 directly or after storage in a memory 38 .
- Any type of memory suitable for storing data might be utilized by such an exemplary system 10 .
- the memory 38 may include one or more optical, magnetic, and/or solid-state memory storage structures.
- the memory 38 may be located at the acquisition system site and/or may include remote storage devices for storing data, processing parameters, and/or routines for image reconstruction as described herein.
- the processing component 30 may be configured to receive commands and scanning parameters from an operator via an operator workstation 40 , typically equipped with a keyboard and/or other input devices.
- An operator may control the system 10 via the operator workstation 40 .
- the operator may observe the reconstructed images and/or otherwise operate the system 10 using the operator workstation 40 .
- a display 42 coupled to the operator workstation 40 may be utilized to observe the reconstructed images and to control imaging.
- the images may also be printed by a printer 44 which may be coupled to the operator workstation 40 .
- processing component 30 and operator workstation 40 may be coupled to other output devices, which may include standard or special purpose computer monitors and associated processing circuitry.
- One or more operator workstations 40 may be further linked in the system for outputting system parameters, requesting examinations, viewing images, and so forth.
- displays, printers, workstations, and similar devices supplied within the system may be local to the data acquisition components, or may be remote from these components, such as elsewhere within an institution or hospital, or in an entirely different location, linked to the image acquisition system via one or more configurable networks, such as the Internet, virtual private networks, and so forth.
- the operator workstation 40 may also be coupled to a picture archiving and communications system (PACS) 46 .
- PACS 46 may in turn be coupled to a remote client 48 , radiology department information system (RIS), hospital information system (HIS) or to an internal or external network, so that others at different locations may gain access to the raw or processed image data.
- the processing component 30 , memory 38 , and operator workstation 40 may be provided collectively as a general or special purpose computer or workstation configured to operate in accordance with the aspects of the present disclosure.
- the general or special purpose computer may be provided as a separate component with respect to the data acquisition components of the system 10 or may be provided in a common platform with such components.
- the system controller 24 may be provided as part of such a computer or workstation or as part of a separate system dedicated to image acquisition.
- the system 10 of FIG. 1 may be used to conduct a computed tomography (CT) scan to acquire 3D or 4D CTA data from a patient 18 or object.
- the 4D CTA may be utilized by the system to train a neural network (e.g., convolutional neural network) or machine learning algorithm to detect objects of interest within 3D CTA data.
- past activities or review types (and the associated image reformat rendering angles utilized relative to CT system or anatomical landmarks) conducted by advanced visualization specialists may be utilized to train the neural network or machine learning algorithm to learn anatomical locations and reformat planes for utilization in identifying the location of objects of interest (e.g., vessels) and a desired orientation of view of a CTA imaging volume derived from the 3D CTA data.
- the neural network or machine learning algorithm may enable the system to automatically detect objects of interest from 3D CTA data and to automatically reformat the CTA imaging volume to generate desired 2D images of only the object of interest.
- FIG. 2 is a flow chart of an embodiment of a method 50 for analyzing CTA data. Some or all of the steps of the method 50 may be performed by the system controller 24 , processing component 30 , and/or operator workstation 40 . One or more steps of the illustrated method 50 may be performed in a different order from the order depicted in FIG. 2 and/or simultaneously.
- the method 50 includes acquiring CT data (e.g., 3D CTA data) of a patient or object (e.g., utilizing system 10 ) (block 52 ).
- the method 50 also includes reconstructing the CT data (block 54 ).
- the method 50 further includes automatically (i.e., without user interaction) detecting or identifying (e.g., via segmentation) an object of interest (e.g., artery, vein, bone, or soft tissue) from the reconstructed CT data to generate an image volume of interest (e.g., 3D CTA image volume) (block 56 ).
- the method 50 includes removing other objects other than the object of interest from the reconstructed CT data. For example, if the object of interest is an artery, veins, bone, and/or soft tissue may be removed from the image volume.
- the detection and/or removal of objects may be automatically executed via a trained neural network or machine learning algorithm.
- the trained neural network may be a convolutional neural network (CNN) that utilizes cross-correlation in analyzing imaging data.
- a CNN utilizes different multilayer perceptrons that require minimal preprocessing.
- the CNN learns the filters or weights to be utilized (enabling independence from prior knowledge and human effort).
- the CNN shares weights that are utilized in the convolutional layers to reduce memory footprint and improve performance. The training of the neural network for object detection or identification is described in greater detail below.
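The weight sharing referred to above can be shown with a minimal convolution sketch, where one small kernel (one set of weights) is reused at every image position. This is illustrative only; a real CNN stacks many such layers with learned kernels:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation with a single shared kernel,
    the weight-sharing idea behind a convolutional layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # The same kernel weights are applied at every (i, j)
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0]])  # one kernel reused across the image
feat = conv2d(img, edge)
```

Because the kernel is shared, the layer stores 2 weights rather than one weight per output pixel, which is the memory-footprint reduction the passage describes.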
- the method 50 yet further includes automatically (i.e., without user interaction) reformatting (i.e., sampling or transforming) or planar reformatting the image volume (e.g., CTA image volume) to generate one or more 2D images (e.g., for a specific review type) that include only the object of interest (block 58 ).
- Reformatting may utilize volume rendering, directional maximum intensity projection (MIP), or other visualization technique in generating the 2D images.
- the image reformat rendering angles of the 2D images may be set relative to global CT system landmarks (e.g., axial, coronal, or sagittal MIPs).
- the image reformat rendering angles of the 2D images may be set relative to anatomical landmarks (e.g., volume rendering of circle of Willis, left carotid, right carotid, etc.).
- the reformatting or planar reformatting may be automatically executed via a trained neural network or machine learning algorithm. The training of the neural network for reformatting is described in greater detail below.
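A directional MIP of the kind mentioned above reduces, in the simplest case, to taking the maximum along one axis of the volume; which axis corresponds to an axial, coronal, or sagittal projection depends on the volume's orientation:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of a 3D volume along one axis,
    producing a 2D image in which the brightest voxel along each ray
    (here, each column of the chosen axis) survives."""
    return volume.max(axis=axis)

vol = np.zeros((3, 2, 2))
vol[1, 0, 1] = 7.0          # a single bright vessel-like voxel
axial = mip(vol, axis=0)    # project along the first axis
```

Arbitrary rendering angles would additionally rotate or resample the volume before projecting, but the projection step itself is this axis-wise maximum.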
- the method 50 even further includes providing the one or more generated 2D images to PACS (block 60 ) for visualization (e.g., in a radiologist report).
- FIG. 3 is a flow chart of an embodiment of a method 62 for training a neural network 89 with four-dimensional (4D) CTA data for utilization in detecting or removing objects from three-dimensional (3D) CTA data. Some or all of the steps of the method 62 may be performed by the system controller 24 , processing component 30 , and/or operator workstation 40 . One or more steps of the illustrated method 62 may be performed in a different order from the order depicted in FIG. 3 and/or simultaneously.
- the method 62 includes acquiring or obtaining 4D CTA data 64 of a patient (e.g., utilizing system 10 ) (block 66 ). 4D CTA data includes x, y, and z data in conjunction with time.
- FIG. 4 is a graphical representation 68 of CTA data for a given voxel location over time (i.e., 4D CTA data).
- the graph 68 includes an x-axis 70 representing time and a y-axis 72 representing CT intensity (e.g., due to the presence of a contrast agent).
- CTA may be collected at various times (T 1 , T 2 , T 3 , etc.) for the given voxel location to form the 4D CTA data.
- Plot 74 represents the signal from the artery and plot 76 represents the signal from the vein. As depicted in FIG. 4 , initially (e.g., at T 1 ) the majority of the contribution to the intensity is from the artery (where most of the contrast agent is located).
- the contribution to the intensity is split between both the artery and the vein (due to the presence of the contrast agent in both).
- the majority of the contribution to the intensity is from the vein (where most of the contrast agent is located).
- the method 62 includes generating a weighted average from acquired or obtained 4D CTA data (x, y, and z data in conjunction with time) (block 78 ).
- the data points T1, T2, and T3 may be given different weights, where the normalized weights may sum to 1.
- for example, data points that include the majority of intensity in the artery (e.g., T1) may be given a higher weight than data points that include the majority of intensity in the vein (e.g., T3), or, alternatively, data points that include the majority of intensity in the vein may be given a higher weight than data points that include the majority of intensity in the artery (e.g., T1).
- the method 62 also includes generating non-time resolved or static 3D CTA image(s) 80 with arteries and veins based on the weighted average of the 4D CTA data (block 82 ). Non-time resolved images are similar to images acquired in standard CT acquisition.
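The weighted collapse of 4D CTA data into a static, non-time-resolved volume might be sketched as follows; the weights shown are arbitrary examples, not values from the disclosure:

```python
import numpy as np

def weighted_average_volume(cta_4d, weights):
    """Collapse 4D CTA data (time, x, y, z) into a static 3D volume
    as a per-timepoint weighted average, normalizing so the weights
    sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # Contract the time axis of the data against the weight vector
    return np.tensordot(w, cta_4d, axes=([0], [0]))

# Three timepoints (T1, T2, T3) of a one-voxel volume
cta = np.array([[[[100.0]]], [[[60.0]]], [[[20.0]]]])
static = weighted_average_volume(cta, [0.5, 0.3, 0.2])
```

Emphasizing early timepoints (as here) biases the static volume toward arterial signal; shifting weight toward late timepoints would bias it toward venous signal, per the weighting discussion above.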
- the method 62 further includes generating artery 84 and/or vein 86 only 4D images from the 4D CTA data (block 88 ).
- 4D segmentation techniques are utilized to generate the artery only images 84 and the vein only images 86 .
- the 4D segmentation techniques identify different classes of tissues (e.g., vein, artery, soft tissue, or bone) in the 4D CTA data.
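One naive stand-in for such 4D segmentation is a time-to-peak heuristic: voxels whose intensity peaks early are labeled artery, and late peaks are labeled vein. This only illustrates the use of the temporal dimension; the disclosure does not specify this rule, and it ignores bone and soft tissue:

```python
import numpy as np

def classify_by_time_to_peak(cta_4d, early_cutoff=1):
    """Label each voxel by when its intensity peaks over the time axis:
    1 = artery-like (peaks at or before `early_cutoff`), 2 = vein-like."""
    peak_time = np.argmax(cta_4d, axis=0)          # per-voxel time of peak
    return np.where(peak_time <= early_cutoff, 1, 2)

# Time axis first: one artery-like voxel (peaks at T1) and one
# vein-like voxel (peaks at T3), matching the curves of FIG. 4.
curves = np.array([
    [[200.0,  20.0]],   # T1
    [[120.0, 120.0]],   # T2
    [[ 30.0, 220.0]],   # T3
])
labels = classify_by_time_to_peak(curves, early_cutoff=0)
```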
- the method 62 even further includes training a neural network 89 (e.g., CNN as described above) or machine learning algorithm to detect or identify (or remove) objects of interest (e.g., vein, artery, soft tissue, bone) from 3D CTA data (block 90 ).
- the neural network 89 is trained on the non-time resolved image(s) 80 , artery only images 84 , and vein only images 86 . In other embodiments, the neural network 89 is trained on one or more of the non-time resolved image(s) 80 , artery only images 84 , and vein only images 86 .
- the weights learned by the trained neural network 89 may be stored for the application of the trained neural network 89 to 3D CTA data.
- FIG. 5 is a flow chart of an embodiment of a method 92 for utilizing the trained neural network 89 to detect or remove objects from 3D CTA data. Some or all of the steps of the method 92 may be performed by the system controller 24 , processing component 30 , and/or operator workstation 40 .
- the method 92 includes applying the trained neural network 89 to the acquired 3D CTA data 94 from the patient (block 96 ). As noted above, the trained neural network 89 may utilize the weights learned during training to the 3D CTA data.
- the method 92 also includes automatically detecting or identifying (or removing) objects from the 3D CTA data (via the applied trained neural network 89 ) to generate a 3D CTA image volume 98 that only includes the object of interest (e.g., vein, artery, soft tissue, bone) (block 100 ).
- object of interest e.g., vein, artery, soft tissue, bone
- FIG. 6 is a flow chart of an embodiment of a method 102 for training a neural network 104 for utilization in reformatting (e.g., planar reformatting) an image volume. Some or all of the steps of the method 92 may be performed by the system controller 24 , processing component 30 , and/or operator workstation 40 . For a given CT protocol and review type, advanced visualization specialists or users manually determine image reformat rendering angles for an object interest in an image volume.
- the advanced visualization specialists set the image reformat rendering angles relative CT system landmarks (e.g., axial, coronal, and/or sagittal MIPs) and/or image reformat rendering angles relative to anatomical landmarks (e.g., volume rendering of the Circle of Willis, volume rendering of the left carotid, volume rendering of the right carotid, etc.) in generating the 2D images with only the object of interest.
- CT system landmarks e.g., axial, coronal, and/or sagittal MIPs
- anatomical landmarks e.g., volume rendering of the Circle of Willis, volume rendering of the left carotid, volume rendering of the right carotid, etc.
- Past review types 106 associated image reformat rendering angles 108 relative to system landmarks for these respective past review types, associated image reformat rendering angles 110 relative to anatomical landmarks for these respective past review types, and the 3D CTA data (imaging volumes) 112 utilized in these past review types may be monitored and stored for utilization in training the neural network 104 (e.g., CNN) or machine learning algorithm.
- the method 102 includes obtaining these past review types 106 and associated information (e.g., associated image reformat rendering angles 108 , 110 and/or associated 3D CTA data 112 ) (block 114 ).
- the method 102 also includes training the neural network 104 (e.g., CNN) with past review types 106 image reformat rendering angles 108 , 110 , and/or associated 3D CTA data 112 (block 116 ).
- the neural network 104 learns anatomical locations and reformat planes as well identifies a location of an object of interest (e.g., vessel of interest) and the desired orientation of the view based on the review type.
- When applied, the trained neural network 104 can automatically set the image reformat rendering angles relative to system landmarks and anatomical landmarks based on the CT protocol and review type.
- FIG. 7 is a flow chart of an embodiment of a method 118 for utilizing the trained neural network 104 to reformat an image volume. Some or all of the steps of the method 118 may be performed by the system controller 24, processing component 30, and/or operator workstation 40. The method 118 includes applying the trained neural network 104 to an image volume (e.g., the acquired 3D CTA data 94 from the patient) (block 122). The method 118 also includes automatically reformatting or planar reformatting the image volume (via the applied trained neural network 104) to generate 2D CTA images 124 that only include the object of interest (e.g., artery, vein, bone, soft tissue) for the CT protocol and review type (block 126).
Description
- This application claims priority to and the benefit of U.S. patent application Ser. No. 15/924,706, entitled “SYSTEM AND METHOD FOR AUTOMATED ANGIOGRAPHY”, filed Mar. 19, 2018, which is herein incorporated by reference in its entirety.
- The subject matter disclosed herein relates to medical imaging and, in particular, to a system and method for performing automated computed tomography angiography.
- Volumetric medical imaging technologies use a variety of techniques to gather three-dimensional information about the body. For example, computed tomography (CT) imaging systems measure the attenuation of X-ray beams passed through a patient from numerous angles. Based upon these measurements, a computer is able to reconstruct images of the portions of a patient's body responsible for the radiation attenuation. As will be appreciated by those skilled in the art, these images are based upon separate examination of a series of angularly displaced measurements. It should be pointed out that a CT system produces data that represent the distribution of linear attenuation coefficients of the scanned object. The data are then reconstructed to produce an image that is typically displayed on a screen, and may be printed or reproduced on film.
- For example, in the field of CT angiography (CTA), vasculature and other circulatory system structures may be imaged, typically by administration of a radio-opaque dye prior to imaging. Visualization of the CTA data typically is performed in a two-dimensional manner, i.e., slice-by-slice, or in a three-dimensional manner, i.e., volume visualization, which allows the data to be analyzed for vascular pathologies. For example, the data may be analyzed for aneurysms, vascular calcification, renal donor assessment, stent placement, vascular blockage, and vascular evaluation for sizing and/or runoff. Once a pathology is located, quantitative assessments of the pathology may be made on the original two-dimensional slices.
- The CTA process may include processes for segmenting structures in the image data, such as the vasculature and/or the bone structures. Such segmentation typically involves identifying which voxels of the image data are associated with a particular structure or structures of interest. Segmented structures may then be viewed outside of the context of the remainder of the image data or may be masked from the remainder of the image data to allow otherwise obstructed structure to be viewed. For example, in CTA, segmentation may be performed to identify all voxels associated with the vasculature, allowing the entire circulatory system in the imaged region to be extracted and viewed. Similarly, all voxels of the bone structures may be identified and masked, or subtracted, from the image data, allowing vasculature and/or other structures which might otherwise be obscured by the relatively opaque bone structures to be observed during subsequent visualization.
- However, segmentation of vasculature and bone structures may be complicated by a variety of factors. For example, in CTA, overlapping image intensities, close proximity of imaged structures, limited detector resolution, slow imaging volume coverage (i.e., slow scan speed), calcification, complexity of the anatomic regions and sub-regions, imperfect contrast timing, and interventional devices may make the identification and segmentation of bone and vascular structures difficult. Because of these complicating factors, image visualization specialists are utilized to manually intervene to generate images for radiologists. For example, these image visualization specialists both manually detect and/or remove structures (e.g., vein, artery, etc.) from the reconstructed CT data and reformat (e.g., transform or sample) the image volume to generate two-dimensional (2D) images. The utilization of these image visualization specialists is labor-intensive and costly. In addition, on lower tier scanners (i.e., less than 16 rows), it is physically impossible to acquire a vascular study of the arteries without contamination of the veins given the required acquisition time. It may, therefore, be desirable to automate the detection and/or removal of structures from the reconstructed CT data, as well as the reformatting of the image volume, in the CTA process.
- Certain embodiments commensurate in scope with the originally claimed subject matter are summarized below. These embodiments are not intended to limit the scope of the claimed subject matter, but rather these embodiments are intended only to provide a brief summary of possible forms of the subject matter. Indeed, the subject matter may encompass a variety of forms that may be similar to or different from the embodiments set forth below.
- In accordance with a first embodiment, a method for analyzing computed tomography angiography (CTA) data is provided. The method includes receiving, at a processor, three-dimensional (3D) CTA data. The method also includes automatically, via the processor, detecting objects of interest within the 3D CTA data. The method further includes generating, via the processor, a CTA image volume that only includes the objects of interest.
- In accordance with a second embodiment, a method for analyzing computed tomography angiography (CTA) data is provided. The method includes receiving, at a processor, four-dimensional (4D) CTA data. The method also includes generating, via the processor, non time-resolved CTA data from the 4D CTA data. The method further includes generating, via the processor, a first set of 4D images including veins only from the 4D CTA data. The method still further includes generating, via the processor, a second set of 4D images including arteries only from the 4D CTA data. The method yet further includes training, via the processor, a convolutional neural network utilizing the non time-resolved CTA data, the first set of 4D images, and the second set of 4D images to generate a trained convolutional neural network.
- In accordance with a third embodiment, a method for analyzing computed tomography angiography (CTA) data is provided. The method includes obtaining, at a processor, past review types utilized by users, image reformat rendering angles relative to computed tomography (CT) system landmarks for a respective past review type selected by the users, and image reformat rendering angles relative to anatomical landmarks for the respective past review type selected by the users. The method also includes training, via the processor, a convolutional neural network utilizing the past review types utilized by the users, the image reformat rendering angles relative to CT system landmarks for the respective past review type selected by the users, and the image reformat rendering angles relative to anatomical landmarks for the respective past review type selected by the users to generate a trained convolutional neural network.
- These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
- FIG. 1 is a block diagram depicting components of a computed tomography (CT) imaging system, in accordance with aspects of the present disclosure;
- FIG. 2 is a flow chart of an embodiment of a method for analyzing computed tomography angiography (CTA) data;
- FIG. 3 is a flow chart of an embodiment of a method for training a neural network with four-dimensional (4D) CTA data for utilization in detecting or removing objects from three-dimensional (3D) CTA data;
- FIG. 4 is a graphical representation of CTA data for a given voxel location over time;
- FIG. 5 is a flow chart of an embodiment of a method for utilizing a trained neural network to detect or remove objects from 3D CTA data;
- FIG. 6 is a flow chart of an embodiment of a method for training a neural network for utilization in reformatting an image volume; and
- FIG. 7 is a flow chart of an embodiment of a method for utilizing a trained neural network to reformat an image volume.
- One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
- When introducing elements of various embodiments of the present subject matter, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Furthermore, any numerical examples in the following discussion are intended to be non-limiting, and thus additional numerical values, ranges, and percentages are within the scope of the disclosed embodiments.
- Disclosed herein are systems and methods for analyzing computed tomography angiography (CTA) data. In particular, the disclosed embodiments utilize processing circuitry (e.g., of a console or computer of a computed tomography (CT) imaging system) to automatically isolate (via detection and/or removal) an object of interest (e.g., vein, artery, soft tissue, bone, etc.) from three-dimensional (3D) CTA data and to automatically (i.e., without user interaction or input) reformat an imaging volume (e.g., only having the object of interest) to generate two-dimensional (2D) images. In certain embodiments, a neural network may be trained on four-dimensional (4D) CTA data to learn how to automatically detect or remove objects from reconstructed 3D CTA data to generate image volumes. In addition, a neural network may be trained to identify an object of interest and a desired orientation of a particular view based on past review types utilized by users and their respective image reformat rendering angles relative to CT system landmarks and/or anatomical landmarks utilized in those past review types. The automatization of the isolation of an object of interest and reformatting of CTA data enables analysis and visualization of CTA data on lower tier scanners (e.g., scanners with fewer than 16 rows) having slow volume coverage and/or in situations with imperfect contrast timing. In addition, on fast volumetric coverage systems, the disclosed techniques reduce venous contamination due to imperfect contrast timing. Further, this automatization reduces both the time and costs associated with utilizing visualization specialists in generating CTA data for analysis.
- With this in mind, an example of a CT imaging system 10 designed to acquire X-ray attenuation data at a variety of views around a patient (or other subject or object of interest) and suitable for automated angiography (i.e., automated object identification and reformatting) is provided in FIG. 1. Although the techniques below are discussed in the context of a CT imaging system, the techniques may also be utilized in other imaging systems (e.g., magnetic resonance (MR) imaging systems, X-ray systems, ultrasound systems, positron emission tomography (PET) systems, etc.). In the embodiment illustrated in FIG. 1, the imaging system 10 includes a source of X-ray radiation 12 positioned adjacent to a collimator 14. The X-ray source 12 may be an X-ray tube, a distributed X-ray source (such as a solid-state or thermionic X-ray source), or any other source of X-ray radiation suitable for the acquisition of medical or other images.
- The collimator 14 permits X-rays 16 to pass into a region in which a patient 18 is positioned. In the depicted example, the X-rays 16 are collimated to be a cone-shaped beam, i.e., a cone beam that passes through the imaged volume. A portion of the X-ray radiation 20 passes through or around the patient 18 (or other subject of interest) and impacts a detector array, represented generally at reference numeral 22. Detector elements of the array produce electrical signals that represent the intensity of the incident X-rays 20. These signals are acquired and processed to reconstruct images of the features within the patient 18.
- Source 12 is controlled by a system controller 24, which furnishes both power and control signals for CT examination sequences, including acquisition of 2D localizer or scout images used to identify anatomy of interest within the patient for subsequent scan protocols. In the depicted embodiment, the system controller 24 controls the source 12 via an X-ray controller 26, which may be a component of the system controller 24. In such an embodiment, the X-ray controller 26 may be configured to provide power and timing signals to the X-ray source 12.
- Moreover, the detector 22 is coupled to the system controller 24, which controls acquisition of the signals generated in the detector 22. In the depicted embodiment, the system controller 24 acquires the signals generated by the detector using a data acquisition system 28. The data acquisition system 28 receives data collected by readout electronics of the detector 22. The data acquisition system 28 may receive sampled analog signals from the detector 22 and convert the data to digital signals for subsequent processing by a processor 30, discussed below. Alternatively, in other embodiments, the analog-to-digital conversion may be performed by circuitry provided on the detector 22 itself. The system controller 24 may also execute various signal processing and filtration functions with regard to the acquired image signals, such as initial adjustment of dynamic ranges, interleaving of digital image data, and so forth.
- In the embodiment illustrated in FIG. 1, the system controller 24 is coupled to a rotational subsystem 32 and a linear positioning subsystem 34. The rotational subsystem 32 enables the X-ray source 12, the collimator 14, and the detector 22 to be rotated one or multiple turns around the patient 18, such as rotated primarily in an x,y-plane about the patient. It should be noted that the rotational subsystem 32 might include a gantry upon which the respective X-ray emission and detection components are disposed. Thus, in such an embodiment, the system controller 24 may be utilized to operate the gantry.
- The linear positioning subsystem 34 may enable the patient 18, or more specifically a table supporting the patient, to be displaced within the bore of the CT system 10, such as in the z-direction relative to rotation of the gantry. Thus, the table may be linearly moved (in a continuous or step-wise fashion) within the gantry to generate images of particular areas of the patient 18. In the depicted embodiment, the system controller 24 controls the movement of the rotational subsystem 32 and/or the linear positioning subsystem 34 via a motor controller 36.
- In general, the system controller 24 commands operation of the imaging system 10 (such as via the operation of the source 12, detector 22, and positioning systems described above) to execute examination protocols and to process acquired data. For example, the system controller 24, via the systems and controllers noted above, may rotate a gantry supporting the source 12 and detector 22 about a subject of interest so that X-ray attenuation data may be obtained at one or more views relative to the subject. In the present context, the system controller 24 may also include signal processing circuitry and associated memory circuitry for storing programs and routines executed by the computer (such as routines for executing image visualization techniques that enable automatic (i.e., without user intervention) detection of objects of interest and reformatting of 2D images from an imaging volume, as described herein), as well as configuration parameters, image data, reconstructed images, and so forth.
- In the depicted embodiment, the image signals acquired and processed by the system controller 24 are provided to a processing component 30 for reconstruction of images in accordance with the presently disclosed algorithms. The processing component 30 may be one or more general or application-specific microprocessors. The data collected by the data acquisition system 28 may be transmitted to the processing component 30 directly or after storage in a memory 38. Any type of memory suitable for storing data may be utilized by such an exemplary system 10. For example, the memory 38 may include one or more optical, magnetic, and/or solid-state memory storage structures. Moreover, the memory 38 may be located at the acquisition system site and/or may include remote storage devices for storing data, processing parameters, and/or routines for image reconstruction, as described herein.
- The processing component 30 may be configured to receive commands and scanning parameters from an operator via an operator workstation 40, typically equipped with a keyboard and/or other input devices. An operator may control the system 10 via the operator workstation 40. Thus, the operator may observe the reconstructed images and/or otherwise operate the system 10 using the operator workstation 40. For example, a display 42 coupled to the operator workstation 40 may be utilized to observe the reconstructed images and to control imaging. Additionally, the images may also be printed by a printer 44, which may be coupled to the operator workstation 40.
- Further, the processing component 30 and operator workstation 40 may be coupled to other output devices, which may include standard or special-purpose computer monitors and associated processing circuitry. One or more operator workstations 40 may be further linked in the system for outputting system parameters, requesting examinations, viewing images, and so forth. In general, displays, printers, workstations, and similar devices supplied within the system may be local to the data acquisition components or may be remote from these components, such as elsewhere within an institution or hospital or in an entirely different location, linked to the image acquisition system via one or more configurable networks, such as the Internet, virtual private networks, and so forth.
- It should be further noted that the operator workstation 40 may also be coupled to a picture archiving and communications system (PACS) 46. The PACS 46 may in turn be coupled to a remote client 48, a radiology department information system (RIS), a hospital information system (HIS), or to an internal or external network, so that others at different locations may gain access to the raw or processed image data.
- While the preceding discussion has treated the various exemplary components of the imaging system 10 separately, these various components may be provided within a common platform or in interconnected platforms. For example, the processing component 30, memory 38, and operator workstation 40 may be provided collectively as a general or special-purpose computer or workstation configured to operate in accordance with the aspects of the present disclosure. In such embodiments, the general or special-purpose computer may be provided as a separate component with respect to the data acquisition components of the system 10 or may be provided in a common platform with such components. Likewise, the system controller 24 may be provided as part of such a computer or workstation or as part of a separate system dedicated to image acquisition.
- As discussed herein, the system 10 of FIG. 1 may be used to conduct a computed tomography (CT) scan to acquire 3D or 4D CTA data from a patient 18 or object. The 4D CTA data may be utilized by the system to train a neural network (e.g., a convolutional neural network) or machine learning algorithm to detect objects of interest within 3D CTA data. In addition, past activities or review types (and the associated image reformat rendering angles utilized relative to CT system or anatomical landmarks) conducted by advanced visualization specialists may be utilized to train the neural network or machine learning algorithm to learn anatomical locations and reformat planes for utilization in identifying the location of objects of interest (e.g., vessels) and a desired orientation for viewing a CTA imaging volume derived from the 3D CTA data. The neural network or machine learning algorithm may enable the system to automatically detect objects of interest from 3D CTA data and to automatically reformat the CTA imaging volume to generate desired 2D images of only the object of interest.
- FIG. 2 is a flow chart of an embodiment of a method 50 for analyzing CTA data. Some or all of the steps of the method 50 may be performed by the system controller 24, processing component 30, and/or operator workstation 40. One or more steps of the illustrated method 50 may be performed in a different order from the order depicted in FIG. 2 and/or simultaneously. The method 50 includes acquiring CT data (e.g., 3D CTA data) of a patient or object (e.g., utilizing the system 10) (block 52). The method 50 also includes reconstructing the CT data (block 54).
- The method 50 further includes automatically (i.e., without user interaction) detecting or identifying (e.g., via segmentation) an object of interest (e.g., artery, vein, bone, or soft tissue) from the reconstructed CT data to generate an image volume of interest (e.g., a 3D CTA image volume) (block 56). In certain embodiments, the method 50 includes removing objects other than the object of interest from the reconstructed CT data. For example, if the object of interest is an artery, veins, bone, and/or soft tissue may be removed from the image volume. The detection and/or removal of objects may be automatically executed via a trained neural network or machine learning algorithm. In certain embodiments, the trained neural network may be a convolutional neural network (CNN) that utilizes cross-correlation in analyzing imaging data. The CNN utilizes multilayer perceptrons that require minimal preprocessing, and it learns the filters or weights to be utilized (enabling independence from prior knowledge and human effort). In addition, the CNN shares the weights utilized in its convolutional layers, which reduces the memory footprint and improves performance. The training of the neural network for object detection or identification is described in greater detail below.
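The weight sharing noted above can be made concrete with a minimal sketch: a single small 3D kernel is reused at every voxel position, so the number of learned parameters is independent of volume size. This is a naive illustration in Python/numpy, not the patent's actual CNN (whose architecture is unspecified); the kernel values here are arbitrary:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 3D 'valid' convolution: one shared kernel slides over the
    whole volume, so the parameter count does not grow with volume size."""
    kz, ky, kx = kernel.shape
    z, y, x = volume.shape
    out = np.zeros((z - kz + 1, y - ky + 1, x - kx + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                patch = volume[i:i + kz, j:j + ky, k:k + kx]
                out[i, j, k] = np.sum(patch * kernel)  # same 27 weights everywhere
    return out

volume = np.random.rand(8, 8, 8)      # stand-in for a reconstructed CTA volume
kernel = np.ones((3, 3, 3)) / 27.0    # a single shared 3x3x3 filter (27 weights)
features = conv3d_valid(volume, kernel)
print(features.shape)  # (6, 6, 6)
```

A real implementation would use an optimized convolution (and many such filters per layer), but the parameter-sharing idea is the same.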
- The method 50 yet further includes automatically (i.e., without user interaction) reformatting (i.e., sampling or transforming) or planar reformatting the image volume (e.g., the CTA image volume) to generate one or more 2D images (e.g., for a specific review type) that include only the object of interest (block 58). Reformatting may utilize volume rendering, directional maximum intensity projection (MIP), or another visualization technique in generating the 2D images. The image reformat rendering angles of the 2D images may be set relative to global CT system landmarks (e.g., axial, coronal, or sagittal MIPs). In addition, the image reformat rendering angles of the 2D images may be set relative to anatomical landmarks (e.g., volume rendering of the circle of Willis, left carotid, right carotid, etc.). The reformatting or planar reformatting may be automatically executed via a trained neural network or machine learning algorithm. The training of the neural network for reformatting is described in greater detail below. The method 50 even further includes providing the one or more generated 2D images to the PACS (block 60) for visualization (e.g., in a radiologist report).
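As a concrete illustration of the directional MIP mentioned above, a maximum intensity projection collapses the isolated-object volume along one axis to form a 2D image. The numpy sketch below uses a hypothetical stand-in volume and assumes a (z, y, x) axis convention to produce axial, coronal, and sagittal MIP views:

```python
import numpy as np

def mip(volume, axis):
    """Maximum intensity projection of a 3D volume along one axis,
    producing a 2D image (the basic reformat behind axial/coronal/
    sagittal MIP views)."""
    return volume.max(axis=axis)

# Stand-in volume indexed as (z, y, x): a single bright "vessel" along z.
volume = np.zeros((16, 16, 16))
volume[:, 8, 8] = 1000.0

axial    = mip(volume, axis=0)  # project along z -> (y, x) image
coronal  = mip(volume, axis=1)  # project along y -> (z, x) image
sagittal = mip(volume, axis=2)  # project along x -> (z, y) image
print(axial.shape, axial[8, 8])  # (16, 16) 1000.0
```

Oblique rendering angles would require resampling the volume along a rotated grid before projecting, which is omitted here for brevity.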
- FIG. 3 is a flow chart of an embodiment of a method 62 for training a neural network 89 with four-dimensional (4D) CTA data for utilization in detecting or removing objects from three-dimensional (3D) CTA data. Some or all of the steps of the method 62 may be performed by the system controller 24, processing component 30, and/or operator workstation 40. One or more steps of the illustrated method 62 may be performed in a different order from the order depicted in FIG. 3 and/or simultaneously. The method 62 includes acquiring or obtaining 4D CTA data 64 of a patient (e.g., utilizing the system 10) (block 66). 4D CTA data includes x, y, and z data in conjunction with time. FIG. 4 is a graphical representation 68 of CTA data for a given voxel location over time (i.e., 4D CTA data). The graph 68 includes an x-axis 70 representing time and a y-axis 72 representing CT intensity (e.g., due to the presence of a contrast agent). CTA data may be collected at various times (T1, T2, T3, etc.) for the given voxel location to form the 4D CTA data. Plot 74 represents the signal from the artery, and plot 76 represents the signal from the vein. As depicted in FIG. 4, initially (e.g., at T1) the majority of the contribution to the intensity is from the artery (where most of the contrast agent is located). Then (e.g., at T2), the contribution to the intensity is split between the artery and the vein (due to the presence of the contrast agent in both). Finally (e.g., at T3), the majority of the contribution to the intensity is from the vein (where most of the contrast agent is located).
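The time-attenuation behavior described for FIG. 4 can be mimicked with simple synthetic curves. In the sketch below, Gaussian boluses are purely illustrative stand-ins for the arterial plot 74 and venous plot 76, with hypothetical peak times chosen so that the artery dominates at T1 and the vein at T3:

```python
import numpy as np

t = np.linspace(0.0, 40.0, 81)  # seconds; illustrative sampling grid

def bolus(t, peak_time, width=5.0, amplitude=300.0):
    """Gaussian stand-in for a contrast time-attenuation curve (in HU)."""
    return amplitude * np.exp(-((t - peak_time) / width) ** 2)

artery = bolus(t, peak_time=12.0)   # arterial enhancement peaks first
vein   = bolus(t, peak_time=22.0)   # venous enhancement peaks later

# At an early sample (like T1) the artery dominates; at a late sample
# (like T3) the vein dominates.
i1 = int(np.argmin(np.abs(t - 12.0)))
i3 = int(np.argmin(np.abs(t - 22.0)))
print(artery[i1] > vein[i1], vein[i3] > artery[i3])  # True True
```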
- The method 62 includes generating a weighted average from the acquired or obtained 4D CTA data (x, y, and z data in conjunction with time) (block 78). For example, the data points T1, T2, and T3 may be given different weights, where the normalized weights sum to 1. In certain embodiments, data points that include the majority of intensity in the artery (e.g., T1) may be given a higher weight than data points that include the majority of intensity in the vein (e.g., T3). In other embodiments, data points that include the majority of intensity in the vein (e.g., T3) may be given a higher weight than data points that include the majority of intensity in the artery (e.g., T1). The method 62 also includes generating non-time resolved or static 3D CTA image(s) 80 with arteries and veins based on the weighted average of the 4D CTA data (block 82). Non-time resolved images are similar to images acquired in a standard CT acquisition.
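The weighted-average step (block 78) amounts to a normalized weighted sum over the time axis of the 4D data. A minimal numpy sketch, with hypothetical weights emphasizing the arterial phase (T1):

```python
import numpy as np

# Toy 4D CTA series: shape (t, z, y, x) with three time points T1..T3.
cta_4d = np.random.rand(3, 4, 4, 4) * 100.0

# Hypothetical weights favoring the arterial phase, normalized to sum to 1.
weights = np.array([0.6, 0.3, 0.1])
weights = weights / weights.sum()

# Weighted average over time yields a static (non-time resolved) 3D image.
static_3d = np.average(cta_4d, axis=0, weights=weights)
print(static_3d.shape)  # (4, 4, 4)
```

Swapping the weight order (e.g., `[0.1, 0.3, 0.6]`) would instead emphasize the venous phase, matching the alternate embodiment described above.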
method 62 further includes generating artery 84 and/or vein 86 only 4D images from the 4D CTA data (block 88). 4D segmentation techniques are utilized to generate the artery only images 84 and the vein only images 86. The 4D segmentation techniques identify different classes of tissues (e.g., vein, artery, soft tissue, or bone) in the 4D CTA data. The method 62 additionally includes training a neural network 89 (e.g., a CNN as described above) or machine learning algorithm to detect or identify (or remove) objects of interest (e.g., vein, artery, soft tissue, bone) from 3D CTA data (block 90). In certain embodiments, the neural network 89 is trained on the non-time-resolved image(s) 80, the artery only images 84, and the vein only images 86. In other embodiments, the neural network 89 is trained on one or more of the non-time-resolved image(s) 80, the artery only images 84, and the vein only images 86. The weights learned by the trained neural network 89 may be stored for the application of the trained neural network 89 to 3D CTA data.
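The preprocessing in blocks 78, 82, and 88 can be sketched with a toy example. The gamma-variate bolus model, the sampling times, the weights, and the time-to-peak cutoff below are all illustrative assumptions, not values from the disclosure; a real 4D segmentation would use far richer curve features:

```python
import numpy as np

def gamma_variate(t, t0, alpha=3.0, beta=1.5):
    """Classic gamma-variate model of a contrast-bolus time-attenuation curve."""
    tt = np.clip(t - t0, 0.0, None)
    return (tt ** alpha) * np.exp(-tt / beta)

# Hypothetical 4D CTA data (t, z, y, x): two voxels, one arterial (early
# bolus arrival) and one venous (late arrival), sampled at times T1-T3.
times = np.array([5.0, 12.0, 20.0])
cta_4d = np.zeros((3, 1, 1, 2))
cta_4d[:, 0, 0, 0] = gamma_variate(times, t0=2.0)   # artery voxel peaks early
cta_4d[:, 0, 0, 1] = gamma_variate(times, t0=9.0)   # vein voxel peaks late

# Blocks 78/82: weighted average over time -> static 3D volume. The weights
# sum to 1 and, as in one embodiment, favor the artery-dominated time point.
weights = np.array([0.6, 0.3, 0.1])
static_3d = np.tensordot(weights / weights.sum(), cta_4d, axes=(0, 0))

# Block 88: a toy 4D "segmentation" splitting arteries from veins by
# time-to-peak of each enhancing voxel.
peak_time = times[np.argmax(cta_4d, axis=0)]
enhancing = cta_4d.max(axis=0) > 0.5          # crude enhancement gate
artery_mask = enhancing & (peak_time <= 10.0)  # early peak -> artery
vein_mask = enhancing & (peak_time > 10.0)     # late peak -> vein
```

The resulting static volume plus the artery-only and vein-only masks correspond to the three kinds of training images 80, 84, and 86 fed to the network.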
FIG. 5 is a flow chart of an embodiment of a method 92 for utilizing the trained neural network 89 to detect or remove objects from 3D CTA data. Some or all of the steps of the method 92 may be performed by the system controller 24, processing component 30, and/or operator workstation 40. The method 92 includes applying the trained neural network 89 to the acquired 3D CTA data 94 from the patient (block 96). As noted above, the trained neural network 89 may apply the weights learned during training to the 3D CTA data. The method 92 also includes automatically detecting or identifying (or removing) objects from the 3D CTA data (via the applied trained neural network 89) to generate a 3D CTA image volume 98 that only includes the object of interest (e.g., vein, artery, soft tissue, bone) (block 100).
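Block 100 amounts to masking the volume with the network's prediction. In this minimal sketch, a thresholding lambda stands in for the trained neural network 89; the threshold and voxel values are invented for illustration:

```python
import numpy as np

def isolate_object(volume_3d, predict_mask):
    """Keep only the detected object of interest in a 3D CTA volume,
    zeroing every other voxel. `predict_mask` stands in for the trained
    network: any callable returning a boolean mask of the volume's shape."""
    mask = predict_mask(volume_3d)
    return np.where(mask, volume_3d, 0.0)

# Stand-in "network": call strongly enhancing voxels the vessel class.
fake_vessel_detector = lambda vol: vol > 100.0

volume = np.array([[[50.0, 150.0],
                    [120.0, 30.0]]])
vessels_only = isolate_object(volume, fake_vessel_detector)
```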
FIG. 6 is a flow chart of an embodiment of a method 102 for training a neural network 104 for utilization in reformatting (e.g., planar reformatting) an image volume. Some or all of the steps of the method 102 may be performed by the system controller 24, processing component 30, and/or operator workstation 40. For a given CT protocol and review type, advanced visualization specialists or users manually determine image reformat rendering angles for an object of interest in an image volume. In particular, in generating the 2D images with only the object of interest, the advanced visualization specialists set the image reformat rendering angles relative to CT system landmarks (e.g., axial, coronal, and/or sagittal MIPs) and/or image reformat rendering angles relative to anatomical landmarks (e.g., volume rendering of the Circle of Willis, volume rendering of the left carotid, volume rendering of the right carotid, etc.). Past review types 106, the associated image reformat rendering angles 108 relative to system landmarks for these respective past review types, the associated image reformat rendering angles 110 relative to anatomical landmarks for these respective past review types, and the 3D CTA data (imaging volumes) 112 utilized in these past review types may be monitored and stored for utilization in training the neural network 104 (e.g., CNN) or machine learning algorithm. The method 102 includes obtaining these past review types 106 and the associated information (e.g., the associated image reformat rendering angles 108, 110 and the 3D CTA data 112). The method 102 also includes training the neural network 104 (e.g., CNN) with the past review types 106, the image reformat rendering angles 108, 110, and the 3D CTA data 112. The neural network 104 learns anatomical locations and reformat planes as well as identifies a location of an object of interest (e.g., a vessel of interest) and the desired orientation of the view based on the review type.
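The mapping from review type to reformat rendering angles that is learned from this history can be illustrated with a deliberately simple stand-in: averaging the angles used in past reviews of the same type. The review types and angle values below are hypothetical, and the per-type average is only a sketch of the learned CNN mapping, not the disclosed method:

```python
import numpy as np

# Hypothetical monitored history: (review type, reformat angles in degrees
# relative to system landmarks).
history = [
    ("circle_of_willis", (0.0, 15.0, 90.0)),
    ("left_carotid", (10.0, 0.0, 75.0)),
    ("circle_of_willis", (0.0, 13.0, 88.0)),
]

def fit_angle_model(records):
    """'Train' by averaging past angles per review type -- a stand-in for
    a learned mapping from (CT protocol, review type) to reformat angles."""
    by_type = {}
    for review_type, angles in records:
        by_type.setdefault(review_type, []).append(angles)
    return {rt: tuple(np.mean(np.array(a), axis=0)) for rt, a in by_type.items()}

model = fit_angle_model(history)
# Applying the model picks angles automatically for a new exam of that type.
auto_angles = model["circle_of_willis"]  # -> (0.0, 14.0, 89.0)
```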
Thus, the trained neural network 104, when applied, can automatically set the image reformat rendering angles relative to system landmarks and anatomical landmarks based on the CT protocol and review type.
FIG. 7 is a flow chart of an embodiment of a method 118 for utilizing the trained neural network 104 to reformat an image volume. Some or all of the steps of the method 118 may be performed by the system controller 24, processing component 30, and/or operator workstation 40. The method 118 includes applying the trained neural network 104 to an image volume (e.g., the acquired 3D CTA data 94 from the patient) (block 122). The method 118 also includes automatically reformatting or planar reformatting the image volume (via the applied trained neural network 104) to generate 2D CTA images 124 that only include the object of interest (e.g., artery, vein, bone, soft tissue) for the CT protocol and review type (block 126).

Technical effects of the disclosed embodiments include providing systems and methods that automatically isolate (via detection and/or removal) an object of interest (e.g., vein, artery, soft tissue, bone, etc.) from 3D CTA data and automatically (i.e., without user interaction or input) reformat an imaging volume (e.g., one having only the object of interest) to generate 2D CTA images. Automating the isolation of an object of interest and the reformatting of CTA data enables analysis and visualization of CTA data on lower-tier scanners (e.g., scanners with fewer than 16 detector rows) having slow volume coverage and/or in situations with imperfect contrast timing. In addition, on fast volumetric coverage systems, the disclosed techniques reduce venous contamination due to imperfect contrast timing. Further, this automation reduces both the time and the costs associated with utilizing visualization specialists in generating CTA data for analysis.
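The planar reformatting of block 126 can be sketched as extracting an oblique plane through the volume. This nearest-neighbour version is a minimal illustration under assumed axis conventions, not the disclosed implementation (production MPR would interpolate, e.g. trilinearly):

```python
import numpy as np

def reformat_slice(volume, angle_deg):
    """Extract a 2D slice through the center of a (z, y, x) volume,
    tilted by angle_deg about the x-axis, with nearest-neighbour sampling."""
    nz, ny, nx = volume.shape
    theta = np.radians(angle_deg)
    cz, cy = (nz - 1) / 2.0, (ny - 1) / 2.0
    out = np.zeros((ny, nx))
    for j in range(ny):
        # Rotate the sampling line in the (z, y) plane about the center.
        z = cz + (j - cy) * np.sin(theta)
        y = cy + (j - cy) * np.cos(theta)
        zi, yi = int(round(z)), int(round(y))
        if 0 <= zi < nz and 0 <= yi < ny:
            out[j, :] = volume[zi, yi, :]
    return out

# At 0 degrees this reduces to the central axial slice.
vol = np.arange(27, dtype=float).reshape(3, 3, 3)
axial = reformat_slice(vol, 0.0)
```

Automatically chosen rendering angles (from the trained network 104) would simply be fed into such a resampler in place of the manually set angles.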
- This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/124,616 US20210104040A1 (en) | 2018-03-19 | 2020-12-17 | System and method for automated angiography |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/924,706 US10902585B2 (en) | 2018-03-19 | 2018-03-19 | System and method for automated angiography utilizing a neural network |
US17/124,616 US20210104040A1 (en) | 2018-03-19 | 2020-12-17 | System and method for automated angiography |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/924,706 Continuation US10902585B2 (en) | 2018-03-19 | 2018-03-19 | System and method for automated angiography utilizing a neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210104040A1 true US20210104040A1 (en) | 2021-04-08 |
Family
ID=67905888
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/924,706 Active 2039-03-15 US10902585B2 (en) | 2018-03-19 | 2018-03-19 | System and method for automated angiography utilizing a neural network |
US17/124,616 Abandoned US20210104040A1 (en) | 2018-03-19 | 2020-12-17 | System and method for automated angiography |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/924,706 Active 2039-03-15 US10902585B2 (en) | 2018-03-19 | 2018-03-19 | System and method for automated angiography utilizing a neural network |
Country Status (1)
Country | Link |
---|---|
US (2) | US10902585B2 (en) |
Also Published As
Publication number | Publication date |
---|---|
US20190287239A1 (en) | 2019-09-19 |
US10902585B2 (en) | 2021-01-26 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: GENERAL ELECTRIC COMPANY, NEW YORK. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: NETT, BRIAN EDWARD; REEL/FRAME: 054677/0653. Effective date: 20180316 |
| STPP | Information on status: patent application and granting procedure in general | APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
| STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STCV | Information on status: appeal procedure | NOTICE OF APPEAL FILED |
| STCV | Information on status: appeal procedure | APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER |
| STCV | Information on status: appeal procedure | EXAMINER'S ANSWER TO APPEAL BRIEF MAILED |
| STCV | Information on status: appeal procedure | ON APPEAL -- AWAITING DECISION BY THE BOARD OF APPEALS |
| STCV | Information on status: appeal procedure | BOARD OF APPEALS DECISION RENDERED |
| STCB | Information on status: application discontinuation | ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |