EP3501006B1 - Feature-based image processing using feature images extracted from different iterations - Google Patents
- Publication number
- EP3501006B1 (application EP17757745.9A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- reconstructed
- difference
- feature
- reconstruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications (all under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
- G06T11/006 — 2D image generation; reconstruction from projections: inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
- G06T11/008 — 2D image generation; reconstruction from projections: specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
- G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T2207/10104 — Image acquisition modality, tomographic images: positron emission tomography [PET]
- G06T2207/10108 — Image acquisition modality, tomographic images: single photon emission computed tomography [SPECT]
- G06T2207/20221 — Image combination: image fusion; image merging
- G06T2207/20224 — Image combination: image subtraction
- G06T2211/424 — Image generation, computed tomography: iterative
Definitions
- the following relates generally to the image processing arts, image reconstruction arts, magnetic resonance (MR) imaging and image reconstruction and refinement arts, nuclear emission imaging and image reconstruction and refinement arts, computed tomography (CT) imaging and image reconstruction and refinement arts, and related arts.
- Nuclear emission imaging modalities such as positron emission tomography (PET) or single photon emission computed tomography (SPECT) provide for functional imaging of take-up and/or distribution of a radiopharmaceutical in tissue or organs.
- Transmission computed tomography (CT) or magnetic resonance (MR) imaging are typically used to image anatomical features, although additional information may be obtained using these techniques in conjunction with a contrast agent or advanced contrast techniques, e.g. time-of-flight magnetic resonance angiography (TOF-MRA).
- the acquired imaging data generally do not directly form a cognizable image.
- in the case of PET, the imaging data are lines of response (LORs) defined by detected 511 keV gamma ray pairs, optionally with time-of-flight (TOF) localization.
- SPECT data are generally collected as linear or narrow-angle conical projections defined by a honeycomb or other type of collimator.
- CT data are projections (here, absorption line integrals) along paths from x-ray tube to detector element.
- MR data are generally acquired as k-space data in a Cartesian, radial, spiral, or other acquisition geometry.
- a suitable image reconstruction algorithm is applied to convert the imaging data from projection space or k-space to a reconstructed image in two-dimensional (2D) or three-dimensional (3D) image space.
- Image reconstruction is typically an iterative process, although non-iterative reconstruction algorithms such as filtered backprojection are also known.
- Various image refinement algorithms, such as filters and/or iterative resolution recovery, may optionally be applied to the reconstructed image to enhance salient characteristics.
- a challenge in the image reconstruction and refinement processing is the balancing of noise suppression and edge preservation (or edge enhancement). These goals tend to be in opposition, since noise constitutes unwanted image contrast that is to be suppressed; whereas edges constitute desired image contrast that is to be retained or perhaps even enhanced.
- Post-reconstruction filtering is a primary approach for noise suppression in medical imaging, but requires careful selection of filter type(s) and filter parameters to obtain an acceptable (even if not optimal) image for clinical analysis.
- Some known noise-suppressing filters include low-pass filters, bi-lateral filters, adaptive filters, or so forth. Low pass filters tend to smooth the image uniformly, which can suppress lesion contrast.
- Bi-lateral filters use local image information to identify edges, with the goal of smoothing only the regions to either side of an edge while leaving the edge untouched or minimally smoothed. This is a type of edge-preserving filter, and if properly tuned it may preserve lesion/organ quantitation. However, depending upon the filter parameters, edges may not be detected around some small/weak lesions/organs, in which case those small/weak lesions/organs are filtered and quantitative accuracy may be compromised. Other advanced adaptive image filters likewise require careful tuning.
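For context, the edge-preserving behavior described above can be illustrated with a brute-force bilateral filter. This sketch is not part of the patent; the function name and parameter values are illustrative choices:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    """Brute-force bilateral filter: smooths within regions while leaving
    strong edges (large local intensity differences) mostly intact."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # spatial weights
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights: pixels with very different intensity (i.e.
            # across an edge) contribute almost nothing to the average.
            rng = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

# A clean step edge: flat regions are averaged, the edge itself survives.
img = np.concatenate([np.zeros((8, 4)), np.ones((8, 4))], axis=1)
filtered = bilateral_filter(img)
```

Note that with a weak, low-contrast lesion the range weights would not distinguish lesion from background, which is exactly the failure mode the paragraph above describes.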
- an image processing method is described in accordance with claim 1.
- the disclosed method is implemented on a computer and comprises: performing iterative image reconstruction on projection or k-space imaging data to generate an iteratively reconstructed image, wherein the iterative processing generates a series of update images ending in the iteratively reconstructed image; generating a difference image between two update images selected from the series of update images; and using the difference image in post-processing performed on the iteratively reconstructed image.
- a non-transitory storage medium stores instructions readable and executable by a computer to cause the computer to perform the above described image processing method.
- an image processing device comprises a computer and at least one non-transitory storage medium as described above.
- One advantage resides in improved image quality for an iteratively reconstructed image.
- Another advantage resides in providing for more accurate detection of malignant tumors or lesions.
- Another advantage resides in providing for reduction of obscuring noise in clinical images.
- Another advantage resides in providing for reduced likelihood of noise suppression image processing degrading or removing small lesion features.
- a given embodiment may provide none, one, two, more, or all of the foregoing advantages, and/or may provide other advantages as will become apparent to one of ordinary skill in the art upon reading and understanding the present disclosure.
- the invention may take form in various components and arrangements of components, and in various steps and arrangements of steps.
- the drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
- Image reconstruction and refinement approaches disclosed herein are premised on the insight that, rather than attempting to identify edges in an image using local spatial information (e.g. by detecting large image intensity gradients), image features as a whole (not merely the edges) can be effectively detected based on a "temporal" evolution of update images during an iterative image reconstruction or refinement process.
- a difference image is computed as a difference (e.g. absolute difference) between corresponding pixels of two different update images of the iterative image reconstruction or refinement process.
- for an appropriate choice of update images, such a difference image captures image features such as small lesions or tumors as areal structures, rather than as edges delineating such structures as in edge-preserving or edge-enhancing image filtering.
- the disclosed "temporal" approaches leverage certain observations about the evolution of update images during typical iterative reconstruction of PET and SPECT images.
- cold regions tend to converge more slowly than hot regions.
- "cold" refers to regions of low radiopharmaceutical concentration, while "hot" refers to regions of high radiopharmaceutical concentration.
- small lesions and sharp edges correspond to high spatial frequency image signals.
- the optimal choice of update images for the difference image can be selected empirically, for example, via phantom studies to select update images for the difference image that produce the difference image with the strongest contrast for phantom features mimicking expected tumor sizes.
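The empirical selection just described can be sketched as a search over update pairs. This is a hypothetical illustration, not the patent's procedure: the function name, the contrast score (mean feature change over mean background change), and the simulated convergence values are all invented:

```python
import numpy as np

def best_update_pair(updates, lesion_mask):
    """Pick the pair of update images whose difference image shows the
    highest lesion-to-background contrast (empirical phantom selection)."""
    best, best_score = None, -np.inf
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            diff = np.abs(updates[j] - updates[i])
            score = diff[lesion_mask].mean() / (diff[~lesion_mask].mean() + 1e-9)
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score

# Simulated series: the background converges after the first update, while
# the "lesion" voxel keeps evolving over all updates.
mask = np.zeros((4, 4), dtype=bool); mask[2, 2] = True
bg_vals = [0.9, 1.0, 1.0, 1.0, 1.0]
lesion_vals = [0.5, 0.8, 1.2, 1.5, 1.6]
updates = []
for b, l in zip(bg_vals, lesion_vals):
    u = np.full((4, 4), b); u[2, 2] = l
    updates.append(u)
pair, score = best_update_pair(updates, mask)
```

With these invented values the selected pair is (1, 4): both updates postdate background convergence, and the lesion change between them is largest, matching the selection rationale above.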
- the two update images that form the difference image do not necessarily need to be consecutive update images in the series of update images of the iterative reconstruction ending in the final iteratively reconstructed image.
- the final iteratively reconstructed image is itself defined by the iterative reconstruction termination criterion, which may be variously chosen, e.g. stopping when a change metric between successive iterations is less than some minimum threshold, or stopping after a fixed number of iterations, or so forth.
- the optimal choice of update images is between these limits, and is preferably chosen so that the faster-converging objects are close to stable (thus differences are small for these fast-converging objects) while slower-converging objects are not yet stable (and hence the differences are still large).
- Such selection of the update images for computing the difference image thereby generates the strongest contrast for the smaller (and slower-converging) features compared to the bigger (and faster-converging) background.
- the difference image is between two iterations of iterative processing (image reconstruction). Further transformations, e.g. scaling or weighting, may be applied to the difference image to generate a feature image.
- the feature image carries the "evolution" information of each object/organ between the iterations. The values of the same pixel or voxel in the images at different iterations are compared directly to each other, rather than being compared to its neighboring voxels in the individual images as in edge preserving or edge enhancing filtering techniques.
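The "temporal" comparison described above can be sketched in a few lines. This is an illustrative numpy sketch, not the patent's implementation; the function name and the normalization to [0, 1] are assumptions:

```python
import numpy as np

def feature_image(update_i, update_j, eps=1e-6):
    """Absolute per-voxel difference between two update images; slowly
    converging features (small lesions, cold regions) are still changing
    between updates and therefore stand out as areal structures."""
    diff = np.abs(update_j - update_i)       # absolute difference image
    return diff / (diff.max() + eps)         # optional scaling to [0, 1]

# Toy example: the background has converged, a small "lesion" has not.
upd_i = np.ones((16, 16))
upd_j = np.ones((16, 16)); upd_j[7:9, 7:9] = 1.5  # lesion still evolving
feat = feature_image(upd_i, upd_j)
```

Note that each voxel is compared only with itself at a different iteration, never with its spatial neighbors, which is the key contrast with edge-preserving filtering.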
- an illustrative imaging device 10 is a combined system that includes a computed tomography (CT) gantry 12 and a positron emission tomography (PET) gantry 14, with a common subject support or couch 16 for moving a patient or other subject into a chosen gantry 12, 14 for CT or PET imaging.
- this arrangement enables, for example, acquisition of a CT image to provide anatomical information and of a PET image to provide functional information (e.g. radiopharmaceutical uptake and/or distribution in a patient).
- An example of a commercial PET/CT imaging device is the Vereos® digital PET/CT system available from Koninklijke Philips N.V., Eindhoven, the Netherlands.
- PET imaging data acquired using the PET gantry 14 comprise projection data in the form of lines of response (LORs) defined by detected 511 keV gamma ray pairs, optionally with time-of-flight (TOF) localization.
- CT imaging data acquired by the CT gantry 12 comprise projections (here absorption line integrals) along paths from x-ray tube to detector element.
- SPECT imaging data similarly comprise projections defined by a honeycomb or other type of collimator as linear or narrow-angle conical projections.
- MR imaging data are commonly collected as k-space imaging data, e.g. k-space samples acquired along a k-space trajectory (e.g., Cartesian, spiral, radial, zig-zag) defined by frequency and/or phase encoding implemented by suitably applied magnetic field gradients.
- the acquired imaging data are processed by a computing device 20, e.g. a computer 22 (network server, desktop computer, or so forth) that includes or has operative access with one or more electronic data storage devices (e.g. one or more hard drives, optical disks, solid state drives or other electronic digital storage devices, or so forth).
- the acquired imaging data are stored at an imaging data storage device 24.
- the computer 22 executes suitable software to implement an iterative image reconstruction 26 that generates a reconstructed image which is stored in a storage 28.
- the image reconstruction 26 may also be implemented in part using application-specific integrated circuitry (ASIC) or the like.
- the iterative image reconstruction 26 is performed on projection imaging data (or k-space imaging data in the case of MR imaging) to generate an iteratively reconstructed image. More particularly, the iterative reconstruction 26 produces a series of update images ending in the iteratively reconstructed image which is stored in the storage 28.
- Some illustrative iterative image reconstruction algorithms for reconstructing PET imaging data include ordered subset expectation maximization (OSEM) image reconstruction and maximum a posteriori (MAP) image reconstruction using a quadratic prior or an edge-preserving prior (such as relative differences prior).
- the imaging data that is reconstructed may be two-dimensional (2D) imaging data in which case the image reconstruction produces a 2D image (sometimes called an image slice); or, the imaging data that is reconstructed may be three-dimensional (3D) imaging data in which case the image reconstruction produces a 3D image (sometimes called a volume image).
- the iterative reconstruction 26 produces a series of update images ending (e.g., when a specified number of iterations are performed or when some other termination criterion is met) in the iteratively reconstructed image.
- selected update images are subtracted to generate a difference image having contrast for features of interest.
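As an aside on how such a series of update images arises, the following is a toy MLEM loop, a simplified stand-in for the OSEM reconstruction named above (no subsets). The 12×6 system matrix and activity vector are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(12, 6))            # hypothetical system matrix
x_true = np.array([1.0, 1.0, 4.0, 1.0, 1.0, 1.0])  # one "hot" voxel
y = A @ x_true                                     # noiseless projection data

x = np.ones(6)                                     # uniform initial image
sens = A.T @ np.ones(12)                           # sensitivity image, A^T 1
updates = []
for _ in range(20):
    x = x * (A.T @ (y / (A @ x))) / sens           # multiplicative MLEM update
    updates.append(x.copy())                       # keep the whole update series

# A difference image between two chosen updates highlights voxels that
# are still converging at that stage of the iteration.
diff = np.abs(updates[4] - updates[1])
```

Each element of `updates` plays the role of one update image i or j in FIGURE 1.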
- In FIGURE 1, two selected update images 30, 32 are shown, which are indexed without loss of generality as update image i and update image j.
- a difference image 34 between the first update image 30 and a second update image 32 of the series of update images is generated.
- the difference image 34 is an absolute difference image between the first and second update images 30, 32 in which each pixel or voxel of the absolute difference image 34 is computed as the absolute value of the difference between corresponding pixels or voxels of the first and second update images 30, 32.
- alternatively, the signed difference image may be retained, in which case negative and positive pixel or voxel values in the difference image can be used to differentiate cold and hot features of the image.
- the difference image 34 is transformed by transformation operations 36 such as scaling or weighting of pixels or voxels of the difference image in order to generate a feature image 40.
- the difference image 34 (optionally transformed into feature image 40 ) is used in the iterative reconstruction 26 (i.e., used in iterations performed subsequent to the iterations that generated the update images 30, 32 ) as indicated by feedback path 42.
- the feature image 40 may serve as a prior image in subsequent iterations of the iterative image reconstruction 26.
- the difference image 34 (optionally transformed into feature image 40 ) is used in optional post-processing, such as illustrative image refinement 44, that is performed on the iteratively reconstructed image to produce the final clinical image that is stored in a clinical image storage 46 such as a Picture Archiving and Communication System (PACS).
- the difference image is generated from update images produced by iterative image refinement, rather than by iterative image reconstruction.
- iterative image reconstruction operates to convert imaging data (projection data or k-space data) to image data in a 2D or 3D image space; whereas, iterative image refinement operates to improve an image already extant in a 2D or 3D image space.
- FIGURE 2 starts with the reconstructed image storage 28 that stores a reconstructed image; but it should be noted that in the example of FIGURE 2 the reconstructed image stored in the storage 28 may have been generated using either an iterative or a non-iterative image reconstruction algorithm.
- the computer 22 in the example of FIGURE 2 is programmed to perform an iterative image refinement 56 on the reconstructed image stored in the storage 28, which in this context of FIGURE 2 is an input reconstructed image that is input to the iterative image refinement 56.
- the iterative image refinement 56 may, for example, be iterative filtering, iterative resolution recovery, iterative scatter correction, or so forth.
- the iterative image refinement 56 is performed on the input reconstructed image to generate an iteratively refined image that is stored in the PACS or other clinical image storage 46.
- the iterative image refinement 56 produces a series of update images ending (e.g., when a specified number of iterations are performed or when some other termination criterion is met) in the iteratively refined image.
- selected update images of the series of update images produced by the iterative image refinement 56 are subtracted to generate a difference image having contrast for features of interest.
- two selected update images 60, 62 are shown, which are indexed without loss of generality as update image I_U1 and update image I_U2.
- a difference image 64 between the first update image 60 and a second update image 62 of the series of update images is generated.
- the difference image 64 is an absolute difference image between the first and second update images 60, 62 in which each pixel or voxel of the absolute difference image 64 is computed as the absolute value of the difference between corresponding pixels or voxels of the first and second update images 60, 62.
- alternatively, the signed difference image may be retained, in which case negative and positive pixel or voxel values in the difference image can be used to differentiate cold and hot features of the image.
- the difference image 64 is transformed by transformation operations 66 such as scaling or weighting of pixels or voxels of the difference image in order to generate a feature image 70.
- the difference image 64 (optionally transformed into feature image 70 ) is used in the iterative image refinement 56 (i.e., used in iterations of the image refinement 56 performed subsequent to the iterations that generated the update images 60, 62 ) as indicated by feedback path 72.
- the difference image is between reconstructed images generated by two different (e.g., iterative or non-iterative, reconstruction with or without TOF) image reconstruction algorithms.
- the example of FIGURE 3 again operates on the imaging data (e.g. projection or k-space imaging data) acquired by the imaging device 10 and stored in the imaging data storage 24.
- the computer 22 is programmed to generate a first reconstructed image 80 by performing a first image reconstruction 81, and to generate a second reconstructed image 82 by performing a second image reconstruction 83 that is different from the first image reconstruction 81.
- one of the image reconstruction algorithms 81, 83 may be a TOF reconstruction that leverages TOF localization data while the other may be a non-TOF reconstruction that does not use TOF localization data.
- one of the image reconstruction algorithms 81, 83 may converge more rapidly than the other.
- a difference image 84 is generated as the difference between the two (differently) reconstructed images 80, 82. It is emphasized that both reconstructed images 80, 82 are generated by reconstructing the same imaging data, so that differences between the two reconstructed images 80, 82 are due to the different reconstruction algorithms 81, 83.
- intermediate image update(s) preceding production of the (final) reconstructed image(s) 80, 82 may be used, as indicated in FIGURE 3 by dotted inputs 86.
- the difference image 84 may be between an intermediate update image of the first image reconstruction 81 and an intermediate update image of the second image reconstruction 83.
- the difference image 84 may be between two different update images of the first image reconstruction 81.
- the difference image 84 may be an absolute difference image, and/or may be transformed by transformation operations such as scaling or weighting into a feature image (not shown in FIGURE 3 ).
- the computer 22 is further programmed to implement an image synthesizer 88 that combines the two reconstructed images 80, 82 using the difference image 84 (again, optionally transformed into a feature image by scaling, weighting, or so forth) to generate a final reconstructed image that is stored in the PACS or other clinical image storage 46.
- the two reconstructed images 80, 82 may be combined on a pixel-by-pixel or voxel-by-voxel basis in which each pixel or voxel of the synthesized image is a weighted combination of the pixel or voxel values of the two reconstructed images 80, 82 with the weights determined by the corresponding pixel or voxel values of the difference (or feature) image 84.
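The pixel-by-pixel synthesis just described can be sketched as follows. This is an illustrative numpy sketch, assuming the difference/feature image has been normalized so it can serve directly as a weight; the function name and the normalization are assumptions:

```python
import numpy as np

def synthesize(img_a, img_b, feature, eps=1e-6):
    """Voxel-wise weighted combination of two differently reconstructed
    images, with weights taken from the normalized difference/feature image."""
    w = feature / (feature.max() + eps)       # weights in [0, 1]
    return w * img_a + (1.0 - w) * img_b      # favor img_a where features differ

img_tof = np.full((8, 8), 2.0)                # e.g., a TOF reconstruction
img_non_tof = np.full((8, 8), 1.0)            # e.g., a non-TOF reconstruction
feature = np.zeros((8, 8)); feature[3:5, 3:5] = 1.0
fused = synthesize(img_tof, img_non_tof, feature)
```

Where the feature image is zero the output follows one reconstruction; where the two reconstructions disagree strongly, the output follows the other.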
- the various computational components 26, 36, 44, 56, 66, 81, 83, 88 are implemented by suitable programming of the illustrative computer 22, although implementation of some computationally intensive aspects via ASIC, field-programmable gate array (FPGA), or other electronics is also contemplated.
- the computer 22 may be a single computer (server computer, desktop computer, or so forth) or an interconnected plurality of computers, e.g. a computing cluster, cloud computing resource, or so forth.
- the disclosed image processing techniques may be embodied as one or more non-transitory storage media storing instructions executable by the illustrative computer 22 or by some other computer or computing resource to perform the disclosed operations.
- the non-transitory storage medium may, for example, comprise a hard disk or other magnetic storage medium, an optical disk or other optical storage medium, a solid state drive, flash memory or other electronic storage medium, various combinations thereof, and/or so forth.
- a first example which comports with FIGURE 1 , generates the feature image 40 from update images 30, 32 of iterative reconstruction 26 and uses the feature image 40 in subsequent image refinement 44, namely in post-reconstruction filtering.
- the imaging data were acquired using a digital PET system with TOF information and with clinically relevant count level.
- the PET image was reconstructed using iterative TOF list-mode OSEM reconstruction as the iterative reconstruction 26, with one iteration and four subsets (Image1, i.e. update image 30 ), then with two iterations and four subsets (Image2, i.e. update image 32 ).
- the difference image 34 was generated by subtracting Image1 from Image2 and taking the absolute value of each voxel of the difference image to generate the absolute difference image.
- Subsequent scaling/weighting 36 to generate the feature image 40 included calculating the ratio of the absolute difference image to Image1 voxel-by-voxel to generate the ratio image Ratio 12, followed by clamping the voxel values to 0.15 and then dividing the image by 0.15 to obtain the feature image 40.
- the value of 0.15 was found empirically to be effective in this example, but a smaller or larger clamp value may be used to gauge the level of changes in the images from different reconstructions, and/or the clamp value may be adjusted based on how the iterative reconstruction is performed.
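The recipe above (ratio of the absolute difference to Image1, clamp at 0.15, rescale to [0, 1]) can be sketched directly. The function name and the small `eps` guard against division by zero are illustrative additions:

```python
import numpy as np

def feature_from_updates(image1, image2, clamp=0.15, eps=1e-6):
    """Ratio of the absolute difference image to Image1, voxel-by-voxel,
    clamped at `clamp` (0.15 in the example) and rescaled to [0, 1]."""
    ratio = np.abs(image2 - image1) / (image1 + eps)   # ratio image Ratio12
    return np.clip(ratio, 0.0, clamp) / clamp          # clamp, then divide by clamp

image1 = np.ones((8, 8))
image2 = np.ones((8, 8))
image2[2, 2] = 1.30    # 30% change  -> feature value 1
image2[5, 5] = 1.075   # 7.5% change -> feature value 0.5
feat = feature_from_updates(image1, image2)
```

This reproduces the stated behavior: changes of 15% or more saturate at 1, and changes between 0 and 15% are scaled linearly into 0 to 1.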
- While the update images 30, 32 in this example are from different iterations, more generally iterative image reconstruction is commonly performed with a number of subsets, and the image is updated at each subset.
- The term "update image" is used herein to emphasize that the images used to generate the difference image are not necessarily from different iterations, but more generally are from two different updates.
- the feature image 40 generated as described above for this example has the following characteristics: (1) Any voxel that has value change of 15% (in this specific illustrative example; more generally other values may be used) or more from Image1 to Image2 has value 1; (2) Any voxel that has value change between 0 to 15% is scaled to 0-1; and (3) Small structures (e.g., lesions) and cold regions tend to have large percentage change between iterations, therefore, the corresponding voxels in the feature image have values 1 or close to 1. Accordingly, when the feature image 40 is used for the post-reconstruction image refinement 44 (filtering, in this example), the feature image 40 provides extra information.
- If a voxel is from a lesion, then its value in the feature image 40 is 1 or close to 1. This is used to guide the post-reconstruction processing 44 for optimized performance.
- voxels having value 1 in the feature image 40 should not be filtered at all, or should be filtered only slightly; by contrast, voxels of the feature image 40 with value 0 or close to 0 should be filtered heavily.
- the amount of filtering should (at least approximately) scale with the feature image voxel value, i.e. the feature image voxel value serves as a weight to determine how much the voxel will be filtered.
- the resulting filtered image thus preserves the quantitation of the lesions and organ boundaries (due to weak or no filtering) while smoothing out the noise in the background/uniform regions (by way of strong filtering).
- Leveraging the feature image 40 as weights in a weighted combination of two image transformations T1 and T2 can be expressed as follows: T1(I(i))(1 - f(i)) + T2(I(i))f(i), where i indexes pixels or voxels, I(i) denotes pixels or voxels of the iteratively reconstructed image 28, f(i) denotes corresponding pixels or voxels of the feature image, and T1 and T2 are two different image transformations. Specifically, T1 is a strong filter (e.g., a Gaussian filter with a large kernel) and T2 is a weak filter (e.g., a Gaussian filter with a small kernel) in this particular example.
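The weighted combination T1(I(i))(1 - f(i)) + T2(I(i))f(i) described above can be sketched generically, with the strong filter T1 and the weak filter T2 passed in as callables (e.g., Gaussian filters with large and small kernels); the function name is an assumption:

```python
import numpy as np

def weighted_transform_combination(image, feature, t_strong, t_weak):
    """Compute T1(I)(1 - f) + T2(I) f pixel-by-pixel or voxel-by-voxel.

    t_strong (T1) and t_weak (T2) are callables implementing the strong
    and weak filters; feature holds the per-voxel weights f(i) in [0, 1].
    """
    # Feature voxels near 1 (lesions, organ boundaries) keep the weakly
    # filtered value; background voxels near 0 take the strongly
    # filtered value, smoothing out noise there.
    return t_strong(image) * (1.0 - feature) + t_weak(image) * feature
```

This reflects the guidance above: voxels with feature value 1 are filtered only slightly, voxels with feature value 0 are filtered heavily, and intermediate values blend the two.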
- FIGURE 4 displays Image1, Image2, the Absolute Difference image, and the feature image obtained for a NEMA IEC phantom study with 30 million counts.
- FIGURE 5 illustrates a suitable filtering scheme of the NEMA IEC phantom image using the obtained feature image.
- the NEMA IEC phantom image was first reconstructed using a standard reconstruction protocol (IEC0). Then it was heavily filtered using three sequential box filters with window size 3 (IEC_Heavy), and slightly filtered using a box filter with kernel weight of 19 at the center and 1 at the other elements (IEC_Slight). The two filtered images were then combined using the feature image (IEC_Feature) in accordance with Equation (1) to obtain the final jointly filtered image (IEC_Joint).
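A minimal NumPy sketch of the two filters described above (IEC_Heavy: three sequential 3x3 box filters; IEC_Slight: a 3x3 kernel with center weight 19 and 1 elsewhere); the edge-replication padding and the normalization of the slight kernel by its sum (19 + 8 = 27) are assumptions made here:

```python
import numpy as np

def _box3(image):
    """One pass of a 3x3 box filter with edge replication (assumed boundary handling)."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out += padded[di:di + image.shape[0], dj:dj + image.shape[1]]
    return out / 9.0

def heavy_filter(image):
    # IEC_Heavy: three sequential box filters with window size 3.
    for _ in range(3):
        image = _box3(image)
    return image

def slight_filter(image):
    # IEC_Slight: 3x3 kernel with center weight 19 and 1 elsewhere,
    # normalized by the kernel sum (27) so flat regions are preserved.
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            w = 19.0 if (di, dj) == (1, 1) else 1.0
            out += w * padded[di:di + image.shape[0], dj:dj + image.shape[1]]
    return out / 27.0
```

The two filtered images would then be blended per Equation (1) using the feature image as the per-voxel weight, as described in the surrounding text.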
- a voxel in the final image is a weighted sum of the value of the same voxel in the heavily filtered image and that in the slightly filtered image, using the voxel value in the feature image to calculate the weight.
- the voxel value is 1 in the feature image, so the weight is 1 for the slightly filtered image and 0 for the heavily filtered image.
- the lesions have the values from the slightly filtered image.
- the background regions have small value in the feature image, therefore, the weight for the heavily filtered image is large. Consequently, the obtained image showed preserved spheres and significantly filtered background.
- FIGURE 4 shows an example of extracting a feature image (an embodiment of the feature image 40 of FIGURE 1 ) from images at two different OSEM iterations. Images are displayed in FIGURE 4 using linear gray scale and each image was scaled to its own maximum.
- Image1 (one iteration, four subsets): an embodiment of the first update image 30 of FIGURE 1
- Image2 (two iterations, four subsets): an embodiment of the second update image 32 of FIGURE 1
- the absolute difference image: an embodiment of the difference image 34 of FIGURE 1
- the feature image: an embodiment of the feature image 40 of FIGURE 1
- the corresponding voxels of such objects in the feature image had high values.
- the uniform background (low frequency components) of the imaged phantom had low values (more black area in the gray scale display) in the feature image, indicating relatively small change from Image1 to Image2 due to faster convergence than the spheres (higher frequency components).
- FIGURE 5 shows an example of using the feature image from FIGURE 4 to post-filter the NEMA image reconstructed using the standard reconstruction protocol (three iterations, 17 subsets).
- the NEMA image to be filtered (this is an embodiment of the reconstructed image stored in the storage 28 of FIGURE 1), a heavily filtered image (box filter with window size 3, filtered three times sequentially), a slightly filtered image (a box filter with window size 3 but with a center weight of 19 and 1 for the rest), and the jointly filtered image using the feature image (i.e. the weighted sum of the heavily filtered and lightly filtered images combined using Equation (2)).
- the jointly filtered image significantly suppressed the noise in the background while still preserving the sphere quantitation.
- two reconstructed images are generated: one using a quadratic prior to obtain a (heavily) smoothed image, and the other using an edge-preserving prior to obtain an edge-preserved image.
- Using a feature image, these two images are combined in weighted fashion to synthesize the two reconstructed images into one joint image.
- a suitable weighted combination is: I1(i)(1 - f(i)) + I2(i)f(i), where i indexes pixels or voxels, I1(i) and I2(i) denote pixels or voxels of two different images generated by two different image reconstruction or refinement algorithms applied to the projection data (or k-space data in the case of MR image reconstruction), and f(i) denotes corresponding pixels or voxels of the feature image. At least one of I1(i) and I2(i) is an iteratively reconstructed image, and a feature image is generated from two update images of the iterative reconstruction.
- the feature image was generated in the same way as in the NEMA IEC phantom study in FIGURE 4, but using the real patient data (i.e. there is a trial recon to extract features), to demonstrate that once the mechanism of generating the feature images is established (through IEC phantom studies), the mechanism is also applicable to patient studies.
- the combined image provides both the edge preserving advantage of the edge-preserved image and the smoothing advantage of the smooth image since the feature image provides extra information such as spatial frequency (i.e. how fast it changes locally) and object boundary information. This extra information is used to decide which region (or pixels) should be more heavily smoothed or more lightly smoothed.
- FIGURE 6 shows transaxial slices of images of a patient study that illustrate the effectiveness of the foregoing synthesis of images generated using quadratic and edge-preserving priors, respectively.
- FIGURE 7 illustrates the effect of this synthesis for the same patient study using coronal slices.
- the liver region was significantly filtered in the synthesized image as compared to the edge-preserving image, but the small structures (such as the hot spot in the center) were preserved as compared to the smooth image using a quadratic prior
- FIGURE 6 shows the feature image (leftmost image in FIGURE 6 ) used to synthesize a MAP reconstructed image using an edge-preserving prior (second image from left) and a MAP reconstructed image using a (non-edge-preserving) quadratic prior (third image from left, i.e. "smooth" image).
- the rightmost image in FIGURE 6 was the synthesized image combined using Equation (3) with the feature image (leftmost image of FIGURE 6 ) providing the f ( i ) weights.
- the synthesized image exhibits preservation of the small structures in the image and filtering of the soft tissue (indicated by the black regions in the feature image). This final image was better than either of the MAP images (middle two images of FIGURE 6 ).
- FIGURE 7 shows coronal slices of the same patient as in FIGURE 6 , illustrating the effectiveness of using the feature image (leftmost image in FIGURE 7 ) to obtain the final synthesized image (rightmost image in FIGURE 7 ) that has both the advantage of edge-preservation of small features in the edge-preserving image (second image from the left, MAP reconstruction using an edge-preserving prior) and the advantage of smoothness of the liver and mediastinum of the smooth image (third image from the left, MAP reconstruction using quadratic prior).
- ADF: edge-adaptive anisotropic diffusion filter
- a feature image may then be used to synthesize the two images to obtain the final image.
- the feature image is generated from a difference image generated by subtracting two update images of iterative image processing (either an iterative reconstruction or an iterative image refinement) with the update images selected to emphasize the features of interest.
- a feature image is used to provide reconstruction parameter guidance.
- in regularized reconstruction, one can use a quadratic prior of variable strength (guided by the feature image) to guide the regularization. For example, values of 1 in the feature image would reduce the smoothing strength of the quadratic prior, and lower values would gradually enable it.
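One hedged sketch of this feature-guided regularization: the text does not specify the exact modulation, so a simple linear mapping from feature value to per-voxel quadratic-prior strength, and a 4-neighborhood 2D prior gradient, are assumed here:

```python
import numpy as np

def quadratic_prior_gradient(image, feature, beta0=1.0):
    """Per-voxel gradient contribution of a feature-modulated quadratic prior.

    The per-voxel strength beta(i) = beta0 * (1 - f(i)) is an assumed
    linear realization of "values of 1 in the feature image reduce the
    smoothing strength ... lower values gradually enable it".
    """
    beta = beta0 * (1.0 - np.asarray(feature, dtype=float))
    # A quadratic smoothing prior pulls each voxel toward its local
    # neighborhood mean (here: 4-neighborhood via shifts, 2D for simplicity,
    # with edge replication at the borders).
    padded = np.pad(image, 1, mode="edge")
    neighbor_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                     + padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return beta * (image - neighbor_mean)
```

Voxels with feature value 1 (lesions, edges) receive no smoothing force, while background voxels with feature value near 0 are pulled strongly toward their neighborhood mean, giving the selective regularization described in the next point.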
- the resulting image reconstruction will apply selective regularization using the extra information from the feature image, leading to optimized regularization in one reconstruction (as compared to performing two different reconstructions as in the example described with reference to FIGURES 6 and 7 ).
- FIGURE 8 illustrates an example of this single-reconstruction approach.
- Using a feature image for selective regularization in regularized reconstruction achieved advantageous lesion quantitation preservation and noise reduction in the background.
- the leftmost image in FIGURE 8 shows a reconstruction using classical OSEM without noise control. Lesions were sharp but the background was noisy.
- the middle image in FIGURE 8 shows a regularized reconstruction using a quadratic prior, which effectively suppressed noise in the background - but small lesions were also smoothed, and the contrast was decreased significantly.
- the rightmost image in FIGURE 8 is a regularized image where the strength of the quadratic prior was modulated by using a feature image to guide the selective regularization voxel-by-voxel and to preserve the edges.
- the feature image was created in the same way as for the NEMA IEC phantom study above.
- This approach provided comparable lesion preservation as the edge-preserving image with significantly reduced/suppressed background noise, particularly in the warm regions.
- one can use combinations of different priors, such as an edge-preserving prior in regions where the feature image has high values; for voxels with small values in the feature image, one can use a stronger low-pass quadratic prior.
- the feature image can additionally or alternatively be displayed to provide the physician or other medical professional with visual guidance as to the features detected via the difference image.
- the feature image 40 is displayed side-by-side with a clinical image 90 on a display device 92, e.g. the LCD, plasma, or other graphical display component of a radiology workstation, oncology workstation, or other computer device, films etc. used by the medical professional to review medical images.
- the clinical image 90 may optionally be generated leveraging the feature image 40 as disclosed herein, or may be generated without resort to the feature image 40.
- the clinical image 90 may be generated by MAP reconstruction using an edge-preserving prior. This can lead to significant noise retention - however, the medical professional is assisted in detecting lesions in spite of this noise by reference to the "features guide" which is the displayed feature image 40.
- the feature image 40 may be used in scoring lesions identified by the medical professional. Such scoring employs various factors or metrics in providing a quantitative assessment of the likelihood that the feature identified as a lesion by the medical professional is indeed a lesion, rather than being noise or some other image artifact. Since the feature image using the illustrative scaling/weighting scheme has pixel or voxel values near 1 for features and values near zero otherwise, the sum of pixel or voxel values of the feature image 40 within the area or volume identified as a lesion by the physician is a metric of how likely it is that the lesion identification is correct.
- the average pixel or voxel value over the area or volume of the lesion, (1/|L|) Σ_{i∈L} f(i), provides a lesion likelihood metric (Equation (4)).
- L represents the identified lesion
- the summation is over all pixels or voxels i within this lesion ( i ⁇ L )
- |L| denotes the total number of pixels or voxels in the lesion L.
- the likelihood metric of Equation (4) may optionally be combined with other factors or metrics, e.g. whether the identified lesion L is wholly within an organ expected to contain the lesion (e.g. whether it is within the prostate in the case of a prostate cancer analysis), a measure based on the image texture in the lesion L , and/or so forth.
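The lesion likelihood metric of Equation (4), i.e. the mean feature value over the identified lesion L, can be sketched as follows; the convention returned for an empty mask is an assumption:

```python
import numpy as np

def lesion_likelihood(feature, lesion_mask):
    """Equation (4): mean feature value over the identified lesion L.

    lesion_mask is a boolean array marking the pixels or voxels the
    physician identified as the lesion; feature is the feature image.
    """
    values = np.asarray(feature)[np.asarray(lesion_mask, dtype=bool)]
    if values.size == 0:
        return 0.0  # empty ROI: no supporting evidence (assumed convention)
    return float(values.mean())
```

A value near 1 supports the lesion identification (the region changed strongly between updates), while a value near 0 suggests noise or another artifact; as noted above, this metric may be combined with other factors such as anatomical location or image texture.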
Description
- The following relates generally to the image processing arts, image reconstruction arts, magnetic resonance (MR) imaging and image reconstruction and refinement arts, nuclear emission imaging and image reconstruction and refinement arts, computed tomography (CT) imaging and image reconstruction and refinement arts, and related arts.
- Medical imaging is performed using various imaging modalities. Nuclear emission imaging modalities such as positron emission tomography (PET) or single photon emission computed tomography (SPECT) provide for functional imaging of take-up and/or distribution of a radiopharmaceutical in tissue or organs. Transmission computed tomography (CT) or magnetic resonance (MR) imaging are typically used to image anatomical features, although additional information may be obtained using these techniques in conjunction with a contrast agent or advanced contrast techniques, e.g. time-of-flight magnetic resonance angiography (TOF-MRA).
- In these techniques, the acquired imaging data generally do not directly form a cognizable image. In PET, the imaging data are lines of response (LORs) defined by detected 511 keV gamma ray pairs, optionally with time-of-flight (TOF) localization. SPECT data are generally collected as linear or narrow-angle conical projections defined by a honeycomb or other type of collimator, while CT data are projections (here absorption line integrals) along paths from x-ray tube to detector element. MR data are generally acquired as k-space data in a Cartesian, radial, spiral, or other acquisition geometry. In any of these cases, a suitable image reconstruction algorithm is applied to convert the imaging data from projection space or k-space to a reconstructed image in two-dimensional (2D) or three-dimensional (3D) image space. Image reconstruction is typically an iterative process, although non-iterative reconstruction algorithms such as filtered backprojection are also known. Various image refinement algorithms, such as filters and/or iterative resolution recovery, may optionally be applied to the reconstructed image to enhance salient characteristics.
- A challenge in the image reconstruction and refinement processing is the balancing of noise suppression and edge preservation (or edge enhancement). These goals tend to be in opposition, since noise constitutes unwanted image contrast that is to be suppressed; whereas edges constitute desired image contrast that is to be retained or perhaps even enhanced. Post-reconstruction filtering is a primary approach for noise suppression in medical imaging, but requires careful selection of filter type(s) and filter parameters to obtain an acceptable (even if not optimal) image for clinical analysis. Some known noise-suppressing filters include low-pass filters, bi-lateral filters, adaptive filters, or so forth. Low pass filters tend to smooth the image uniformly, which can suppress lesion contrast. Bi-lateral filters use the local image information to identify edges with the goal of only smoothing regions to the sides of the edge and leave the edge untouched or minimally smoothed. This is a type of edge-preserving filter, and if properly tuned may preserve lesion/organ quantitation. However, depending upon the filter parameters, edges may not be detected around some small/weak lesions/organs, in which case the small/weak lesions/organs are filtered and quantitative accuracy may be compromised. Other advanced adaptive image filters likewise require careful tuning.
- In "A feature refinement approach for statistical interior CT reconstruction" by Zhanli Hu et al., Physics in Medicine & Biology, vol. 61, p. 5311-5334, the authors describe a method for iteratively reconstructing CT images. In this method, in each step during the reconstruction, a feature descriptor is calculated using a calculated residual image. The feature descriptor is used to refine the image estimation that is used as the input for the next iteration cycle.
- The following discloses new and improved systems and methods that address the above referenced issues, and others.
- In one aspect, an image processing method is described in accordance with
claim 1. The disclosed method is implemented on a computer and comprises: performing iterative image reconstruction on projection or k-space imaging data to generate an iteratively reconstructed image, wherein the iterative processing generates a series of update images ending in the iteratively reconstructed image; generating a difference image between a choice of two update images selected from the series of update images; and using the difference image in post processing performed on the iteratively reconstructed image. - In another disclosed aspect, a non-transitory storage medium stores instructions readable and executable by a computer to cause the computer to perform the above described image processing method.
- In another disclosed aspect, an image processing device comprises a computer and at least one non-transitory storage medium as described above.
- One advantage resides in improved image quality for an iteratively reconstructed image.
- Another advantage resides in providing for more accurate detection of malignant tumors or lesions.
- Another advantage resides in providing for reduction of obscuring noise in clinical images.
- Another advantage resides in providing for reduced likelihood of noise suppression image processing degrading or removing small lesion features.
- A given embodiment may provide none, one, two, more, or all of the foregoing advantages, and/or may provide other advantages as will become apparent to one of ordinary skill in the art upon reading and understanding the present disclosure.
- The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
- FIGURE 1 diagrammatically shows an illustrative imaging system including image reconstruction and refinement that leverages a difference image computed using two different image updates of an iterative image reconstruction.
- FIGURE 2 diagrammatically shows an illustrative imaging system including image refinement that leverages a difference image computed using two different image updates of an iterative image refinement process.
- FIGURE 3 diagrammatically shows an illustrative imaging system that constructs a weighted combination of two different image reconstructions, with the weighting being in accord with a difference image between the two reconstructions or between image updates of one or both image reconstructions.
- FIGURES 4-8 present image reconstruction results as described herein.
-
FIGURE 9 illustrates use of a feature image as disclosed herein displayed to provide visual guidance as to detected features. - Image reconstruction and refinement approaches disclosed herein are premised on the insight that, rather than attempting to identify edges in an image using local spatial information (e.g. by detecting large image intensity gradients), image features as a whole (not merely the edges) can be effectively detected based on a "temporal" evolution of update images during an iterative image reconstruction or refinement process. In particular, a difference image is computed as a difference (e.g. absolute difference) between corresponding pixels of two different update images of the iterative image reconstruction or refinement process. As disclosed herein, such a difference image can, for an appropriate choice of update images, produce a difference image that captures image features such as small lesions or tumors as areal structures, rather than as edges delineating such structures as in edge-preserving or edge-enhancing image filtering. The disclosed "temporal" approaches leverage certain observations about the evolution of update images during typical iterative reconstruction of PET and SPECT images.
- One observation is that large structures typically converge faster than small structures, i.e., it takes fewer number of iterations for large structures to converge. Similarly, low spatial frequency components converge faster than high spatial frequency components in the image. These observations are intuitively linked since large structures principally comprise lower spatial frequency components (e.g. in a spatial Fourier transform sense) while small structures principally comprise higher spatial frequency components. Undesirable noise is typically represented by high frequency components (higher than those needed for useful realistic structures). From these observations, it can be appreciated that a difference image employing earlier update images of an iterative image reconstruction tends to capture large features, while a difference image employing later update images tends to capture smaller features.
- Another observation is that, in the case of nuclear emission images (e.g. PET or SPECT), cold regions tend to converge more slowly than hot regions. Here "cold" refers to regions of low radiopharmaceutical concentration while "hot" refers to regions of high radiopharmaceutical concentration. More generally, small lesions and sharp edges correspond to high spatial frequency image signals.
- The optimal choice of update images for the difference image can be selected empirically, for example via phantom studies that select the update images producing the difference image with the strongest contrast for phantom features mimicking expected tumor sizes. It should be noted that the two update images that form the difference image do not necessarily need to be consecutive update images in the series of update images of the iterative reconstruction ending in the final iteratively reconstructed image. (Further, the final iteratively reconstructed image is itself defined by the iterative reconstruction termination criterion, which may be variously chosen, e.g. stopping when a change metric between successive iterations is less than some minimum threshold, or stopping after a fixed number of iterations, or so forth.)
- A further observation is that, in the case of time-of-flight PET (i.e. TOF-PET), reconstruction from data with time-of-flight (TOF) information converges faster in general than without TOF information, since the TOF localization provides additional information to improve convergence. Hence, if PET imaging data are reconstructed using a TOF reconstruction algorithm that leverages TOF information and by a non-TOF reconstruction algorithm that does not leverage TOF information, the former is expected to converge more rapidly than the latter. More generally, different image reconstruction algorithms applied to the same imaging data may converge more or less rapidly. This observation underlies variant embodiments disclosed herein in which, rather than taking the difference image as a difference between two update images of a single image reconstruction, the difference image is between reconstructed images, or update images, of two different reconstruction algorithms applied to the same imaging data.
- Further observations pertain to the relationship between convergence speed and the difference image (or the features in the difference image), as this can impact the choice of update images. Those objects with faster convergence speed become close to their final reconstructed state after a few updates or iterations. On the contrary, the objects with slower convergence speed remain farther away from their final reconstructed state at the time of convergence of the faster-converging objects. Thus, if the update images are selected from the earliest updates, the differences for both faster-converging objects and slower-converging objects are large, which is not optimal for differentiating the objects. Conversely, if the update images are selected from near the end of the iterations, the differences for both faster- and slower-converging objects are small, which is again not an optimal choice. In general, the optimal choice of update images is between these limits, and is preferably chosen so that the faster-converging objects are close to stable (thus differences are small for these fast-converging objects) while slower-converging objects are not yet stable (and hence the differences are still large). Such selection of the update images for computing the difference image thereby generates the strongest contrast for the smaller (and slower-converging) features compared to the bigger (and faster-converging) background.
- Thus, in embodiments disclosed herein, the difference image is between two iterations of iterative processing (image reconstruction). Further transformations, e.g. scaling or weighting, may be applied to the difference image to generate a feature image. The feature image carries the "evolution" information of each object/organ between the iterations. The values of the same pixel or voxel in the images at different iterations are compared directly to each other, rather than being compared to its neighboring voxels in the individual images as in edge preserving or edge enhancing filtering techniques.
- With reference to
FIGURE 1, an illustrative imaging device 10 is a combined system that includes a computed tomography (CT) gantry 12 and a positron emission tomography (PET) gantry 14, with a common subject support or couch 16 for moving a patient or other subject into a chosen gantry 12, 14. The imaging device 10 acquires imaging data in the form of projection data. PET imaging data acquired using the PET gantry 14 comprise projection data in the form of lines of response (LORs) defined by detected 511 keV gamma ray pairs, optionally with time-of-flight (TOF) localization. CT imaging data acquired by the CT gantry 12 comprise projections (here absorption line integrals) along paths from x-ray tube to detector element. SPECT imaging data similarly comprise projections defined by a honeycomb or other type of collimator as linear or narrow-angle conical projections. MR imaging data are commonly collected as k-space imaging data, e.g. k-space samples acquired along a k-space trajectory (e.g., Cartesian, spiral, radial, zig-zag) defined by frequency and/or phase encoding implemented by suitably applied magnetic field gradients. - The acquired imaging data are processed by a
computing device 20, e.g. a computer 22 (network server, desktop computer, or so forth) that includes or has operative access with one or more electronic data storage devices (e.g. one or more hard drives, optical disks, solid state drives or other electronic digital storage devices, or so forth). Initially, the acquired imaging data are stored at an imaging data storage device 24. In embodiments conforming with FIGURE 1, the computer 22 executes suitable software to implement an iterative image reconstruction 26 that generates a reconstructed image which is stored in a storage 28. The image reconstruction 26 may also be implemented in part using application-specific integrated circuitry (ASIC) or the like. The iterative image reconstruction 26 is performed on projection imaging data (or k-space imaging data in the case of MR imaging) to generate an iteratively reconstructed image. More particularly, the iterative reconstruction 26 produces a series of update images ending in the iteratively reconstructed image which is stored in the storage 28. Some illustrative iterative image reconstruction algorithms for reconstructing PET imaging data include ordered subset expectation maximization (OSEM) image reconstruction and maximum a posteriori (MAP) image reconstruction using a quadratic prior or an edge-preserving prior (such as a relative differences prior). In the case of MR imaging data, various iterative Fast Fourier Transform (FFT)-based image reconstruction algorithms can be employed, with the particular algorithm usually chosen based in part on the k-space trajectory used to acquire the MR imaging data. The imaging data that is reconstructed may be two-dimensional (2D) imaging data, in which case the image reconstruction produces a 2D image (sometimes called an image slice); or, the imaging data that is reconstructed may be three-dimensional (3D) imaging data, in which case the image reconstruction produces a 3D image (sometimes called a volume image). - As just noted, the
iterative reconstruction 26 produces a series of update images ending (e.g., when a specified number of iterations are performed or when some other termination criterion is met) in the iteratively reconstructed image. In approaches disclosed herein, selected update images are subtracted to generate a difference image having contrast for features of interest. In illustrative FIGURE 1, two update images 30, 32 are selected, and a difference image 34 between the first update image 30 and a second update image 32 of the series of update images is generated. To avoid the possibility of negative pixel or voxel values, in some embodiments the difference image 34 is an absolute difference image between the first and second update images 30, 32; each pixel or voxel of the absolute difference image 34 is computed as the absolute value of the difference between corresponding pixels or voxels of the first and second update images 30, 32. Optionally, the difference image 34 is transformed by transformation operations 36 such as scaling or weighting of pixels or voxels of the difference image in order to generate a feature image 40. - The difference image 34 (optionally transformed into feature image 40) is used in the iterative reconstruction 26 (i.e., used in iterations performed subsequent to the iterations that generated the
update images 30, 32) as indicated by feedback path 42. For example, the feature image 40 may serve as a prior image in subsequent iterations of the iterative image reconstruction 26. In other embodiments, the difference image 34 (optionally transformed into feature image 40) is used in optional post-processing, such as illustrative image refinement 44, that is performed on the iteratively reconstructed image to produce the final clinical image that is stored in a clinical image storage 46 such as a Picture Archiving and Communication System (PACS). Use of the feature image 40 in the post-processing 44 is diagrammatically indicated in FIGURE 1 by data flow path 48. - With reference to
FIGURE 2, in an example useful for understanding the invention, the difference image is generated from update images produced by iterative image refinement, rather than by iterative image reconstruction. The distinction between iterative image reconstruction and iterative image refinement is that iterative image reconstruction operates to convert imaging data (projection data or k-space data) to image data in a 2D or 3D image space; whereas, iterative image refinement operates to improve an image already extant in a 2D or 3D image space. To simplify illustration, FIGURE 2 starts with the reconstructed image storage 28 that stores a reconstructed image; but it should be noted that in the example of FIGURE 2 the reconstructed image stored in the storage 28 may have been generated using either an iterative or a non-iterative image reconstruction algorithm. The computer 22 in the example of FIGURE 2 is programmed to perform an iterative image refinement 56 on the reconstructed image stored in the storage 28, which in this context of FIGURE 2 is an input reconstructed image that is input to the iterative image refinement 56. The iterative image refinement 56 may, for example, be iterative filtering, iterative resolution recovery, iterative scatter correction, or so forth. - The
iterative image refinement 56 is performed on the input reconstructed image to generate an iteratively refined image that is stored in the PACS or other clinical image storage 46. The iterative image refinement 56 produces a series of update images ending (e.g., when a specified number of iterations are performed or when some other termination criterion is met) in the iteratively refined image. In examples comporting with FIGURE 2, selected update images of the series of update images produced by the iterative image refinement 56 are subtracted to generate a difference image having contrast for features of interest. In illustrative FIGURE 2, two update images 60, 62 are selected, and a difference image 64 between the first update image 60 and a second update image 62 of the series of update images is generated. To avoid the possibility of negative pixel or voxel values, in some examples the difference image 64 is an absolute difference image between the first and second update images 60, 62, in which each pixel or voxel of the absolute difference image 64 is computed as the absolute value of the difference between corresponding pixels or voxels of the first and second update images 60, 62. Optionally, the difference image 64 is transformed by transformation operations 66 such as scaling or weighting of pixels or voxels of the difference image in order to generate a feature image 70. The difference image 64 (optionally transformed into feature image 70) is used in the iterative image refinement 56 (i.e., used in iterations of the image refinement 56 performed subsequent to the iterations that generated the update images 60, 62) as indicated by feedback path 72. - With reference to
FIGURE 3, in another example useful for understanding the invention, the difference image is between reconstructed images generated by two different (e.g., iterative or non-iterative, reconstruction with or without TOF) image reconstruction algorithms. Thus, the example of FIGURE 3 again operates on the imaging data (e.g. projection or k-space imaging data) acquired by the imaging device 10 and stored in the imaging data storage 24. The computer 22 is programmed to generate a first reconstructed image 80 by performing a first image reconstruction 81, and to generate a second reconstructed image 82 by performing a second image reconstruction 83 that is different from the first image reconstruction 81. For example, in the case of PET imaging data one of the image reconstruction algorithms 81, 83 may employ time-of-flight (TOF) localization while the other does not, or one of the image reconstruction algorithms 81, 83 may be iterative while the other is non-iterative. A difference image 84 between the two reconstructed images 80, 82 is generated; because the images 80, 82 are produced by different reconstruction algorithms 81, 83, the difference image 84 has contrast for features that the two reconstructions render differently. Alternatively, the difference image 84 may be generated using update images of one or both of the image reconstruction algorithms 81, 83, as diagrammatically indicated in FIGURE 3 by dotted inputs 86. For example, the difference image 84 may be between an intermediate update image of the first image reconstruction 81 and an intermediate update image of the second image reconstruction 83. Alternatively, the difference image 84 may be between two different update images of the first image reconstruction 81. As already described respecting the examples of FIGURES 1 and 2, the difference image 84 may be an absolute difference image, and/or may be transformed by transformation operations such as scaling or weighting into a feature image (not shown in FIGURE 3). The computer 22 is further programmed to implement an image synthesizer 88 that combines the two reconstructed images 80, 82 using the difference image 84 to generate a final reconstructed image that is stored in the clinical image storage 46. For example, the two reconstructed images 80, 82 may be combined as a weighted combination in which the weights are derived from the difference image 84. - It is again noted that the various
computational components described above are suitably implemented by the illustrative computer 22, although implementation of some computationally intensive aspects via ASIC, field-programmable gate array (FPGA), or other electronics is also contemplated. The computer 22 may be a single computer (server computer, desktop computer, or so forth) or an interconnected plurality of computers, e.g. a computing cluster, cloud computing resource, or so forth. It will be further appreciated that the disclosed image processing techniques may be embodied as one or more non-transitory storage media storing instructions executable by the illustrative computer 22 or by some other computer or computing resource to perform the disclosed operations. The non-transitory storage medium may, for example, comprise a hard disk or other magnetic storage medium, an optical disk or other optical storage medium, a solid state drive, flash memory or other electronic storage medium, various combinations thereof, and/or so forth. - In the following, some more detailed illustrative examples are provided in the form of phantom studies and clinical studies. These examples are directed to PET imaging, but as already described the disclosed approaches leveraging difference images constructed from update images produced by iterative image reconstruction or refinement are more generally useful in other types of imaging (e.g., PET, SPECT, CT, MR, or so forth).
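The difference-image and feature-image computations described above can be sketched as follows. This is a minimal numpy illustration, not language from the patent: the function names are hypothetical, and the clamp-and-normalize scaling is one possible choice for the scaling/weighting transformation.

```python
import numpy as np

def absolute_difference_image(update1, update2):
    """Each voxel is the absolute value of the difference between
    corresponding voxels of two update images."""
    return np.abs(update2 - update1)

def feature_image(update1, update2, clamp=0.15, eps=1e-6):
    """One possible scaling/weighting of the difference image:
    relative change with respect to the earlier update image,
    clamped and normalized into [0, 1]."""
    ratio = absolute_difference_image(update1, update2) / (update1 + eps)
    return np.clip(ratio, 0.0, clamp) / clamp

# Toy update "images": one voxel nearly converged, one still changing fast
image1 = np.array([1.00, 10.0])
image2 = np.array([1.01, 13.0])
feat = feature_image(image1, image2)
# The fast-changing voxel saturates to 1; the settled voxel stays near 0.
```

Voxels whose relative change between iterations meets or exceeds the clamp value map to 1 (slowly converging features such as small structures), while nearly converged background voxels map toward 0.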
- A first example, which comports with
FIGURE 1, generates the feature image 40 from update images 30, 32 of the iterative reconstruction 26 and uses the feature image 40 in subsequent image refinement 44, namely in post-reconstruction filtering. In this example, the imaging data were acquired using a digital PET system with TOF information and with clinically relevant count level. The PET image was reconstructed using iterative TOF list-mode OSEM reconstruction as the iterative reconstruction 26, with one iteration and four subsets (Image1, i.e. update image 30), then with two iterations and four subsets (Image2, i.e. update image 32). The difference image 34 was generated by subtracting Image1 from Image2 and taking the absolute value of each voxel of the difference image to generate the absolute difference image. Subsequent scaling/weighting 36 to generate the feature image 40 included calculating the ratio of the absolute difference image to Image1 voxel-by-voxel to generate the ratio image Ratio12, followed by clamping the voxel values to 0.15 and then dividing the image by 0.15 to obtain the feature image 40. The value of 0.15 was found empirically to be effective in this example, but a smaller or larger clamp value may be used to gauge the level of changes in the images from different reconstructions, and/or the clamp value may be adjusted based on how the iterative reconstruction is performed. As an example of the latter, when TOF is used, image convergence is typically faster than for a non-TOF reconstruction, so that one may prefer a relatively larger clamp value for TOF reconstruction; when more subsets are used in each iteration, then the difference can be larger. - It is also noted that while the
update images 30, 32 in this example were taken after one and after two iterations, respectively, other update images may be chosen from the series of update images to generate the difference image 34. - The
feature image 40 generated as described above for this example has the following characteristics: (1) Any voxel that has value change of 15% (in this specific illustrative example; more generally other values may be used) or more from Image1 to Image2 has value 1; (2) Any voxel that has value change between 0 to 15% is scaled to 0-1; and (3) Small structures (e.g., lesions) and cold regions tend to have large percentage change between iterations; therefore, the corresponding voxels in the feature image have values 1 or close to 1. Accordingly, when the feature image 40 is used for the post-reconstruction image refinement 44 (filtering, in this example), the feature image 40 provides extra information. In particular, if a voxel is from a lesion then its value in the feature image 40 has value 1 or close to 1. This is used to guide the post-reconstruction processing 44 for optimized performance. For the example of post-reconstruction filtering of the image, it is desired that voxels having value 1 in the feature image 40 should not be filtered at all, or should be filtered only slightly; by contrast, voxels of the feature image 40 with value 0 or close to 0 should be filtered heavily. For values between 0 and 1, the amount of filtering should (at least approximately) scale with the feature image voxel value, i.e. the feature image voxel value serves as a weight to determine how much the voxel will be filtered. The resulting filtered image thus preserves the quantitation of the lesions and organ boundaries (due to weak or no filtering) while smoothing out the noise in the background/uniform regions (by way of strong filtering). - Leveraging of the
feature image 40 as weights in a weighted combination of two image transformations T1 and T2 can be expressed as follows:

    Iout(i) = (1 - f(i)) T1(I)(i) + f(i) T2(I)(i)     (1)

where I denotes the reconstructed image 28, f(i) denotes corresponding pixels or voxels of the feature image, and T1 and T2 are two different image transformations. Specifically, T1 is a strong (e.g., a Gaussian filter with a large kernel) filter and T2 is a weak (e.g., a Gaussian filter with a small kernel) filter in this particular example. -
FIGURE 4 displays Image1, Image2, the Absolute Difference image, and the feature image obtained for a NEMA IEC phantom study with 30 million counts. FIGURE 5 illustrates a suitable filtering scheme of the NEMA IEC phantom image using the obtained feature image. The NEMA IEC phantom image was first reconstructed using a standard reconstruction protocol (IEC0). Then it was heavily filtered using three sequential box filters with window size 3 (IEC_Heavy), and slightly filtered using a box filter with kernel weight of 19 at the center and 1 at the other elements (IEC_Slight). The two filtered images were then combined using the feature image (IEC_Feature) in accordance with Equation (1) to obtain the final jointly filtered image (IEC_Joint). Using the foregoing notation, Equation (1) can be written for this task as:

    IEC_Joint(i) = (1 - IEC_Feature(i)) IEC_Heavy(i) + IEC_Feature(i) IEC_Slight(i)     (2)

- More particularly,
FIGURE 4 shows an example of extracting a feature image (an embodiment of the feature image 40 of FIGURE 1) from images at two different OSEM iterations. Images are displayed in FIGURE 4 using linear gray scale and each image was scaled to its own maximum. In FIGURE 4, from left to right: Image1 (one iteration, four subsets; this is an embodiment of first update image 30 of FIGURE 1), Image2 (two iterations, four subsets; this is an embodiment of second update image 32 of FIGURE 1), the absolute difference (an embodiment of the difference image 34 of FIGURE 1), and the feature image (an embodiment of feature image 40 of FIGURE 1). The hot spheres and cold spheres of the IEC phantom as well as the lung insert in the center of the phantom (which is cold) exhibited large changes between Image1 and Image2. The corresponding voxels of such objects in the feature image had high values. The uniform background (low frequency components) of the imaged phantom had low values (more black area in the gray scale display) in the feature image, indicating relatively small change from Image1 to Image2 due to faster convergence than the spheres (higher frequency components). -
FIGURE 5 shows an example of using the feature image from FIGURE 4 to post-filter the NEMA image reconstructed using the standard reconstruction protocol (three iterations, 17 subsets). From left to right: the NEMA image to be filtered (this is an embodiment of the reconstructed image stored in the storage 28 of FIGURE 1), a heavily filtered image (box filter with window size 3, applied three times sequentially), a slightly filtered image (a box filter with window size 3 but with a center weight of 19 and 1 for the rest), and the jointly filtered image using the feature image (i.e. the weighted sum of the heavily filtered and slightly filtered images combined using Equation (2)). The jointly filtered image significantly suppressed the noise in the background while still preserving the sphere quantitation. - Next, an imaging example is described in which a final reconstructed image is synthesized from two different image reconstructions. In regularized reconstruction, different reconstruction schemes may lead to different image quality. For example, when using a quadratic prior image, regularized reconstruction leads to more smoothed images, but this approach has the disadvantage that some small structures may also be smoothed out. Conversely, when using an edge-preserving prior image, the edges in the image are preserved, but some areas may not be sufficiently smoothed if the noise level is relatively high in those areas.
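The FIGURE 5 filtering scheme described above (three sequential 3×3 box filters for the heavy filter, a center-weighted 3×3 kernel for the slight filter, blended voxel-by-voxel by the feature image) can be sketched in code. This is an illustrative numpy sketch, assuming the feature value weights the slightly filtered image, consistent with the description that feature values near 1 should mean little or no filtering; the function names are not from the patent.

```python
import numpy as np

def convolve3x3(img, kernel):
    """Naive 3x3 convolution with edge padding (illustration only)."""
    H, W = img.shape
    p = np.pad(img, 1, mode="edge")
    out = np.zeros((H, W))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + H, dx:dx + W]
    return out

def joint_filter(image, feature):
    """Blend heavy and slight filtering using the feature image as weights."""
    box = np.full((3, 3), 1.0 / 9.0)            # window-size-3 box filter
    heavy = convolve3x3(convolve3x3(convolve3x3(image, box), box), box)
    slight_kernel = np.ones((3, 3))
    slight_kernel[1, 1] = 19.0                  # center weight 19, 1 elsewhere
    slight = convolve3x3(image, slight_kernel / slight_kernel.sum())
    # feature ~ 1 keeps the slightly filtered voxel (lesions, edges);
    # feature ~ 0 keeps the heavily filtered voxel (uniform background)
    return feature * slight + (1.0 - feature) * heavy

# Demo: hot voxel at the center of a flat background
img = np.zeros((5, 5)); img[2, 2] = 10.0
feat = np.zeros((5, 5)); feat[2, 2] = 1.0      # feature image flags the hot voxel
out = joint_filter(img, feat)
# The flagged voxel is only slightly filtered; the background is heavily smoothed.
```

The flagged hot voxel retains most of its value through the center-weighted kernel, while the triple box filter spreads the surrounding values, mirroring the quantitation-preserving behavior described for IEC_Joint.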
- In this example, two reconstructed images are generated: one using a quadratic prior to obtain a (heavily) smoothed image, and the other using an edge-preserving prior to obtain an edge-preserved image. Using a feature image, these two images are combined in weighted fashion to synthesize the two reconstructed images into one joint image. A suitable weighted combination is:

    I_Joint(i) = f(i) I_EdgePreserved(i) + (1 - f(i)) I_Smooth(i)     (3)

The feature image was generated in the same way as described above with reference to
FIGURE 4, but using the real patient data (i.e. there is a trial reconstruction to extract the features), demonstrating that once the mechanism of generating the feature images is established (through IEC phantom studies), the mechanism is also applicable to patient studies. - If one reconstructed image is heavily smoothed (e.g. using a quadratic prior) and the other is edge-preserving (e.g. using an edge-preserving prior), then the combined image provides both the edge-preserving advantage of the edge-preserved image and the smoothing advantage of the smooth image, since the feature image provides extra information such as spatial frequency (i.e. how fast the image changes locally) and object boundary information. This extra information is used to decide which regions (or pixels) should be more heavily or more lightly smoothed.
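The two-prior synthesis just described reduces to a per-voxel convex combination. A minimal sketch, assuming (per the description) that high feature values select the edge-preserved reconstruction; the array values below are invented for illustration:

```python
import numpy as np

def synthesize(edge_preserved, smooth, feature):
    """Weighted synthesis of two reconstructions (cf. Equation (3)):
    feature ~ 1 keeps edge-preserved detail, feature ~ 0 keeps smoothing."""
    return feature * edge_preserved + (1.0 - feature) * smooth

# 1D illustration: a sharp edge and its heavily smoothed counterpart
edge_img   = np.array([0.0, 0.0, 5.0, 5.0])
smooth_img = np.array([1.0, 2.0, 3.0, 4.0])
feat       = np.array([0.0, 1.0, 1.0, 0.0])   # high near the edge
joint = synthesize(edge_img, smooth_img, feat)
# joint = [1.0, 0.0, 5.0, 4.0]: edge voxels keep the sharp values,
# flat regions keep the smoothed values.
```

Voxels flagged by the feature image inherit the edge-preserving reconstruction, while unflagged regions inherit the quadratic-prior smoothing, matching the qualitative behavior reported for FIGURES 6 and 7.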
-
FIGURE 6 shows transaxial slices of images of a patient study that illustrate the effectiveness of the foregoing synthesis of images generated using quadratic and edge-preserving priors, respectively. FIGURE 7 illustrates the effect of this synthesis for the same patient study using coronal slices. The liver region was significantly filtered in the synthesized image as compared to the edge-preserving image, but the small structures (such as the hot spot in the center) were preserved as compared to the smooth image using a quadratic prior. - More particularly,
FIGURE 6 shows the feature image (leftmost image in FIGURE 6) used to synthesize a MAP reconstructed image using an edge-preserving prior (second image from left) and a MAP reconstructed image using a (non-edge-preserving) quadratic prior (third image from left, i.e. "smooth" image). Again, the feature image was generated in the same way as for the NEMA IEC phantom study above. The rightmost image in FIGURE 6 is the synthesized image combined using Equation (3) with the feature image (leftmost image of FIGURE 6) providing the f(i) weights. The synthesized image exhibits preservation of the small structures in the image and filtering of the soft tissue (indicated by the black regions in the feature image). This final image was better than either of the MAP images (middle two images of FIGURE 6). -
FIGURE 7 shows coronal slices of the same patient as in FIGURE 6, illustrating the effectiveness of using the feature image (leftmost image in FIGURE 7) to obtain the final synthesized image (rightmost image in FIGURE 7) that has both the advantage of edge-preservation of small features in the edge-preserving image (second image from the left, MAP reconstruction using an edge-preserving prior) and the advantage of smoothness of the liver and mediastinum of the smooth image (third image from the left, MAP reconstruction using a quadratic prior). - The same synthesis approach can be applied to generate a feature image-weighted combination of two images generated using two different image refinement processes. For example, an edge adaptive anisotropic diffusion filter (ADF) can be used with two different parameter settings to obtain an edge-preserving image and a smooth image, respectively. A feature image may then be used to synthesize the two images to obtain the final image. In any such approach, the feature image is generated from a difference image generated by subtracting two update images of iterative image processing (either an iterative reconstruction or an iterative image refinement), with the update images selected to emphasize the features of interest.
- In a further example, a feature image is used to provide reconstruction parameter guidance. In regularized reconstruction, one can use a quadratic prior of variable strength (guided by the feature image) to guide the regularization. For example, values of 1 in the feature image would reduce the smoothing strength of the quadratic prior, and lower values would gradually enable it. The resulting image reconstruction will apply selective regularization using the extra information from the feature image, leading to optimized regularization in one reconstruction (as compared to performing two different reconstructions as in the example described with reference to
FIGURES 6 and 7). -
FIGURE 8 illustrates an example of this single-reconstruction approach. Using a feature image for selective regularization in regularized reconstruction obtained advantageous lesion quantitation preservation and noise reduction in the background. The leftmost image in FIGURE 8 shows a reconstruction using classical OSEM without noise control: lesions were sharp but the background was noisy. The middle image in FIGURE 8 shows a regularized reconstruction using a quadratic prior, which effectively suppressed noise in the background - but small lesions were also smoothed, and the contrast was decreased significantly. The rightmost image in FIGURE 8 is a regularized image where the strength of the quadratic prior was modulated by using a feature image to guide the selective regularization voxel-by-voxel and to preserve the edges. Once again, the feature image was created in the same way as for the NEMA IEC phantom study above. This approach provided comparable lesion preservation as the edge-preserving image with significantly reduced/suppressed background noise, particularly in the warm regions. In another example, one can use combinations of different priors, such as an edge-preserving prior in regions where the feature image has high values; for voxels with small values in the feature image, one can use a stronger low-pass quadratic prior. - With reference now to
FIGURE 9, the feature image can additionally or alternatively be displayed to provide the physician or other medical professional with visual guidance as to the features detected via the difference image. In illustrative FIGURE 9, the feature image 40 is displayed side-by-side with a clinical image 90 on a display device 92, e.g. the LCD, plasma, or other graphical display component of a radiology workstation, oncology workstation, or other computing device, films, etc., used by the medical professional to review medical images. The clinical image 90 may optionally be generated leveraging the feature image 40 as disclosed herein, or may be generated without resort to the feature image 40. As an example of the latter, the clinical image 90 may be generated by MAP reconstruction using an edge-preserving prior. This can lead to significant noise retention - however, the medical professional is assisted in detecting lesions in spite of this noise by reference to the "features guide" which is the displayed feature image 40. - Additionally or alternatively, the
feature image 40 may be used in scoring lesions identified by the medical professional. Such scoring employs various factors or metrics in providing a quantitative assessment of the likelihood that the feature identified as a lesion by the medical professional is indeed a lesion, rather than being noise or some other image artifact. Since the feature image using the illustrative scaling/weighting scheme has pixel or voxel values near 1 for features and values near zero otherwise, the sum of pixel or voxel values of the feature image 40 within the area or volume identified as a lesion by the physician is a metric of how likely it is that the lesion identification is correct. Thus, for example, the average pixel or voxel value over the area or volume of the lesion:

    score = (1/N) Σ f(i), summed over the pixels or voxels i of the identified area or volume

where N is the number of pixels or voxels in the identified area or volume, may serve as a quantitative lesion score. - The invention has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims.
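The lesion-scoring metric described above (average feature-image value over the physician-identified region) can be sketched as follows; the function name and the array values are illustrative assumptions, not from the patent:

```python
import numpy as np

def lesion_score(feature, lesion_mask):
    """Average feature-image value over the area/volume identified as a
    lesion; values near 1 support the identification, values near 0
    suggest noise or some other image artifact."""
    return float(feature[lesion_mask].mean())

feat = np.array([[0.9, 1.0],
                 [0.1, 0.0]])
roi = np.array([[True, True],
                [False, False]])   # hypothetical physician-drawn region
score = lesion_score(feat, roi)    # average of 0.9 and 1.0
```

A score near 1 indicates that the flagged region changed strongly between iterations, consistent with a small structure such as a lesion rather than background noise.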
Claims (14)
- An image processing method implemented on a computer (22) comprising: performing iterative image reconstruction (26) on tomographic projection or k-space imaging data to generate an iteratively reconstructed image,
wherein the iterative processing generates a series of update images ending in
the iteratively reconstructed image;
characterized by generating a difference image (34, 64) as a difference between corresponding pixels or voxels of a choice of two update images (30, 32, 60, 62) selected from the series of update images; and using the difference image in post-processing (44) performed on the iteratively reconstructed image. - The image processing method of claim 1 wherein the difference image (34, 64) is an absolute difference image between the two update images (30, 32, 60, 62) in which each pixel or voxel of the absolute difference image is computed as the absolute value of the difference between corresponding pixels or voxels of the two update images.
- The image processing method of claim 1 wherein the difference image (34, 64) between the two update images (30, 32, 60, 62) has pixel or voxel values that indicate large positive changes and large negative changes between the two update images.
- The image processing method of any one of claims 1-3 wherein the performed operations further include: transforming the difference image (34, 64) into a feature image (40, 70) by transformation operations (36, 66) including at least scaling or weighting pixels or voxels of the difference image; wherein the using comprises using the feature image in the post-processing (44) performed on the iteratively reconstructed image.
- The image processing method of claim 4 wherein the using comprises:
post-processing (44) the iteratively reconstructed or refined image using the feature image (40, 70) according to the image transformation:

    Iout(i) = (1 - f(i)) T1(I)(i) + f(i) T2(I)(i)

where I denotes the iteratively reconstructed or refined image, f(i) denotes corresponding pixels or voxels of the feature image (40, 70), and T1 and T2 are two different image transformations. - The image processing method of claim 5 wherein the two different image transformations T1 and T2 are two different image filters.
- The image processing method of claim 4 wherein the using comprises:
post-processing (44) the iteratively reconstructed or refined image using the feature image (40, 70) according to the image transformation: - The image processing method of any one of claims 1-7 wherein the post-processing (26, 56) includes iterative image refinement (56) performed on the iteratively reconstructed image to generate the iteratively refined image.
- The image processing method of claim 8 wherein the using comprises:
using the difference image (64) in iterations of the iterative image refinement (56) performed subsequent to producing the two update images (60, 62). - The image processing method according to claim 1, wherein the iteratively reconstructed image is
a first reconstructed image (80); the method further comprising: performing a second image reconstruction (83) on the projection or k-space imaging data to generate a second reconstructed image (82); generating the difference image (84) between a choice of two update images (30, 32, 60, 62) selected from the series of update images of the first reconstructed image, or between two images each selected from the group consisting of the first reconstructed image, an update image of the first image reconstruction, the second reconstructed image, and an update image of the second image reconstruction; and generating a final reconstructed image that combines the first reconstructed image and the second reconstructed image using the difference image. - The image processing method of claim 10 wherein the difference image (84) is between the first reconstructed image (80) and the second reconstructed image (82).
- A non-transitory storage medium storing instructions readable and executable by a computer (22) to cause the computer to perform the image processing method according to any one of claims 1-11.
- An image processing device comprising: a computer (22); and at least one non-transitory storage medium according to claim 12.
- The image processing device of claim 13 further comprising: a display component (92); wherein the performed operations further include simultaneously displaying, on the display component, both the feature image (40) and a clinical image (90).
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662377844P | 2016-08-22 | 2016-08-22 | |
PCT/EP2017/071175 WO2018037024A1 (en) | 2016-08-22 | 2017-08-22 | Feature-based image processing using feature images extracted from different iterations |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3501006A1 EP3501006A1 (en) | 2019-06-26 |
EP3501006B1 true EP3501006B1 (en) | 2020-11-04 |
Family
ID=59702715
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP17757745.9A Active EP3501006B1 (en) | 2016-08-22 | 2017-08-22 | Feature-based image processing using feature images extracted from different iterations |
Country Status (5)
Country | Link |
---|---|
US (1) | US11049230B2 (en) |
EP (1) | EP3501006B1 (en) |
JP (1) | JP7065830B6 (en) |
CN (1) | CN109844815A (en) |
WO (1) | WO2018037024A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10531275B2 (en) * | 2016-12-12 | 2020-01-07 | Commscope Technologies Llc | Cluster neighbor discovery in centralized radio access network using transport network layer (TNL) address discovery |
CN107705261B (en) * | 2017-10-09 | 2020-03-17 | 东软医疗系统股份有限公司 | Image reconstruction method and device |
US11721017B2 (en) * | 2021-03-31 | 2023-08-08 | James R. Glidewell Dental Ceramics, Inc. | CT reconstruction quality control |
CN113139518B (en) * | 2021-05-14 | 2022-07-29 | 江苏中天互联科技有限公司 | Section bar cutting state monitoring method based on industrial internet |
WO2023044653A1 (en) * | 2021-09-23 | 2023-03-30 | 京东方科技集团股份有限公司 | Display device system, and method for adaptively enhancing image quality |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6249693B1 (en) | 1999-11-01 | 2001-06-19 | General Electric Company | Method and apparatus for cardiac analysis using four-dimensional connectivity and image dilation |
US7660481B2 (en) | 2005-11-17 | 2010-02-09 | Vital Images, Inc. | Image enhancement using anisotropic noise filtering |
US8761478B2 (en) * | 2009-12-15 | 2014-06-24 | General Electric Company | System and method for tomographic data acquisition and image reconstruction |
US8938105B2 (en) * | 2010-10-28 | 2015-01-20 | Kabushiki Kaisha Toshiba | Denoising method and system for preserving clinically significant structures in reconstructed images using adaptively weighted anisotropic diffusion filter |
RU2013129823A (en) * | 2010-11-30 | 2015-01-10 | Конинклейке Филипс Электроникс Н.В. | ITERATIVE RECONSTRUCTION ALGORITHM WITH WEIGHT COEFFICIENT ON THE BASIS OF CONSTANT DISPERSION |
JP6141313B2 (en) * | 2011-12-13 | 2017-06-07 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | Automatic determination of regularization coefficients for iterative image reconstruction using regularization and / or image denoising |
US8903152B2 (en) * | 2012-06-29 | 2014-12-02 | General Electric Company | Methods and systems for enhanced tomographic imaging |
US9269168B2 (en) | 2013-03-15 | 2016-02-23 | Carestream Health, Inc. | Volume image reconstruction using data from multiple energy spectra |
US9959640B2 (en) | 2014-09-15 | 2018-05-01 | Koninklijke Philips N.V. | Iterative image reconstruction with a sharpness driven regularization parameter |
US9713450B2 (en) * | 2014-12-15 | 2017-07-25 | General Electric Company | Iterative reconstruction of projection data |
US20160327622A1 (en) * | 2015-05-05 | 2016-11-10 | General Electric Company | Joint reconstruction of activity and attenuation in emission tomography using magnetic-resonance-based priors |
US11200709B2 (en) * | 2016-12-27 | 2021-12-14 | Canon Medical Systems Corporation | Radiation image diagnostic apparatus and medical image processing apparatus |
-
2017
- 2017-08-22 JP JP2019510402A patent/JP7065830B6/en active Active
- 2017-08-22 US US16/325,213 patent/US11049230B2/en active Active
- 2017-08-22 EP EP17757745.9A patent/EP3501006B1/en active Active
- 2017-08-22 CN CN201780058475.8A patent/CN109844815A/en active Pending
- 2017-08-22 WO PCT/EP2017/071175 patent/WO2018037024A1/en unknown
Non-Patent Citations (1)
Title |
---|
None * |
Also Published As
Publication number | Publication date |
---|---|
JP7065830B6 (en) | 2022-06-06 |
CN109844815A (en) | 2019-06-04 |
US11049230B2 (en) | 2021-06-29 |
WO2018037024A1 (en) | 2018-03-01 |
US20190197674A1 (en) | 2019-06-27 |
EP3501006A1 (en) | 2019-06-26 |
JP2019524356A (en) | 2019-09-05 |
JP7065830B2 (en) | 2022-05-12 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1331792 Country of ref document: AT Kind code of ref document: T Effective date: 20201115 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602017026901 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20201104 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1331792 Country of ref document: AT Kind code of ref document: T Effective date: 20201104 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210205 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210204 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210304 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210204 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210304 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602017026901 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20210805 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20210831 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210831 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210831 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210304 Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210822 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210822 Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210831 Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210831 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20220823 Year of fee payment: 6 Ref country code: DE Payment date: 20220628 Year of fee payment: 6 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201104 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20170822 |