US20190236763A1 - Apparatus and method for context-oriented blending of reconstructed images - Google Patents
- Publication number
- US20190236763A1 (U.S. application Ser. No. 15/884,089)
- Authority
- US
- United States
- Prior art keywords
- image
- images
- weighting coefficients
- display
- blended
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
- A61B6/02—Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computerised tomographs
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
- A61B6/46—Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment with special arrangements for interfacing with the operator or the patient
- A61B6/461—Displaying means of special interest
- A61B6/463—Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5229—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
- A61B6/5235—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5258—Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/006—Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/001—Image restoration
- G06T5/002—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20216—Image averaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2211/00—Image generation
- G06T2211/40—Computed tomography
- G06T2211/424—Iterative
Definitions
- This disclosure relates to generating a blended image by blending reconstructed images of varying levels of smoothing/denoising based on context information, and, more particularly, to using the context information to select the relative contributions of the reconstructed images to the blended image.
- Computed tomography (CT) systems and methods are widely used, particularly for medical imaging and diagnosis.
- A CT scan can be performed by positioning a patient on a CT scanner in a space between an X-ray source and an X-ray detector, and then taking X-ray projection images through the patient at different angles as the X-ray source and detector are rotated through a scan.
- The resulting projection data is referred to as a CT sinogram, which represents attenuation through the body as a function of position along one or more axes and as a function of projection angle along another axis.
- Compared to filtered back-projection, iterative reconstruction (IR) methods can provide improved image quality at reduced radiation doses.
- Various iterative reconstruction (IR) methods exist, such as the algebraic reconstruction technique. For example, one common IR method performs unconstrained (or constrained) optimization to find the argument p that minimizes the expression
- ‖Ap − ℓ‖²_W + βU(p),
- wherein A is the system matrix and ℓ is the measured projection data.
- each matrix value a ij (i being a row index and j being a column index) represents an overlap between the volume corresponding to voxel p j and the X-ray trajectories corresponding to projection value i .
- The data-fidelity term ‖Ap − ℓ‖²_W is minimized when the forward projection A of the reconstructed image p provides a good approximation to all measured projection data ℓ.
- The notation ‖g‖²_W signifies a weighted inner product of the form gᵀWg, wherein W is the weight matrix (e.g., expressing a reliability or trustworthiness of the projection data based on a pixel-by-pixel signal-to-noise ratio).
- the weight matrix W can be replaced by an identity matrix.
- The above IR method is referred to as a penalized weighted least squares (PWLS) approach.
- The function U(p) is a regularization term, and this term is directed at imposing one or more constraints (e.g., a total variation (TV) minimization constraint), which often have the effect of smoothing or denoising the reconstructed image.
- The value β is a regularization parameter that weights the relative contributions of the data-fidelity term and the regularization term.
- The choice of the value for the regularization parameter β typically affects a tradeoff between noise and resolution.
- Increasing the regularization parameter β reduces the noise, but at the cost of also reducing resolution.
- The best value for the regularization parameter β can depend on multiple factors, the primary factor being the application for which the image is reconstructed. Because IR algorithms can be slow and require significant computational resources, a cut-and-try approach (e.g., repeating the IR method with different values of the regularization parameter β until an optimal solution is obtained) is inefficient.
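To make the tradeoff concrete, the PWLS-style cost described above can be sketched in a few lines of Python. This is a purely illustrative toy, not the patent's implementation: the two-ray/two-voxel system, the quadratic roughness penalty, and all names are hypothetical.

```python
def pwls_cost(A, p, ell, w, beta, U):
    """PWLS-style cost: ||Ap - ell||_W^2 + beta * U(p).

    A    : system matrix as a list of rows (a_ij = overlap of voxel j with ray i)
    p    : candidate image (list of voxel values)
    ell  : measured log projection data
    w    : per-ray weights (diagonal of W, e.g. SNR-based reliability)
    beta : regularization parameter trading noise against resolution
    U    : regularizer, e.g. a roughness penalty on the image
    """
    fidelity = 0.0
    for i, row in enumerate(A):
        r = sum(a * pj for a, pj in zip(row, p)) - ell[i]
        fidelity += w[i] * r * r
    return fidelity + beta * U(p)

# Toy 2-ray, 2-voxel system; the roughness penalty is the squared
# difference between the two neighboring voxels.
A = [[1.0, 0.0], [0.0, 1.0]]
ell = [1.0, 3.0]
w = [1.0, 1.0]
roughness = lambda p: (p[0] - p[1]) ** 2

smooth = [2.0, 2.0]   # perfectly smooth image, poor data fit
exact = [1.0, 3.0]    # exact data fit, rough image

# A large beta favors the smooth image; beta = 0 favors the exact fit.
assert pwls_cost(A, smooth, ell, w, 10.0, roughness) < pwls_cost(A, exact, ell, w, 10.0, roughness)
assert pwls_cost(A, exact, ell, w, 0.0, roughness) < pwls_cost(A, smooth, ell, w, 0.0, roughness)
```

The two assertions show why no single β is universally best: which image "wins" flips as β changes, which is precisely the motivation for blending images reconstructed with different β values.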
- A single CT scan can be used for more than one clinical application, and, therefore, an ability to adjust the reconstructed image with regard to the tradeoff between noise and resolution, without repeating the computationally intensive IR algorithm, is desirable.
- improved methods are desired for rapidly generating and modifying a reconstructed image to optimize a tradeoff between noise and resolution.
- FIG. 1A shows an example of a lung region of a computed tomography (CT) image generated using a small smoothing/denoising parameter and displayed using lung display settings, according to one implementation;
- FIG. 1B shows an example of the same lung region using the same display settings as in FIG. 1A , except the CT image was generated using a large smoothing/denoising parameter rather than the small smoothing/denoising parameter used in FIG. 1A , according to one implementation;
- FIG. 2A shows an example of a soft-tissue region of the CT image generated using the small smoothing/denoising parameter and displayed using soft-tissue display settings, according to one implementation;
- FIG. 2B shows an example of the same soft-tissue region and soft-tissue display settings as in FIG. 2A, except the CT image was generated using the large smoothing/denoising parameter, according to one implementation;
- FIG. 3 shows a flow diagram of a method of generating a blended image based on the content/context of a CT image, according to one implementation
- FIG. 4A shows a 2D image of the CT slice displayed using the default display settings, according to one implementation;
- FIG. 4B shows a histogram of the CT slice binned according to HU values, in which HU values are represented along the horizontal axis with counts of voxels in the respective HU bins represented along the left vertical axis; on the right vertical axis is a color legend, and, in the default display settings, the HU values are related to the colors in the color legend by the line superimposed over the histogram;
- FIG. 5A shows a 2D image of the CT slice displayed using the lung settings
- FIG. 5B shows the histogram of the CT slice together with the line that translates HU values to colors according to the lung settings
- FIG. 6A shows a 2D image of the CT slice displayed using the soft-tissue settings
- FIG. 6B shows the histogram of the CT slice together with the line that translates HU values to colors according to the soft-tissue settings
- FIG. 7 shows a plot of an example of a weighting function representing a weighting value α along the vertical axis and an attenuation density in HU along the horizontal axis, the weighting value α being a weight used for combining a stack of two images generated using different smoothing/denoising parameters, according to one implementation;
- FIG. 8 shows a 2D map of the weighting value α as a function of position within the 2D image of the CT slice, according to one implementation;
- FIG. 9 shows a full 2D slice of the blended image (center) generated using the 2D map of the weighting value α from FIG. 8; superimposed on the full slice image are two zoomed-in images: a zoomed-in image of the soft-tissue region (upper left) and a zoomed-in image of the lung region (lower right), displayed respectively using the soft-tissue and lung window settings, according to one implementation;
- FIG. 10 shows a diagram of a data-processing apparatus for performing the methods described herein, according to one implementation.
- FIG. 11 shows a schematic of an implementation of a CT scanner.
- The methods provided herein address the above-discussed challenges with regard to optimizing a tradeoff between resolution and noise based on the particular content/context of a displayed image.
- These methods address the aforementioned challenges by, e.g., using the content/context of an image to control the generation of a blended image that is a weighted combination of two or more reconstructed images having different degrees of smoothing/denoising (also referred to as amounts or levels of smoothing/denoising) and the corresponding tradeoffs in resolution.
- the indicator of the image content/context can be a display setting (e.g., a slice thickness, window width, and/or window level selected by a user or by default), or the indicator of the image content can be a regional/segmented histogram of the Hounsfield Units (HU) or derivative thereof.
- FIGS. 1A and 1B show two images of the same lung region but with different degrees of denoising (which herein is interchangeably referred to as smoothing).
- FIG. 9, which is discussed below, shows that this lung region is part of a larger slice taken from a reconstructed image of a chest.
- FIGS. 2A and 2B show two images of the same soft-tissue region with different degrees of denoising.
- FIGS. 1A and 2A represent a first degree of denoising
- FIGS. 1B and 2B represent a second degree of denoising with more denoising/smoothing than the first degree of denoising shown in FIGS. 1A and 2A .
- For the soft-tissue images, FIG. 2B is generally regarded as being better for clinical applications because the additional resolution in FIG. 2A does not convey significantly more information, whereas the additional noise in FIG. 2A creates texture and structure that is distracting and could potentially lead to a poor diagnosis or, during an interventional procedure, a poor outcome. Accordingly, a greater degree of denoising and smoothing can be beneficial for soft-tissue images.
- In contrast, FIG. 1A is generally regarded as being better for clinical applications because the additional resolution in FIG. 1A is significant to being able to distinguish the features of the lungs (e.g., the feature pointed to by the arrow in FIG. 1A), and, compared to the larger window width in the lung settings and the commensurately higher contrast signals in the lung regions, the additional noise is not as significant as in the soft-tissue region. Consequently, the additional noise due to less smoothing obscures relatively little in the lung region, and the drawbacks of the additional noise are outweighed by the benefits of the improved resolution exhibited in FIG. 1A.
- The degree of denoising can depend on the content/context of an image, or, more particularly, the content of a region of interest within the reconstructed image. That is, different regions within the same image can benefit from different degrees of denoising.
- The benefit of the denoising can depend on the thickness of the slice of the image. For example, a thicker slice averages together more layers of voxels from the reconstructed image, and, under an assumption of statistically independent noise between voxels of the reconstructed images, the signal-to-noise ratio (SNR) can be expected to grow as the square root of the number of voxels being averaged.
- the methods herein use the thickness of the slice in addition to other indicia of the content/context when determining the relative weights between high- and low-denoising images that are combined to generate a blended image.
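The square-root SNR scaling from slice averaging can be checked numerically. The sketch below is illustrative only (the voxel counts, signal level, and noise level are hypothetical, not from the patent): averaging four statistically independent thin slices should roughly halve the noise standard deviation.

```python
import random
import statistics

random.seed(0)

def noisy_slice(n_voxels, signal=100.0, sigma=10.0):
    # One thin slice: constant signal plus independent Gaussian noise.
    return [signal + random.gauss(0.0, sigma) for _ in range(n_voxels)]

def average_slices(slices):
    # Thick-slice value at each position = mean across the averaged thin slices.
    n = len(slices[0])
    return [sum(s[k] for s in slices) / len(slices) for k in range(n)]

thin = noisy_slice(20000)
thick = average_slices([noisy_slice(20000) for _ in range(4)])

# Averaging 4 independent slices should cut the noise standard deviation
# roughly in half, i.e. the SNR grows like sqrt(4) = 2.
ratio = statistics.stdev(thin) / statistics.stdev(thick)
assert 1.7 < ratio < 2.3
```

This is why a thicker displayed slice can tolerate a less-smoothed (sharper) reconstruction: the display-time averaging already supplies part of the noise reduction.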
- A blended image can be a weighted combination of images with different degrees of denoising. For example, through a weighted sum of two images (one having a low degree of denoising and the other having a high degree of denoising), a blended image can be generated to have any degree of denoising between these two original images.
- This continuum for tuning the tradeoff between noise and resolution is achieved, e.g., by adjusting the images' relative weights in the sum.
- these relative weights can vary as a function of position to represent spatial variations in the content of the reconstructed image (e.g., by segmenting the reconstructed image into lung regions and soft-tissue regions).
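A position-dependent weighted sum of the kind described above can be sketched as follows. This is an illustrative toy, not the patent's implementation; the 2x2 "slice" and the alpha map are hypothetical.

```python
def blend(p_small, p_large, alpha):
    """Blend two reconstructions voxel by voxel.

    p_small : image reconstructed with a small smoothing parameter (sharp, noisy)
    p_large : image reconstructed with a large smoothing parameter (smooth)
    alpha   : per-voxel weight map in [0, 1]; at each voxel the two weights
              (alpha and 1 - alpha) sum to one, as in a normalized weighted sum
    """
    return [[a * s + (1.0 - a) * l for a, s, l in zip(ar, sr, lr)]
            for ar, sr, lr in zip(alpha, p_small, p_large)]

# 2x2 toy slice: the left column is treated as a "lung" region (favor the
# sharp image), the right column as "soft tissue" (favor the smooth image).
p_small = [[10.0, 20.0], [30.0, 40.0]]
p_large = [[12.0, 22.0], [32.0, 42.0]]
alpha   = [[1.0, 0.0], [1.0, 0.0]]

blended = blend(p_small, p_large, alpha)
assert blended == [[10.0, 22.0], [30.0, 42.0]]
```

Because alpha varies with position, each region of the blended image lands at its own point on the noise/resolution continuum without rerunning the reconstruction.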
- The original images having different degrees of denoising can be obtained by various means. For example, as discussed above, different values for the regularization parameter β can be used during an IR method. But this is not the only way to generate images with different degrees of denoising/smoothing: additional methods of denoising can be applied during image reconstruction, as well as before and/or after image reconstruction, in any combination, using any of the methods described below or the IR method described above.
- Various methods can be used to minimize noise in images that are reconstructed from computed tomography (CT) projection data.
- For example, post-processing (i.e., post-reconstruction) denoising methods can be applied to the data after the reconstructed image has been generated.
- An image can be reconstructed using an IR method, which minimizes a cost function having both a data-fidelity term and a regularization term.
- the amount of noise in the reconstructed image and the statistical properties of the noise can depend on the type of regularizer and the magnitude of the regularization parameter ⁇ .
- As the regularization parameter β becomes larger, the regularization term is emphasized more relative to the data-fidelity term, reducing the noise in the reconstructed image.
- this increased emphasis on the regularization term can also decrease the resolution and contrast, which is especially noticeable for fine and low-contrast features, such as those common in the lungs, as discussed above.
- The parameter β is used herein as an identifier to represent a general smoothing and/or denoising parameter characterizing the degree of smoothing/denoising of a given reconstructed image, whether the degree of smoothing/denoising arises from the value of the regularization parameter, a pre-processing (pre-reconstruction) method, a post-processing (post-reconstruction) method, or a combination thereof.
- Context will make clear those instances when the parameter β specifically refers to the regularization parameter, as opposed to referring more generally to an identifier of the type or degree of denoising/smoothing.
- The blended image is a weighted sum of a small-smoothing-parameter image p(β_S) and a large-smoothing-parameter image p(β_L), and the small- and large-smoothing-parameter images are identical in all aspects, including being reconstructed using the same IR method, except for using different values of the regularization parameter β in the cost function of the IR method.
- the stack of images can include more than two images.
- The stack of images can be variously combined (e.g., as a weighted algebraic or geometric average) to create the blended image, according to desired characteristics.
- this combining of the stack of images can be performed as a weighted sum in which the weights add up to a constant value (e.g., the weights are normalized to sum to the value one).
- The stack of images corresponding to greater and lesser denoising/smoothing amounts can be generated using various post- and/or pre-reconstruction denoising methods, reconstruction methods that integrate denoising (e.g., through the selection of a regularizer), or a combination of denoising integrated with the reconstruction method together with post- and/or pre-reconstruction denoising methods.
- Each of the denoising methods described below can be applied, e.g., to the sinogram/projection data prior to reconstruction or to a reconstructed image after reconstruction, and some of the denoising methods described below can also be applied to a reconstructed image between iterations of an IR method.
- the various denoising methods can include linear smoothing filters, anisotropic diffusion, non-local means, and nonlinear filters.
- Linear smoothing filters remove noise by convolving the original image with a mask that represents a low-pass filter or smoothing operation.
- In Gaussian filtering, for example, the mask comprises elements determined by a Gaussian function. The convolution brings the value of each pixel into closer agreement with the values of its neighbors.
- a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors: the Gaussian filter is just one possible set of weights.
- smoothing filters tend to blur an image because pixel intensity values that are significantly higher or lower than the surrounding neighborhood are smeared or averaged across their neighboring area. Sharp boundaries become fuzzy.
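A minimal 1-D sketch of such a linear smoothing filter is shown below (illustrative only; the radius, sigma, edge-replication boundary handling, and test signal are hypothetical choices, not from the patent):

```python
import math

def gaussian_mask(radius, sigma):
    # Discrete Gaussian weights, normalized so the mask sums to one.
    w = [math.exp(-(k * k) / (2.0 * sigma * sigma)) for k in range(-radius, radius + 1)]
    total = sum(w)
    return [v / total for v in w]

def smooth(signal, mask):
    # 1-D convolution with edge replication: each sample becomes a weighted
    # average of itself and its neighbors, pulling it toward local agreement.
    radius = len(mask) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(mask):
            j = min(max(i + k - radius, 0), len(signal) - 1)
            acc += w * signal[j]
        out.append(acc)
    return out

# A sharp spike is smeared across its neighborhood: boundaries become fuzzy.
signal = [0.0, 0.0, 10.0, 0.0, 0.0]
result = smooth(signal, gaussian_mask(1, 1.0))
assert max(result) < 10.0                      # peak reduced
assert result[1] > 0.0 and result[3] > 0.0     # energy spread to neighbors
assert abs(sum(result) - sum(signal)) < 1e-9   # normalized mask preserves total
```

The last assertion illustrates why this is a pure smoothing operation: intensity is redistributed, not removed, which is exactly what blurs sharp boundaries.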
- Local linear filter methods assume that local neighborhoods are homogeneous, and local linear filter methods, therefore, tend to impose homogeneity on the image, obscuring non-homogeneous features, such as lesions or organ boundaries.
- Anisotropic diffusion removes noise while preserving sharp edges by evolving an image under a smoothing partial differential equation similar to the heat equation. If the diffusion coefficient were spatially constant, this smoothing would be equivalent to linear Gaussian filtering, but when the diffusion coefficient is anisotropic according to the presence of edges, the noise can be removed without blurring the edges of the image.
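An edge-preserving diffusion step of this kind can be sketched in 1-D with a Perona–Malik-style diffusion coefficient (an illustrative sketch under assumed parameters; the step size dt, the edge threshold kappa, and the test signal are hypothetical):

```python
import math

def perona_malik_step(u, dt=0.2, kappa=1.0):
    """One explicit step of 1-D anisotropic (Perona-Malik-style) diffusion.

    The diffusion coefficient g shrinks where the local gradient is large,
    so strong edges diffuse slowly while flat, noisy regions are smoothed.
    """
    g = lambda grad: math.exp(-(grad / kappa) ** 2)
    out = list(u)
    for i in range(1, len(u) - 1):
        east = u[i + 1] - u[i]
        west = u[i - 1] - u[i]
        out[i] = u[i] + dt * (g(east) * east + g(west) * west)
    return out

# A small ripple (noise) on a flat region vs. a large step (edge).
u = [0.0, 0.1, 0.0, 0.1, 0.0, 10.0, 10.1, 10.0, 10.1, 10.0]
v = perona_malik_step(u)

# The noise ripple is damped...
assert abs(v[1] - v[2]) < abs(u[1] - u[2])
# ...while the 10-unit edge barely moves (g(10) is essentially zero).
assert abs(v[5] - v[4]) > 9.9
```

With a constant diffusion coefficient the same update would reduce to Gaussian-like blurring; making g depend on the gradient is what keeps the edge intact.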
- a median filter is an example of a nonlinear filter and, if properly designed, a nonlinear filter can also preserve edges and avoid blurring.
- a median filter operates, for example, by evaluating each pixel in the image, sorting the neighboring pixels according to intensity, and replacing the original value of the pixel with the median value from the ordered list of intensities.
- the median filter is one example of a rank-conditioned rank-selection (RCRS) filter.
- median filters and other RCRS filters can be applied to remove salt and pepper noise from an image without introducing significant blurring artifacts.
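The salt-and-pepper behavior described above is easy to demonstrate. The sketch below is illustrative (the radius, edge replication, and test signal are hypothetical choices):

```python
def median_filter(signal, radius=1):
    """Replace each sample with the median of its neighborhood (edge replication)."""
    out = []
    n = len(signal)
    for i in range(n):
        # Sort the neighboring samples by intensity and take the middle one.
        window = [signal[min(max(i + k, 0), n - 1)] for k in range(-radius, radius + 1)]
        window.sort()
        out.append(window[len(window) // 2])
    return out

# Salt (99.0) and pepper (0.0) outliers are removed outright, while the
# genuine step edge between indices 4 and 5 is preserved without blurring.
noisy = [1.0, 1.0, 99.0, 1.0, 1.0, 5.0, 5.0, 0.0, 5.0, 5.0]
clean = median_filter(noisy)
assert clean == [1.0, 1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0]
```

Unlike a linear smoothing mask, the median never invents intermediate values, which is why the step edge survives intact.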
- a filter using a total-variation (TV) minimization regularization term can be used where it is assumed that the areas being imaged are uniform over discrete areas with relatively sharp boundaries between the areas.
- TV filter is another example of a nonlinear filter.
- In non-local means filtering, rather than performing a weighted average of pixels according to their spatial proximity, pixels are averaged according to the similarity between patches within the images.
- noise is removed based on non-local averaging of all the pixels in an image—not just the neighboring pixels.
- the amount of weighting for a pixel is based on the degree of similarity between a small patch centered near that pixel and another small patch centered on the pixel being denoised.
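A minimal 1-D sketch of patch-similarity weighting follows (illustrative only; the patch radius, filtering parameter h, and test signal are hypothetical, and a practical implementation would restrict the search window for speed):

```python
import math

def nlm_denoise(signal, patch_radius=1, h=1.0):
    """Non-local means: each sample becomes a weighted average of ALL samples,
    weighted by the similarity of the small patches centered on them."""
    n = len(signal)

    def patch(i):
        # Patch around sample i, with edge replication at the boundaries.
        return [signal[min(max(i + k, 0), n - 1)]
                for k in range(-patch_radius, patch_radius + 1)]

    out = []
    for i in range(n):
        pi = patch(i)
        weights, acc = 0.0, 0.0
        for j in range(n):
            # Similarity of the patch around j to the patch around i.
            d2 = sum((a - b) ** 2 for a, b in zip(pi, patch(j)))
            w = math.exp(-d2 / (h * h))
            weights += w
            acc += w * signal[j]
        out.append(acc / weights)
    return out

# The two 5.0 samples are far apart spatially but sit in identical patches,
# so they reinforce each other even though they are not neighbors.
signal = [0.0, 5.0, 0.0, 9.0, 0.0, 5.0, 0.0]
out = nlm_denoise(signal)
assert abs(out[1] - out[5]) < 1e-9   # identical patches -> identical output
assert out[1] > 4.9                  # the repeated feature is preserved
```

This is the "non-local" part: spatially distant but structurally similar pixels contribute heavily, while nearby but dissimilar pixels contribute almost nothing.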
- various degrees of denoising/smoothing can be achieved through a choice of regularizer and IR method.
- regularization can be expressed as a constraint. For example, enforcing positivity for the attenuation coefficients can provide a level of regularization based on the practical assumption that there are no regions in the object OBJ that cause an increase (i.e., gain) in the intensity of the X-ray radiation.
- A TV regularizer in conjunction with projection on convex sets (POCS) can be used to achieve desirable image characteristics in many clinical imaging applications.
- The TV regularizer can be incorporated into the cost function, e.g.,
- p* = arg min_p ‖Ap − ℓ‖²_W + β‖p‖_TV,
- wherein ‖p‖_TV = ‖∇p‖₁ is the ℓ₁-norm of the gradient-magnitude image, which is the isotropic TV semi-norm.
- the spatial-vector image ⁇ p represents a discrete approximation to the image gradient.
- Some regularizers can be imposed as constraints. For example, a combination of TV and POCS regularization can be imposed as constraints when the optimization problem is framed as
- p* = arg min_p ‖Ap − ℓ‖²_W  s.t.  ‖p‖_TV ≤ ε and p_j ≥ 0.
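The isotropic TV semi-norm used in these formulations can be computed directly from a 2-D image with forward differences. The sketch below is illustrative (the forward-difference discretization and the tiny test images are hypothetical choices):

```python
import math

def tv_seminorm(image):
    """Isotropic total-variation semi-norm: the l1-norm of the
    gradient-magnitude image, using forward differences in-bounds."""
    rows, cols = len(image), len(image[0])
    tv = 0.0
    for r in range(rows):
        for c in range(cols):
            dr = image[r + 1][c] - image[r][c] if r + 1 < rows else 0.0
            dc = image[r][c + 1] - image[r][c] if c + 1 < cols else 0.0
            tv += math.sqrt(dr * dr + dc * dc)
    return tv

flat = [[3.0, 3.0], [3.0, 3.0]]
step = [[0.0, 1.0], [0.0, 1.0]]

assert tv_seminorm(flat) == 0.0   # piecewise-constant: no variation penalty
assert tv_seminorm(step) == 2.0   # one unit jump along each of the two rows
```

Noise raises the TV value everywhere, while a clean piecewise-constant image with sharp boundaries keeps it low, which is why TV minimization suits objects that are uniform over discrete areas.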
- CT data can, in practice, be modeled by independent random variables following a Poisson distribution with additive Gaussian distribution to account for electronic noise in the measurement.
- The statistical model of the random variable Y_i measured by the detector element i can be described as
- Y_i ∼ Poisson{ȳ_i(p)} + Gaussian{0, σ_e²},
- wherein σ_e denotes the standard deviation of the electronic noise.
- ȳ_i(p) is the expected pre-log projection data related to the attenuation image p by means of a nonlinear transformation, which is given by
- ȳ_i(p) = b_i exp(−[Ap]_i),
- wherein b_i is the intensity of the incident X-ray flux at detector element i.
- the attenuation image p can be reconstructed, e.g., from the measurement y using a complex likelihood function or from the shifted data
- ŷ_i = [Y_i + σ_e²]₊ ∼ Poisson(ȳ_i(p) + σ_e²),
- which is the tractable shifted-Poisson model, wherein [·]₊ is a threshold function that sets negative values to zero.
- the shifted-Poisson model can be matched with that of the Poisson-Gaussian model, or the statistical model can be a Poisson model, a compound Poisson model, or any other statistical distribution or combination of statistical distributions representing the noise in the system.
- the image estimate is obtained by maximizing the log likelihood function of the shifted-Poisson model, which is given by
- p* = arg max_{p≥0} Σ_i [ŷ_i log(ȳ_i(p) + σ_o²) − (ȳ_i(p) + σ_o²)] − β U(p),
- U(p) is a regularizer that represents an image roughness penalty.
- the regularization term can be determined as the intensity difference between neighboring voxels, which is given by
- φ_δ(t) is the penalty function
- δ is a parameter that controls the smoothness of the penalty function
- w_jk is the weighting factor related to the distance between voxel j and voxel k in the neighborhood of voxel j.
- φ_δ(t) is the Huber function, which can be expressed as
- φ_δ(t) = ½ t², for |t| ≤ δ; φ_δ(t) = δ|t| − δ²/2, otherwise.
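The Huber function above is straightforward to implement; it is quadratic near zero and linear in the tails, so large neighbor differences (edges) are penalized less severely than by a purely quadratic regularizer:

```python
import numpy as np

def huber(t, delta):
    """Huber penalty: 0.5 * t**2 for |t| <= delta, and
    delta * |t| - delta**2 / 2 otherwise (continuous at |t| = delta)."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= delta,
                    0.5 * t**2,
                    delta * np.abs(t) - 0.5 * delta**2)
```

The two branches agree at |t| = delta, which keeps the penalty (and its gradient) continuous.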
- the regularization term U(p) can be a quadratic regularization term, a total variation minimization term, or any other regularization term.
- the above optimization problem can be solved by the separable paraboloidal surrogate (SPS) approach with acceleration by ordered subsets (OS), for example.
- any optimization method can be used to find the image that minimizes the cost function, including, for example, a gradient-descent method or other known methods.
- Further examples of optimization methods that can be used to minimize the cost function can include an augmented-Lagrangian method, an alternating direction-method-of-multiplier method, a Nesterov method, a preconditioned-gradient-descent method, an ordered subset method, or a combination of the foregoing.
- Determining an optimal smoothing parameter β can be challenging because the optimal value for the smoothing parameter β can depend on the content to be viewed. That is, a single value for the smoothing parameter β might not be optimal for all clinical or detection tasks, or even for one detection task across all of the regions within a single image. For example, when different types of regions (e.g., lung regions and soft-tissue regions) are displayed within a single image, it can be beneficial to segment the image into regions and apply region-adaptive blending to generate a blended image in which the relative weighting between images with small and large smoothing parameters β varies by region. This region-adaptive blending can achieve the best resolution-noise tradeoff on a region-by-region basis.
- FIGS. 1A and 1B (FIGS. 2A and 2B) show the same lung (soft-tissue) region for two CT images generated using different values for the regularization parameter β.
- the images are from a slice of a reconstructed chest image generated using an IR method that minimizes a cost function including a regularization parameter β.
- the CT image used to generate the slices shown in FIGS. 1A and 2A was reconstructed using a small smoothing parameter β (i.e., the regularization parameter β was small), whereas the CT image used to generate the slices shown in FIGS. 1B and 2B was reconstructed using a large smoothing parameter β.
- FIGS. 1A and 1B show the lung region displayed using a window width (WW) of 1500 Hounsfield Units (HU) and a window level (WL) of ⁇ 400 HU, which are standard width and level settings used to view lung regions.
- FIGS. 2A and 2B show the soft-tissue region displayed using a WW of 400 HU and a WL of 40 HU, which are standard width and level settings used to view soft-tissue regions.
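The window width/level display mapping used in these figures can be sketched as follows (a standard WW/WL-to-grayscale mapping; the function name and the [0, 1] output range are illustrative assumptions):

```python
import numpy as np

def apply_window(hu, ww, wl):
    """Map HU values to [0, 1] display gray levels for a given window
    width (WW) and window level (WL): values below WL - WW/2 clip to
    black, values above WL + WW/2 clip to white."""
    lo, hi = wl - ww / 2.0, wl + ww / 2.0
    return np.clip((np.asarray(hu, dtype=float) - lo) / (hi - lo), 0.0, 1.0)

# Lung window:        WW = 1500 HU, WL = -400 HU
# Soft-tissue window: WW =  400 HU, WL =   40 HU
```

With the narrow soft-tissue window, a 400 HU span covers the entire gray scale, which is why a given amount of noise is far more visible under soft-tissue settings than under lung settings.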
- the noise present when using the small smoothing parameter tends to obscure the features of the tissue, increasing the difficulty of clinical diagnosis.
- the narrower window width of 400 HU of the display settings signals that less noise can be tolerated before the noise significantly masks or otherwise obscures the signal.
- generating the CT image using a large regularization parameter β is optimal for soft-tissue regions.
- the methods herein dynamically adapt the displayed image according to signals (from the user and from the CT image itself) regarding the content/context being displayed. This is achieved by first acquiring a stack of images, each image having a different degree of smoothing/denoising, and then blending images from the image stack according to respective weights, which depend on the displayed content. That is, a blended image is generated and displayed using a weighted combination of the images from the image stack, and the weighting depends on indicia of the image content.
- the weights for the weighted combination of images are determined according to the display parameters (e.g., the window width and/or slice thickness) selected by a user. For example, on one hand, when a user chooses to display a slice of a reconstructed image using display settings for soft tissue with a WW of 400 HU and a WL of 40 HU, then the weights for the blended image can be selected to have greater contributions from those of the stack images with large smoothing parameters. On the other hand, when a user chooses to display a slice of a reconstructed image using lung settings, the weights for the blended image can be selected to have greater contributions from those of the stack images with small smoothing parameters.
- the weights can vary as a function of position within the blended image (i.e., spatially-varying weights or region-adaptive weights), and these spatially-varying weights can be used to increase the contributions of large-smoothing-parameter images to the blended image in regions identified as having characteristics of soft tissue and bone, while increasing the contributions of small-smoothing-parameter images in regions identified as having characteristics of lung, for example.
- the methods described herein are advantageous because a single smoothing parameter is not necessarily optimal for all detection/imaging tasks or for all regions.
- reconstructing a new image or applying a new post-reconstruction denoising method each time the image display parameters (e.g., the WW and WL) are changed is impractical.
- the same effect (i.e., optimizing the noise and resolution based on the display parameters) can instead be achieved by blending images from the stack.
- the two stack images p^(βS) and p^(βL) occupy different points within the resolution-noise tradeoff space (i.e., point “A” in the tradeoff space corresponds to p^(βS) and point “B” corresponds to p^(βL)).
- a blended image p^(Blended) can be generated at any point along a line segment in the tradeoff space extending from point “A” to point “B.” Translations along this line segment are achieved by merely changing the relative weights applied to the two images of the stack. That is, the relative weights used to generate the blended image determine where the blended image is positioned within the tradeoff space along the line segment between points “A” and “B.”
- a third CT image in the stack corresponding to a point “C” in the tradeoff space that is not on the same line as points “A” and “B” would allow a blended image to occupy any point within a triangle defined by points “A,” “B,” and “C.”
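The triangle argument above is a barycentric (convex) combination of three stack images; a minimal sketch (the function name and the weight-validation check are illustrative assumptions):

```python
import numpy as np

def blend_three(p_a, p_b, p_c, w):
    """Blend three stack images with barycentric weights w = (wa, wb, wc).
    With wa + wb + wc = 1 and wi >= 0, the blended image can occupy any
    point inside the triangle 'A'-'B'-'C' in the tradeoff space."""
    wa, wb, wc = w
    if abs(wa + wb + wc - 1.0) > 1e-9 or min(w) < 0.0:
        raise ValueError("weights must form a convex combination")
    return wa * np.asarray(p_a) + wb * np.asarray(p_b) + wc * np.asarray(p_c)
```

Setting one weight to 1 recovers the corresponding stack image exactly (a vertex of the triangle); interior weights interpolate between the three resolution-noise operating points.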
- the stack of images can include more than three images, with corresponding generalizations of the features described herein (e.g., for a stack of four images the blended image can occupy a quadrilateral in the tradeoff space) without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art.
- This blending of the stack images can be seamlessly integrated into the clinical workflow. For example, streamlining this process so that it occurs automatically when display parameters are changed makes it conducive to a clinical workflow in which a doctor needs to focus on tasks other than changing the smoothing/denoising parameters. Accordingly, to avoid needlessly complicating the user interface, in certain implementations, the adjustment of the blending parameters can be tied directly to the display parameters or otherwise automated based on available signals and information. Thus, the user interface does not become needlessly complicated, and the displayed CT images can be optimized without frequent and/or complicated user interactions, which would reduce productivity by distracting clinicians from their primary tasks.
- FIG. 3 shows a flow diagram of a method 100 for generating a blended image from a stack of images corresponding to different smoothing parameters (the term “smoothing parameter” is used as a shorthand for “smoothing/denoising parameter” and is interchangeable therewith).
- the methods described herein provide context-oriented blending of a stack of images, each representing a different point within a tradeoff space between noise and resolution.
- for the stack, multiple images are generated to have different degrees of smoothing/denoising, and the images can be respectively identified using different values of a smoothing/denoising parameter (e.g., by reconstructing the CT images using the same IR method and cost function but with different values for the regularization parameter β).
- in step 110 of method 100, projection data from a CT scan is obtained.
- CT images are acquired representing reconstructions from the projection data. These CT images form a stack of images, each having a respective smoothing parameter that is different from the other images in the stack. Any of the reconstruction methods, as well as any of the pre- and post-reconstruction denoising methods discussed herein, can be used to generate the CT images in the stack, and any other known methods of generating denoised CT images can also be used to generate the images in the stack.
- the smoothing/denoising parameter can be a vector including multiple values representing different characteristics of the respective images of the stack (e.g., a first value can represent a noise level and a second value can be a figure of merit to represent the resolution).
- the weighting value α, which is discussed below, can be a function with multiple inputs to weight the stack of images according to the content/context of the image and the multiple values of the smoothing/denoising parameter, which is a vector.
- signals indicating the content/context of the displayed image are obtained.
- the content/context indicia can be one or more display settings, such as the slice thickness, the window width, and the window level.
- the content/context indicia can be a map of the regional average of the attenuation density. Further, in certain implementations, these content/context indicia can be a segmentation of the image into tissue types.
- the content/context indicia can be information indicating a use/application/procedure intended for the CT scan or displayed image, or the content/context indicia can be information regarding which body part of the patient is being imaged. Additional variations or combinations of the content/context indicia can be used without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art.
- FIGS. 4A, 4B, 5A, 5B, 6A, and 6B illustrate the effect of choosing among various window settings.
- FIGS. 4A, 5A, and 6A respectively show two-dimensional views for a same slice of a chest CT image, but using different WW and WL settings in each image:
- FIGS. 4B, 5B, and 6B respectively show a histogram of voxel counts for the slice, binned according to HU value. Additionally, on the right-hand side, each of these figures includes a color legend of the HU values in the corresponding slice image and includes a line relating the HU values to the colors represented in the color legend. As discussed above and as illustrated by FIGS. 4A, 4B, 5A, 5B, 6A, and 6B, the optimal tradeoff between resolution and noise can depend on the display settings.
- the window width and slice thickness inform the optimal tradeoff between noise and resolution for several reasons.
- larger slice thicknesses correspond to reductions in noise due to the averaging of multiple voxel layers (e.g., shot noise in a Poisson distribution is reduced as the square root of the number of voxel layers).
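The square-root noise reduction from slice-thickness averaging can be checked numerically (a small simulation sketch; the layer count and mean count level are arbitrary illustration values):

```python
import numpy as np

# Shot noise in Poisson data falls roughly as the square root of the
# number of voxel layers averaged into a thicker slice.
rng = np.random.default_rng(1)
counts = rng.poisson(100.0, size=(8, 10000)).astype(float)  # 8 voxel layers
thin_noise = counts[0].std()              # noise of a 1-layer slice
thick_noise = counts.mean(axis=0).std()   # noise of an 8-layer average
# thin_noise / thick_noise should be close to sqrt(8) ~ 2.83
```

This is the quantitative basis for tolerating less smoothing (and hence better resolution) when the user selects a thicker display slice.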
- a larger window width is representative of larger signal and a greater tolerance for noise.
- the window width and slice thickness inform the optimal tradeoff between noise and resolution under the logic that noise becomes less significant as the window width becomes larger and/or the noise is reduced due to voxel-layer averaging, shifting the optimal tradeoff away from noise suppression and towards better resolution.
- as the window width becomes narrower, noise becomes more significant, and, therefore, the optimal tradeoff shifts towards increased noise suppression and away from better resolution.
- content/context indicia other than the window width and slice thickness can also inform the logic determining the optimal tradeoff. For example, when various content/context indicia indicate that the displayed image is being used for brain trauma diagnosis, strong smoothing is desired in a soft-tissue region to reveal internal bleeding while reduced smoothing is desired in bone regions to reveal fractures.
- in certain applications, super-resolution reconstruction is desired to maintain details in lung regions, while a pseudo normal-resolution image can be generated from the super-resolution reconstruction for diagnosing soft-tissue regions.
- the optimal tradeoff can be variously and automatically inferred from one or more of the content/context indicia discussed above or variation thereof as would be understood by a person of ordinary skill in the art.
- Table 1 shows a correspondence between a few content/context indicia corresponding to the display settings and various applications for CT imaging.
- in step 140 of method 100, the blending weight α is generated based on the content/context indicia.
- each of the smoothing/denoising parameters corresponding to an image in the stack can be optimized for a particular detection task (e.g., the detection of lung nodules or the detection of lesions in soft tissue). Then, depending on user-defined inputs related to the desired diagnosis (e.g., the WW and slice thickness), a blending weight α (or, more generally, as described below, a blending map) is generated using weights corresponding to the user inputs. In certain implementations, the blended image is generated automatically based on the WW and the slice thickness.
- blending is performed using a spatially varying blending map; the blending weight of each voxel is determined by a blending map that is obtained from classifying different organs in a reconstructed CT image (e.g., one or more of the images in the image stack).
- the blended image is generated using a single value for the blending weight α (e.g., the weight of the first image of the stack is α and the weight of the second image of the stack is (1 − α)), and the blending weight α is a function of only two input variables: the first variable being the window width variable ww and the second variable being the slice thickness variable st.
- blended image p (Blended) is a weighted summation of a small-smoothing-parameter image p ( ⁇ S) and a large-smoothing-parameter image p ( ⁇ L)
- the blending weight α can be given by the equation
- ω_min is a minimum value of the window width (e.g., 420 HU) and ω_max is a maximum value of the window width (e.g., 1200 HU).
- the two images p^(βS) and p^(βL) are blended with the blending weight α determined by the window width ww and slice thickness st currently selected by the user for the displayed image.
- the window width variable ww is in terms of HU and the slice thickness variable st is in terms of a number of voxel layers.
- when α = 1, the blended image p^(Blended) is entirely the small-smoothing-parameter image p^(βS)
- the above equation expresses, in part, the logic that noise becomes less significant and is not a primary concern when either (i) the window width is large or (ii) a larger slice thickness causes the noise to be reduced due to averaging multiple layers of voxels. Therefore, when at least one of these conditions is met, the optimal noise-resolution tradeoff is skewed in favor of improved resolution by increasing contributions of the small-smoothing-parameter image p ( ⁇ S) in the blended image.
- a narrower window width indicates that the signal is likely to have a small amplitude (e.g., the features of interest have low contrast, with small changes in HU values), increasing the importance of noise suppression by using a larger smoothing parameter. This is especially true in the absence of noise suppression due to layer averaging (i.e., when the slice thickness is small, corresponding to a single layer of voxels). Therefore, in this case, the optimal tradeoff shifts towards a blended image having increased contributions from the large-smoothing-parameter image p^(βL).
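The closed form of α(ww, st) is not reproduced here, but the logic above can be sketched as a clipped linear ramp in window width, with the effective width scaled by the square root of the slice thickness to mimic the layer-averaging noise reduction. The ramp shape and the sqrt(st) scaling are assumptions for illustration, not the patent's exact formula; only the example endpoints ω_min = 420 HU and ω_max = 1200 HU come from the text:

```python
import numpy as np

def blending_weight(ww, st, w_min=420.0, w_max=1200.0):
    """Hypothetical blending weight alpha(ww, st); alpha = 1 selects the
    small-smoothing-parameter image. A wide window or a thick slice
    (st voxel layers, which averages down the noise) pushes alpha
    toward 1; a narrow window with a single layer pushes it toward 0."""
    effective_ww = ww * np.sqrt(st)     # sqrt-of-layers scaling (assumption)
    return float(np.clip((effective_ww - w_min) / (w_max - w_min), 0.0, 1.0))
```

For example, lung settings (WW = 1500 HU) yield α = 1 even for a single layer, while soft-tissue settings (WW = 400 HU) yield α = 0 for a thin slice but move toward 1 as the slice thickens.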
- the blending weight α can be a blending map representing a blending ratio that has a spatial dependence, as shown in FIG. 8.
- FIGS. 7 and 8 both relate to blending maps.
- FIG. 7 illustrates an example of a function for the blending weight α, along the vertical axis, as a function of the average HU value within a given region.
- the average HU value can be determined using any one or a combination of images from the stack.
- the processes described below for determining the average HU values can be applied to the large-smoothing-parameter image p^(βL) because the large-smoothing-parameter image p^(βL) already exhibits a significant amount of smoothing, which is an averaging over nearby voxels. Therefore, calculating the average HU values from the large-smoothing-parameter image p^(βL) reduces the amount of additional smoothing required to obtain the average HU values.
- the average HU value can be calculated using a window function (e.g., a Gaussian, Hann, Hamming, Blackman-Harris, or other window function known in signal processing) to weight the averaging of surrounding pixels/voxels to calculate the local average HU value for each voxel within the reconstructed images of the stack.
- This can be performed as a convolution, for example (i.e., low-pass filtering).
- the image space can be segmented using, e.g., a threshold and region growing method or any other known segmentation method, and each segmented region can be averaged to obtain an average HU value for each of the segmented regions.
- transitions between regions can be smoothed, e.g., using a feathering function, a spline function, an arctangent function, or any other method known to smooth transitions between regions.
- blending weights α for the respective voxels are generated using a lookup table or are calculated using a function such as the function shown in FIG. 7.
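The local-average step described above can be sketched with a simple box window, a stand-in for the Gaussian/Hann/Hamming window functions mentioned in the text (the box window, its size, and the edge padding are illustrative assumptions):

```python
import numpy as np

def local_mean_hu(img, size=3):
    """Local average HU via a size-by-size box window, implemented as a
    same-size moving average (i.e., a convolution / low-pass filter)
    with edge padding so the output matches the input shape."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for di in range(size):
        for dj in range(size):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (size * size)
```

The smoothed HU map is then passed through a lookup table or a function like the one in FIG. 7 to produce a per-voxel blending weight.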
- FIGS. 4A, 4B, 5A, 5B, 6A, and 6B illustrate that HU values of certain regions tend to cluster according to region type, and this general association between HU values and region types is indicated by the labels along the horizontal axis of FIG. 7.
- dual-energy CT or spectrally-resolved CT can be used to perform material decomposition, and the complementary information provided by material decomposition can be used together with or in place of average HU value for enhanced discrimination of region types and for the subsequent selection of the optimal position-dependent weighted combination of images from the stack to generate the blended image.
- variations of the blending weight α function, including variations of the inputs (e.g., average HU value, material-component ratio, etc.) and the outputs (e.g., for N images in the stack there can be N−1 values of α per voxel to define the ratios between the N images), can be used without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art.
- FIG. 7 exemplifies the logic that different average HU values can be indicia of content/context, and, therefore, the average HU values can be used to determine the weightings used in generating the blended image. That is, which weighting is likely optimal can depend on the content in the region, which is indicated by the average HU value.
- as indicated in Table 1, specific imaging regions and applications can have unique optimality conditions, and different clinical and procedural applications of CT imaging can use variations of the function shown in FIG. 7 relating average HU to the blending weight α.
- the function shown in FIG. 7 has advantages when used for the above-identified application, including exhibiting high resolution in bone region together with noise suppression in the soft-tissue regions. Further, the function shown in FIG. 7 advantageously exhibits high resolution in the lung regions.
- in contrast-enhanced CT, high resolution is desired in the contrast-enhanced regions while low noise is desired in other regions.
- the function relating average HU to the blending weight α can depend on other content/context indicia of the clinical/procedural application or the body part being imaged.
- FIG. 8 shows a slice of a blending map generated by applying the blending-weight function of FIG. 7 to the stack of CT image used in generating FIGS. 1A, 1B, 2A, and 2B .
- the blending-weight function can be expressed as
- α_j = 1, for −1000 ≤ HU_j < −300; α_j = (HU_j − (−50)) / (−300 − (−50)), for −300 ≤ HU_j < −50; α_j = 0, for −50 ≤ HU_j < 400; α_j = (HU_j − 400) / (700 − 400), for 400 ≤ HU_j < 700; and α_j = 1, for 700 ≤ HU_j,
- FIG. 8 shows that a small amount of smoothing together with high resolution is selected in the lung regions, but most of the soft-tissue regions are optimized to suppress noise by using a large amount of smoothing/denoising.
- the front of the chest region is in the transition range between −300 HU and −50 HU and represents a compromise between high resolution and high noise reduction.
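The piecewise-linear blending-weight function above translates directly to code; α_j = 1 favors the small-smoothing-parameter (high-resolution) image in lung and bone HU ranges, α_j = 0 favors the large-smoothing-parameter image in soft tissue, with linear ramps in between:

```python
import numpy as np

def alpha_from_hu(hu):
    """Piecewise-linear blending weight alpha_j from the blending-weight
    function above: 1 below -300 HU (lung) and above 700 HU (bone),
    0 from -50 HU to 400 HU (soft tissue), with linear transitions over
    [-300, -50) HU and [400, 700) HU."""
    hu = np.asarray(hu, dtype=float)
    alpha = np.empty_like(hu)
    alpha[hu < -300] = 1.0                       # lung (and below)
    down = (hu >= -300) & (hu < -50)             # ramp 1 -> 0
    alpha[down] = (hu[down] - (-50.0)) / (-300.0 - (-50.0))
    alpha[(hu >= -50) & (hu < 400)] = 0.0        # soft tissue
    up = (hu >= 400) & (hu < 700)                # ramp 0 -> 1
    alpha[up] = (hu[up] - 400.0) / (700.0 - 400.0)
    alpha[hu >= 700] = 1.0                       # bone
    return alpha
```

Evaluating this function on the local-average HU map yields the spatially varying blending map of FIG. 8.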
- the blended image is generated by the weighted combination of images from the stack.
- the blended image can be generated by performing a weighted sum of the images from the stack.
- the blended image can be generated using the expression
- the blended image can be given by the equation p_j^(Blended) = α_j p_j^(βS) + (1 − α_j) p_j^(βL), where
- p_j^(Blended), p_j^(βS), and p_j^(βL) are respectively the j-th voxels of the blended image, the small-smoothing-parameter image, and the large-smoothing-parameter image.
- the blended image is a combination of images from the stack, wherein the relative contributions among the images from the stack can vary voxel by voxel.
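The voxel-by-voxel combination described above is a weighted sum with a spatially varying weight map:

```python
import numpy as np

def blend_with_map(p_small, p_large, alpha_map):
    """Voxelwise weighted combination: each voxel j of the blended image
    is alpha_j * p_j(small smoothing) + (1 - alpha_j) * p_j(large
    smoothing), so the resolution-noise tradeoff varies by region."""
    return alpha_map * p_small + (1.0 - alpha_map) * p_large
```

Where the map is 1 the blended image reproduces the high-resolution image exactly; where it is 0 it reproduces the heavily smoothed image; intermediate values interpolate.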
- spatially varying smoothing/denoising can be achieved to obtain an optimal resolution-noise tradeoff in every region of the blended image. Since the blending ratio is determined automatically, the user is not required to adjust the smoothing strength or blending ratio while navigating through different regions, which provides a simple and seamless user experience.
- variations of step 150, including three or more images in the stack and different weighted combinations of the images in the stack (e.g., weighted arithmetic averaging, weighted geometric averaging, weightings that are functions of a figure of merit for the noise and/or resolution, etc.), can be used without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art.
- FIG. 9 shows (center) a reconstructed image generated using the blending map illustrated in FIG. 8 . Also shown in FIG. 9 is a magnification (upper left) of the soft-tissue region represented in FIGS. 2A and 2B displayed using the soft-tissue settings, and a magnification (lower right) of the lung region represented in FIGS. 1A and 1B displayed using the lung settings. Comparing FIG. 9 with FIGS. 1A, 1B, 2A, and 2B reveals that, using method 100 with a blending map implementation, the desirable aspects of FIG. 1A and the desirable aspects of FIG. 2B have been combined within a single blended image. That is, method 100 produces a single image with low noise in the soft tissue region while preserving high resolution in the lung regions.
- a blended image can be generated automatically from a stack of two or more reconstructed images having different degrees of smoothing/denoising. Further, the blended image can be generated and displayed without additional input or burden on a user, eliminating the need to adjust the smoothing parameter or blending weight manually. Further, the tradeoff between resolution and noise can be simultaneously optimized in all regions of interest.
- the data-processing apparatus 300 for processing CTP data includes a CPU 301 which performs the processes described above, including method 100 shown in FIG. 3 , the processes described herein, and variations as would be known to a person of ordinary skill in the art.
- the process data and instructions may be stored in memory 302. These processes and instructions may also be stored on a storage medium disk 304 such as a hard drive (HDD) or portable storage medium or may be stored remotely.
- the claimed advancements are not limited by the form of the computer-readable media on which the instructions of the inventive process are stored.
- the instructions may be stored on CDs, DVDs, in FLASH memory, RAM, ROM, PROM, EPROM, EEPROM, hard disk or any other information processing device with which the data-processing apparatus 300 communicates, such as a server or computer.
- claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 301 and an operating system such as Microsoft Windows 7, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art.
- CPU 301 may be a Xeon or Core processor from Intel of America or an Opteron processor from AMD of America, or may be other processor types that would be recognized by one of ordinary skill in the art.
- the CPU 301 may be implemented using a GPU processor such as a Tegra processor from Nvidia Corporation and an operating system, such as Multi-OS.
- the CPU 301 may be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 301 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
- the data-processing apparatus 300 in FIG. 10 also includes a network controller 306 , such as an Intel Ethernet PRO network interface card from Intel Corporation of America, for interfacing with network 400 .
- the network 400 can be a public network, such as the Internet, or a private network such as an LAN or WAN network, or any combination thereof and can also include PSTN or ISDN sub-networks.
- the network 400 can also be wired, such as an Ethernet network, or can be wireless such as a cellular network including EDGE, 3G and 4G wireless cellular systems.
- the wireless network can also be WiFi, Bluetooth, or any other wireless form of communication that is known.
- the data-processing apparatus 300 further includes a display controller 308 , such as a NVIDIA GeForce GTX or Quadro graphics adaptor from NVIDIA Corporation of America for interfacing with display 310 , such as a Hewlett Packard HPL2445w LCD monitor.
- a general purpose I/O interface 312 interfaces with a keyboard and/or mouse 314 as well as a touch screen panel 316 on or separate from display 310 .
- General purpose I/O interface also connects to a variety of peripherals 318 including printers and scanners, such as an OfficeJet or DeskJet from Hewlett Packard.
- a sound controller 320 is also provided in the data-processing apparatus 300, such as Sound Blaster X-Fi Titanium from Creative, to interface with speakers/microphone 322, thereby providing sounds and/or music.
- the general purpose storage controller 324 connects the storage medium disk 304 with communication bus 326, which may be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the data-processing apparatus 300.
- a description of the general features and functionality of the display 310 , keyboard and/or mouse 314 , as well as the display controller 308 , storage controller 324 , network controller 306 , sound controller 320 , and general purpose I/O interface 312 is omitted herein for brevity as these features are known.
- FIG. 11 illustrates an implementation of the radiography gantry included in a CT apparatus or scanner.
- a radiography gantry 500 is illustrated from a side view and further includes an X-ray tube 501 , an annular frame 502 , and a multi-row or two-dimensional-array-type X-ray detector 503 .
- the X-ray tube 501 and X-ray detector 503 are diametrically mounted across an object OBJ on the annular frame 502 , which is rotatably supported around a rotation axis RA.
- a rotating unit 507 rotates the annular frame 502 at a high speed, such as 0.4 sec/rotation, while the object OBJ is being moved along the axis RA into or out of the illustrated page.
- X-ray CT apparatuses include various types of apparatuses, e.g., a rotate/rotate-type apparatus in which an X-ray tube and X-ray detector rotate together around an object to be examined, and a stationary/rotate-type apparatus in which many detection elements are arrayed in the form of a ring or plane, and only an X-ray tube rotates around an object to be examined.
- the present inventions can be applied to either type. In this case, the rotate/rotate type, which is currently the mainstream, will be exemplified.
- the multi-slice X-ray CT apparatus further includes a high voltage generator 509 that generates a tube voltage applied to the X-ray tube 501 through a slip ring 508 so that the X-ray tube 501 generates X-rays.
- the X-rays are emitted towards the object OBJ, whose cross sectional area is represented by a circle.
- in certain implementations, the X-ray tube 501 can have an average X-ray energy during a first scan that is less than an average X-ray energy during a second scan.
- two or more scans can be obtained corresponding to different X-ray energies.
- the X-ray detector 503 is located at an opposite side from the X-ray tube 501 across the object OBJ for detecting the emitted X-rays that have transmitted through the object OBJ.
- the X-ray detector 503 further includes individual detector elements or units.
- the CT apparatus further includes other devices for processing the detected signals from X-ray detector 503 .
- a data acquisition circuit or a Data Acquisition System (DAS) 504 converts a signal output from the X-ray detector 503 for each channel into a voltage signal, amplifies the signal, and further converts the signal into a digital signal.
- the X-ray detector 503 and the DAS 504 are configured to handle a predetermined total number of projections per rotation (TPPR).
- the above-described data is sent through a non-contact data transmitter 505 to a preprocessing device 506, which is housed in a console outside the radiography gantry 500.
- the preprocessing device 506 performs certain corrections, such as sensitivity correction on the raw data.
- a memory 512 stores the resultant data, which is also called projection data at a stage immediately before reconstruction processing.
- the memory 512 is connected to a system controller 510 through a data/control bus 511 , together with a reconstruction device 514 , input device 515 , and display 516 .
- the system controller 510 controls a current regulator 513 that limits the current to a level sufficient for driving the CT system.
- the detectors are rotated and/or fixed with respect to the patient among various generations of the CT scanner systems.
- the above-described CT system can be an example of a combined third-generation geometry and fourth-generation geometry system.
- the X-ray tube 501 and the X-ray detector 503 are diametrically mounted on the annular frame 502 and are rotated around the object OBJ as the annular frame 502 is rotated about the rotation axis RA.
- the detectors are fixedly placed around the patient and an X-ray tube rotates around the patient.
- the radiography gantry 500 has multiple detectors arranged on the annular frame 502 , which is supported by a C-arm and a stand.
- the memory 512 can store the measurement value representative of the irradiance of the X-rays at the X-ray detector unit 503 . Further, the memory 512 can store a dedicated program for executing methods 100 and 200 for post-reconstruction processing and enhancement of reconstructed CT images.
- the reconstruction device 514 can reconstruct CT images and can execute post processing of the reconstructed CT images, including methods 100 and 200 described herein. Further, reconstruction device 514 can execute pre-reconstruction processing image processing such as volume rendering processing and image difference processing as needed.
- the pre-reconstruction processing of the projection data performed by the preprocessing device 506 can include correcting for detector calibrations, detector nonlinearities, and polar effects, for example.
- Post-reconstruction processing performed by the reconstruction device 514 can include filtering and smoothing the image, volume rendering processing, and image difference processing as needed. Further, the post-reconstruction processing can include jagged-edge removal and resolution enhancement using method 100 and/or 200 .
- the image reconstruction process can be performed using known methods, including, e.g., filtered-backprojection, iterative reconstruction, algebraic reconstruction techniques, ordered subsets, and acceleration techniques.
- the reconstruction device 514 can use the memory to store, e.g., projection data, reconstructed images, calibration data and parameters, and computer programs.
- the reconstruction device 514 can include a CPU (processing circuitry) that can be implemented as discrete logic gates, as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Complex Programmable Logic Device (CPLD).
- An FPGA or CPLD implementation may be coded in VHDL, Verilog, or any other hardware description language, and the code may be stored in an electronic memory directly within the FPGA or CPLD, or as a separate electronic memory.
- the memory 512 can be non-volatile, such as ROM, EPROM, EEPROM or FLASH memory.
- the memory 512 can also be volatile, such as static or dynamic RAM, and a processor, such as a microcontroller or microprocessor, can be provided to manage the electronic memory as well as the interaction between the FPGA or CPLD and the memory.
- the CPU in the reconstruction device 514 can execute a computer program including a set of computer-readable instructions that perform the functions described herein, the program being stored in any of the above-described non-transitory electronic memories and/or a hard disk drive, CD, DVD, FLASH drive or any other known storage media.
- the computer-readable instructions may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with a processor, such as a Xeon processor from Intel of America or an Opteron processor from AMD of America, and an operating system, such as Microsoft VISTA, UNIX, Solaris, LINUX, Apple MAC-OS, and other operating systems known to those skilled in the art.
the CPU can be implemented as multiple processors cooperatively working in parallel to perform the instructions.
- the reconstructed images can be displayed on a display 516 .
- the display 516 can be an LCD display, CRT display, plasma display, OLED, LED or any other display known in the art.
- the memory 512 can be a hard disk drive, CD-ROM drive, DVD drive, FLASH drive, RAM, ROM or any other electronic storage known in the art.
Abstract
Description
- This disclosure relates to generating a blended image by blending reconstructed images of varying levels of smoothing/denoising based on context information, and, more particularly, to using the context information to select the relative contributions of the reconstructed images to the blended image.
- Computed tomography (CT) systems and methods are widely used, particularly for medical imaging and diagnosis. A CT scan can be performed by positioning a patient on a CT scanner in a space between an X-ray source and an X-ray detector, and then taking X-ray projection images through the patient at different angles as the X-ray source and detector are rotated through a scan. The resulting projection data is referred to as a CT sinogram, which represents attenuation through the body as a function of position along one or more axes and as a function of projection angle along another axis. Performing an inverse Radon transform, or any other image reconstruction method, reconstructs an image from the projection data represented in the sinogram.
- Various methods can be used to reconstruct CT images from projection data, including filtered back-projection (FBP) and statistical iterative reconstruction (IR) algorithms. Compared to more conventional FBP reconstruction methods, IR methods can provide improved image quality at reduced radiation doses. Various iterative reconstruction (IR) methods exist, such as the algebraic reconstruction technique. For example, one common IR method performs unconstrained (or constrained) optimization to find the argument p that minimizes the expression
- p* = argmin_p { ∥Ap − ℓ∥_W² + β U(p) }
- wherein ℓ is the projection data representing the logarithm of the X-ray intensity of projection images taken at a series of projection angles, and p is a reconstructed image of the X-ray attenuation for voxels/volume pixels (or two-dimensional pixels in a two-dimensional reconstructed image) in an image space. For the system matrix A, each matrix value a_ij (i being a row index and j being a column index) represents an overlap between the volume corresponding to voxel p_j and the X-ray trajectories corresponding to projection value i. The data-fidelity term ∥Ap − ℓ∥_W² is minimized when the forward projection A of the reconstructed image p provides a good approximation to all measured projection images ℓ. Thus, the data-fidelity term is directed to solving the system matrix equation Ap = ℓ, which expresses the Radon transform (i.e., projections) of various rays from a source through an object OBJ in the space represented by p to the X-ray detectors generating the values of ℓ (e.g., X-ray projections through the three-dimensional object OBJ onto a two-dimensional projection image ℓ).
- The notation ∥g∥_W² signifies a weighted inner product of the form g^T W g, wherein W is the weight matrix (e.g., expressing a reliability or trustworthiness of the projection data based on a pixel-by-pixel signal-to-noise ratio). In other implementations, the weight matrix W can be replaced by an identity matrix. When the weight matrix W is used in the data fidelity term, the above IR method is referred to as a penalized weighted least squares (PWLS) approach.
- The function U(p) is a regularization term, and this term is directed at imposing one or more constraints (e.g., a total variation (TV) minimization constraint), which often have the effect of smoothing or denoising the reconstructed image. The value β is a regularization parameter that weights the relative contributions of the data-fidelity term and the regularization term.
- Consequently, the choice of the value for the regularization parameter β typically affects a tradeoff between noise and resolution. In general, increasing the regularization parameter β reduces the noise, but at the cost of also reducing resolution. The best value for the regularization parameter β can depend on multiple factors, the primary one being the application for which the reconstructed image is to be used. Because IR algorithms can be slow and require significant computational resources, a cut-and-try approach (e.g., running the IR method with different values of the regularization parameter β until an optimal result is obtained) is inefficient. Moreover, a single CT scan can be used for more than one clinical application, and, therefore, an ability to adjust the tradeoff between noise and resolution in the reconstructed image without repeating the computationally intensive IR algorithm is desirable. Thus, improved methods are desired for rapidly generating and modifying a reconstructed image to optimize a tradeoff between noise and resolution.
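For illustration, the cost function above can be evaluated numerically. The following Python sketch uses hypothetical helper names, and a small dense system matrix stands in for a real CT forward projector; it is a sketch of the tradeoff, not this disclosure's implementation:

```python
import numpy as np

def pwls_cost(A, p, ell, W, beta, U):
    """Penalized weighted least-squares cost: ||A p - ell||_W^2 + beta * U(p).

    A    : system matrix (forward projector)
    p    : candidate reconstructed image (flattened)
    ell  : measured post-log projection data
    W    : weight matrix (e.g., per-ray reliability)
    beta : regularization parameter; larger values favor smoother images
    U    : regularization functional, e.g., a roughness penalty
    """
    r = A @ p - ell                     # residual of the forward projection
    return r @ (W @ r) + beta * U(p)    # data fidelity + weighted regularization
```

Evaluating the cost at several values of β shows the regularization term's growing influence as β increases, which is the noise/resolution tradeoff described above.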
- A more complete understanding of this disclosure is provided by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
-
FIG. 1A shows an example of a lung region in a slice of a reconstructed computed tomography (CT) image that was generated using a small smoothing/denoising parameter and that is displayed using lung settings (i.e., the window level is WL=−400 Hounsfield Units (HU) and the window width is WW=1500 HU), according to one implementation; -
FIG. 1B shows an example of the same lung region using the same display settings as in FIG. 1A, except the CT image was generated using a large smoothing/denoising parameter rather than the small smoothing/denoising parameter used in FIG. 1A, according to one implementation; -
FIG. 2A shows an example of a soft-tissue region in a slice of the CT image that was generated using the small smoothing/denoising parameter, and the image is displayed using soft-tissue settings (i.e., WL=40 HU and WW=400 HU), according to one implementation; -
FIG. 2B shows an example of the same soft-tissue region and soft-tissue display settings as in FIG. 2A, except the CT image was generated using the large smoothing/denoising parameter, according to one implementation; -
FIG. 3 shows a flow diagram of a method of generating a blended image based on the content/context of a CT image, according to one implementation; -
FIG. 4A shows a two-dimensional (2D) image of a CT slice displayed using default settings (i.e., WL=−297.5 HU and WW=3497 HU), according to one implementation; -
FIG. 4B shows a histogram of the CT slice binned according to HU values, in which HU values are represented along the horizontal axis with counts of voxels in the respective HU bins represented along the left vertical axis; on the right vertical axis is a color legend and, in the default display settings, the HU values are related to the colors in the color legend by the line superimposed over the histogram; -
FIG. 5A shows a 2D image of the CT slice displayed using the lung settings; -
FIG. 5B shows the histogram of the CT slice together with the line that translates HU values to colors according to the lung settings; -
FIG. 6A shows a 2D image of the CT slice displayed using the soft-tissue settings; -
FIG. 6B shows the histogram of the CT slice together with the line that translates HU values to colors according to the soft-tissue settings; -
FIG. 7 shows a plot of an example of a weighting function representing a weighting value α along the vertical axis and an attenuation density in HU along the horizontal axis, the weighting value α being a weight used for combining a stack of two images generated using different smoothing/denoising parameters, according to one implementation; -
FIG. 8 shows a 2D map of the weighting value α as a function of position within the 2D image of the CT slice, according to one implementation; -
FIG. 9 shows a full 2D slice of the blended image (center) generated using the 2D map of the weighting value α from FIG. 8; superimposed on the full-slice image are two zoomed-in images: a zoomed-in image of the soft-tissue region (upper left) and a zoomed-in image of the lung region (lower right), displayed respectively using the soft-tissue and lung window settings, according to one implementation; -
FIG. 10 shows a diagram of a data-processing apparatus for performing the methods described herein, according to one implementation; and -
FIG. 11 shows a schematic of an implementation of a CT scanner. - The methods provided herein address the above-discussed challenges with regard to optimizing a tradeoff between resolution and noise based on the particular content/context of a displayed image. These methods address the aforementioned challenges by, e.g., using the content/context of an image to control the generation of a blended image that is a weighted combination of two or more reconstructed images having different degrees of smoothing/denoising (also referred to as amounts or levels of smoothing/denoising) and the corresponding tradeoff in resolution. In certain implementations, the indicator of the image content/context can be a display setting (e.g., a slice thickness, window width, and/or window level selected by a user or by default), or the indicator of the image content can be a regional/segmented histogram of the Hounsfield Units (HU) or a derivative thereof.
- Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views,
FIGS. 1A and 1B show two images of the same lung region but with different degrees of denoising (which herein is interchangeably referred to as smoothing). FIG. 9, which is discussed below, shows that this lung region is a part of a larger slice taken from a reconstructed image of a chest. Similarly, FIGS. 2A and 2B show two images of the same soft-tissue region with different degrees of denoising. FIGS. 1A and 2A represent a first degree of denoising, and FIGS. 1B and 2B represent a second degree of denoising with more denoising/smoothing than the first degree of denoising shown in FIGS. 1A and 2A. - In a comparison between
FIGS. 2A and 2B, FIG. 2B is generally regarded as being better for clinical applications because the additional resolution in FIG. 2A does not convey significantly more information, but the additional noise in FIG. 2A creates texture and structure that is distracting and could potentially lead to a poor diagnosis or, during an interventional procedure, a poor outcome. Accordingly, a greater degree of denoising and smoothing can be beneficial for soft-tissue images. - In contrast, a lesser degree of denoising and smoothing can be beneficial for lung images. In a comparison between
FIGS. 1A and 1B, FIG. 1A is generally regarded as being better for clinical applications because the additional resolution in FIG. 1A is significant to being able to distinguish the features of the lungs (e.g., the feature pointed to by the arrow in FIG. 1A), and, compared to the larger window width in the lung settings and the commensurately higher contrast signals in the lung regions, the additional noise is not as significant as in the soft-tissue region. Consequently, the additional noise due to less smoothing obscures relatively little in the lung region, and the drawbacks of the additional noise are outweighed by the benefits of the improved resolution exhibited in FIG. 1A. - Thus, the degree of denoising can depend on the content/context of an image, or, more particularly, the content of a region of interest within the reconstructed image. That is, different regions within the same image can benefit from different degrees of denoising. Further, the benefit of the denoising can depend on the thickness of the slice of the image. For example, a thicker slice averages together more layers of voxels from the reconstructed image, and, under an assumption of statistically independent noise between voxels of the reconstructed images, the signal-to-noise ratio (SNR) can be expected to grow as the square root of the number of voxels being averaged. Thus, when the displayed image is a thicker slice, less denoising/smoothing is required in order to achieve the same degree of noise suppression and SNR. Accordingly, in certain implementations, the methods herein use the thickness of the slice in addition to other indicia of the content/context when determining the relative weights between high- and low-denoising images that are combined to generate a blended image.
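The square-root SNR scaling from slice averaging can be checked numerically. The sketch below uses synthetic data under the stated assumption of independent Gaussian noise per layer; the signal and noise levels are illustrative, not values from this disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0
sigma = 10.0
# 16 layers of the same underlying signal with independent noise per voxel
layers = signal + rng.normal(0.0, sigma, size=(16, 256, 256))

def snr(img, true_value=signal):
    """SNR estimate: true signal level divided by the noise standard deviation."""
    return true_value / (img - true_value).std()

thin_slice = layers[0]             # a single-layer (thin) slice
thick_slice = layers.mean(axis=0)  # a 16-layer average (thick slice)

# Averaging N independent layers should improve SNR by about sqrt(N) = 4
improvement = snr(thick_slice) / snr(thin_slice)
```

Because the thicker slice already suppresses noise by roughly the square root of the number of averaged layers, correspondingly less explicit denoising/smoothing is needed to reach a target SNR.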
- A blended image can be a weighted combination of images with different degrees of denoising. For example, through a weighted sum of two images (one having a low degree of denoising and the other having a high degree of denoising), a blended image can be generated to have any degree of denoising between those of the two original images. This continuum for tuning the tradeoff between noise and resolution is achieved, e.g., by adjusting the relative weights of the images in the sum. In certain implementations, these relative weights can vary as a function of position to represent spatial variations in the content of the reconstructed image (e.g., by segmenting the reconstructed image into lung regions and soft-tissue regions).
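As a minimal sketch of this weighted sum (the function name is hypothetical), the two weights are kept summing to one at every voxel, and α is permitted to be a spatially varying map:

```python
import numpy as np

def blend(p_small_beta, p_large_beta, alpha):
    """Blend a lightly smoothed image (small beta) with a heavily smoothed
    image (large beta). alpha may be a scalar or a per-voxel map in [0, 1];
    the two weights always sum to one, preserving overall image intensity."""
    alpha = np.clip(np.asarray(alpha, dtype=float), 0.0, 1.0)
    return alpha * p_small_beta + (1.0 - alpha) * p_large_beta
```

With α = 1 the blended image equals the sharp, lightly denoised image; with α = 0 it equals the heavily denoised image; intermediate values sweep the continuum between them.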
- The original images having different degrees of denoising can be obtained by various means. For example, as discussed above, different values for the regularization parameter β can be used during an IR method. But this is not the only way to generate images with different degrees of denoising/smoothing, and additional methods of denoising can be applied during image reconstruction as well as before and after image reconstruction and/or any combination thereof using any of the methods described below or the IR method described above.
- Various methods can be used to minimize noise in images that are reconstructed from computed tomography (CT) projection data. For example, post-processing (i.e., post-reconstruction) denoising methods can be applied to the data after the reconstructed image has been generated. Additionally, when an image is reconstructed using an IR method, which minimizes a cost function having both a data fidelity term and a regularization term, the amount of noise in the reconstructed image and the statistical properties of the noise can depend on the type of regularizer and the magnitude of the regularization parameter β. As the regularization parameter β becomes larger, the regularization term is emphasized more relative to the data fidelity term, reducing the noise in the reconstructed image. However, this increased emphasis on the regularization term can also decrease the resolution and contrast, which is especially noticeable for fine and low-contrast features, such as those common in the lungs, as discussed above.
- In general, the parameter β is used herein as an identifier to represent a general smoothing and/or denoising parameter to characterize the degree of smoothing/denoising of a given reconstructed image, whether or not the degree of smoothing/denoising arises from the value of regularization parameter, a pre-processing (pre-reconstruction) method, a post-processing (post-reconstruction) method, or combination thereof. Context will make clear those instances when the parameter β specifically refers to the regularization parameter, as opposed to referring more generally to an identifier of the type or degree of denoising/smoothing.
- At various points below the generation of blended images is illustrated using a non-limiting example of a stack having only two images. Further, the blended image p(Blended) is a weighted sum of a small-smoothing-parameter image p(βS) and a large-smoothing-parameter image p(βL), and the small- and large-smoothing-parameter images are identical in all aspects including being reconstructed using the same IR method, except for using different values for the regularization parameter β in the cost function of the IR method.
- More generally, the stack of images can include more than two images. Further, the stack of images can be variously combined (e.g., as a weighted algebraic or geometric average) to create the blended image, according to desired characteristics. In its simplest form, this combining of the stack of images can be performed as a weighted sum in which the weights add up to a constant value (e.g., the weights are normalized to sum to the value one).
- Additionally, it is contemplated that the stack of images corresponding to greater and lesser denoising/smoothing amounts can be generated using various post- and/or pre-reconstruction denoising methods, reconstruction methods that integrate denoising (e.g., through the selection of the regularizer), or a combination of denoising integrated with the reconstruction method together with post- and/or pre-reconstruction denoising methods. Each of the denoising methods described below can be applied, e.g., to the sinogram/projection data prior to reconstruction or to a reconstructed image after reconstruction, and some of the denoising methods described below can also be applied to a reconstructed image between iterations of an IR method. As described below, the various denoising methods can include linear smoothing filters, anisotropic diffusion, non-local means, and nonlinear filters.
- Linear smoothing filters remove noise by convolving the original image with a mask that represents a low-pass filter or smoothing operation. For example, the Gaussian mask comprises elements determined by a Gaussian function. This convolution brings the value of each pixel into closer agreement with the values of its neighbors. In general, a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors; the Gaussian filter is just one possible set of weights. Disadvantageously, smoothing filters tend to blur an image because pixel intensity values that are significantly higher or lower than the surrounding neighborhood are smeared or averaged across their neighboring area. Sharp boundaries become fuzzy. Generally, local linear filter methods assume that local neighbourhoods are homogeneous, and local linear filter methods, therefore, tend to impose homogeneity on the image, obscuring non-homogeneous features, such as lesions or organ boundaries.
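A separable Gaussian convolution, sketched below in plain NumPy (zero-padded borders; a production library routine would normally be used instead), illustrates both the noise reduction and the smearing described above:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian mask; weights sum to one so flat regions keep their value."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_smooth(img, sigma=2.0):
    """Convolve rows, then columns, with the same 1-D Gaussian (separable 2-D filter)."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    out = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    out = np.apply_along_axis(np.convolve, 0, out, k, mode="same")
    return out
```

Each output pixel is a weighted average of its neighborhood, so noise variance drops, but a sharp boundary passed through the same weights becomes a gradual ramp, which is the blurring drawback noted above.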
- Anisotropic diffusion removes noise while preserving sharp edges by evolving an image under a smoothing partial differential equation similar to the heat equation. If the diffusion coefficient were spatially constant, this smoothing would be equivalent to linear Gaussian filtering, but when the diffusion coefficient is anisotropic according to the presence of edges, the noise can be removed without blurring the edges of the image.
- A median filter is an example of a nonlinear filter and, if properly designed, a nonlinear filter can also preserve edges and avoid blurring. A median filter operates, for example, by evaluating each pixel in the image, sorting the neighboring pixels according to intensity, and replacing the original value of the pixel with the median value from the ordered list of intensities. The median filter is one example of a rank-conditioned rank-selection (RCRS) filter. For example, median filters and other RCRS filters can be applied to remove salt and pepper noise from an image without introducing significant blurring artifacts.
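A direct (deliberately unoptimized) sketch of the median filter described above, using edge padding; practical implementations use faster selection algorithms:

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel with the median of its size-by-size neighborhood.
    Isolated outliers (salt-and-pepper noise) are rejected, while step edges
    survive because the median comes from the majority side of the edge."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # sort the neighborhood implicitly and take the middle value
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```

On a uniform patch containing one saturated pixel, the outlier is replaced by the surrounding value, while a two-level step image passes through unchanged, illustrating the edge-preserving behavior claimed for RCRS filters.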
- In addition, a filter using a total-variation (TV) minimization regularization term can be used where it is assumed that the areas being imaged are uniform over discrete areas with relatively sharp boundaries between the areas. A TV filter is another example of a nonlinear filter.
- In non-local means filtering, rather than performing a weighted average of pixels according to their spatial proximity, each pixel is set to a weighted average of pixels weighted according to the similarity between patches within the image. Thus, noise is removed based on non-local averaging of all the pixels in an image, not just the neighboring pixels. In particular, the amount of weighting for a pixel is based on the degree of similarity between a small patch centered near that pixel and another small patch centered on the pixel being denoised.
- In addition to the denoising methods discussed above, various degrees of denoising/smoothing can be achieved through a choice of regularizer and IR method. Additionally, regularization can be expressed as a constraint. For example, enforcing positivity for the attenuation coefficients can provide a level of regularization based on the practical assumption that there are no regions in the object OBJ that cause an increase (i.e., gain) in the intensity of the X-ray radiation.
- Other regularization terms can similarly rely on a priori knowledge of characteristics or constraints imposed on the reconstructed image. For example, minimizing a TV regularizer in conjunction with projection on convex sets (POCS) can be used to achieve desirable image characteristics in many clinical imaging applications. The TV regularizer can be incorporated into the cost function, e.g.,
- p* = argmin_p { ∥Ap − ℓ∥_W² + β ∥p∥_TV }
- wherein ∥p∥_TV = ∥ |∇p| ∥_1 is the 1-norm of the gradient-magnitude image, which is the isotropic TV semi-norm. The spatial-vector image ∇p represents a discrete approximation to the image gradient. Alternatively, some regularizers can be imposed as constraints. For example, a combination of TV and POCS regularization is imposed as constraints when the optimization problem is framed as
- p* = argmin_p ∥p∥_TV subject to ∥Ap − ℓ∥_W² ≤ ε and p ≥ 0
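The isotropic TV semi-norm used in these formulations can be sketched as follows, with NumPy's finite-difference gradient standing in for the discrete gradient operator ∇:

```python
import numpy as np

def tv_norm(p):
    """Isotropic total variation: the 1-norm of the gradient-magnitude image,
    i.e., the sum over pixels of sqrt(gx^2 + gy^2)."""
    gx, gy = np.gradient(p.astype(float))
    return float(np.sum(np.sqrt(gx**2 + gy**2)))
```

A piecewise-constant image has small TV while noise inflates it, so minimizing TV favors uniform areas separated by sharp boundaries, matching the assumption stated for TV filtering above.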
- So far, the data fidelity term in the cost function has been for post-log projection data. Alternatively, a pre-log data fidelity term can be used, e.g., when the X-ray flux onto the detectors is low. In the discussion below, the symbol y ∝ exp(−ℓ) is used to represent the pre-log projection data. After preprocessing the X-ray detector counts to account for calibrations and data corrections (e.g., beam-hardening, detector nonlinearities, k escape, pileup, etc.), CT data can, in practice, be modeled by independent random variables following a Poisson distribution with an additive Gaussian distribution to account for electronic noise in the measurement. The statistical model of the random variable Y_i measured by the detector element i can be described as
-
Y_i ˜ Poisson(ȳ_i(p)) + Gaussian(0, σ_o²) - wherein σ_o² denotes the variance of the electronic noise. The value
ȳ_i(p) is the expected pre-log projection data related to the attenuation image p by means of a nonlinear transformation, which is given by
ȳ_i(p) = b_i exp(−[Ap]_i) + r_i - wherein b_i is a calibration factor of the detector element i determined, e.g., during a calibration scan, and r_i is the mean of background measurements (e.g., scattered photons). In pre-log methods, the attenuation image p can be reconstructed, e.g., from the measurement y using a complex likelihood function or from the shifted data
-
Ŷ_i = [Y_i + σ_o²]_+ ˜ Poisson(ȳ_i(p) + σ_o²), - using the tractable shifted-Poisson model, wherein [·]_+ is a threshold function that sets negative values to zero. Alternatively, the shifted-Poisson model can be matched with that of the Poisson-Gaussian model, or the statistical model can be a Poisson model, a compound Poisson model, or any other statistical distribution or combination of statistical distributions representing the noise in the system. For the shifted-Poisson model, the image estimate is obtained by maximizing the log-likelihood function of the shifted-Poisson model, which is given by
- p* = argmax_p { Σ_i [ Ŷ_i log(ȳ_i(p) + σ_o²) − (ȳ_i(p) + σ_o²) ] − β U(p) }
- wherein U(p) is a regularizer that represents an image roughness penalty. For example, the regularization term can be determined as the intensity difference between neighboring voxels, which is given by
- U(p) = Σ_j Σ_{k ∈ N_j} w_jk φ(p_j − p_k),
- wherein N_j is a neighborhood of voxel j, w_jk are weighting factors, and φ is a potential function, e.g., the Huber function φ(t) = t²/2 for |t| ≤ δ, and φ(t) = δ|t| − δ²/2 for |t| > δ.
- In addition to the Huber function, the regularization term U(p) can be a quadratic regularization term, a total variation minimization term, or any other regularization term.
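The Huber-type roughness penalty can be sketched as follows. Here δ marks the transition from quadratic to linear growth, and neighbor differences are taken along each image axis, a simplification of a full weighted neighborhood system:

```python
import numpy as np

def huber(t, delta=1.0):
    """Huber potential: quadratic for small |t| (smooths noise-level
    differences), linear for large |t| (penalizes edges less than a
    quadratic potential would)."""
    t = np.asarray(t, dtype=float)
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * a - 0.5 * delta**2)

def roughness_penalty(p, delta=1.0):
    """U(p): sum of Huber potentials of differences between neighboring voxels,
    accumulated along every image axis."""
    return float(sum(huber(np.diff(p, axis=ax), delta).sum() for ax in range(p.ndim)))
```

A constant image incurs zero penalty, and the linear tail means a sharp organ boundary adds only proportionally to the cost, which is why the Huber potential preserves edges better than a purely quadratic regularizer.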
- In certain implementations, the above optimization problem can be solved by the separable paraboloidal surrogate (SPS) approach with acceleration by ordered subsets (OS), for example. In general, any optimization method can be used to find the image that minimizes the cost function, including, for example, a gradient-descent method or other known methods. Further examples of optimization methods that can be used to minimize the cost function include an augmented-Lagrangian method, an alternating direction method of multipliers (ADMM), a Nesterov method, a preconditioned-gradient-descent method, an ordered subset method, or a combination of the foregoing.
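As a minimal illustration of such an optimization, the sketch below applies plain gradient descent to a quadratically regularized least-squares cost, i.e., U(p) = ∥p∥²; this is far simpler than the SPS/OS methods named above and is meant only to show the iteration structure:

```python
import numpy as np

def gradient_descent(A, ell, beta, iters=200):
    """Minimize ||A p - ell||^2 + beta * ||p||^2 by gradient descent.
    The step size is chosen from the Lipschitz constant of the gradient,
    guaranteeing monotone descent for this convex quadratic cost."""
    p = np.zeros(A.shape[1])
    step = 1.0 / (2.0 * (np.linalg.norm(A, 2) ** 2 + beta))
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ p - ell) + 2.0 * beta * p
        p = p - step * grad
    return p
```

For this cost the minimizer is available in closed form, (AᵀA + βI)⁻¹Aᵀℓ, which makes the sketch easy to verify; real CT problems rely on the accelerated methods listed above because A is far too large for direct solves.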
- The above examples of denoising and smoothing methods have been provided as non-limiting examples. Variations of the above implementations for generating the stack of images having different noise and smoothing characteristics can be used without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art.
- Methods of generating a stack of images with different amounts or degrees of smoothing/denoising are discussed above. Now, we turn to the methods of combining the various images of the stack to generate a blended image.
- As discussed above, tradeoffs exist between reducing noise and maintaining resolution. The outcome of a clinical procedure or diagnosis can depend on the image quality and how well adapted a reconstructed image is to the given application. That is, the optimal tradeoff between noise and resolution can be different depending on the clinical application and the organ being imaged. For example, more denoising/smoothing (i.e., a larger contribution from images having large smoothing parameters β) is preferred for suppressing noise in soft-tissue images, whereas less denoising/smoothing (i.e., a larger contribution from images having small smoothing parameters β) is preferred for preserving resolution in order to resolve finer features in lung images.
- Determining an optimal smoothing parameter β can be challenging because the optimal value for the smoothing parameter β can depend on the content to be viewed. That is, a single value for the smoothing parameter β might not be optimal for all clinical or detection tasks, or even for a single detection task across all of the regions within a single image. For example, when different types of regions (e.g., lung regions and soft-tissue regions) are displayed within a single image, it can be beneficial to segment the image into regions and apply region-adaptive blending to generate a blended image in which the relative weighting between images with small and large smoothing parameters β varies by region. This region-adaptive blending can achieve the best resolution-to-noise tradeoff on a region-by-region basis.
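One way to realize such region-adaptive blending is to derive a per-voxel weight directly from the HU values. In the sketch below, the thresholds and the linear ramp between them are illustrative assumptions, not values taken from this disclosure:

```python
import numpy as np

def alpha_map(hu, lung_hu=-400.0, soft_hu=40.0):
    """Per-voxel blending weight derived from a CT slice in Hounsfield Units.
    alpha -> 1 in lung-like regions with very negative HU (favor the lightly
    smoothed, sharper image); alpha -> 0 in soft tissue (favor the heavily
    smoothed image); linear ramp between the two illustrative thresholds."""
    alpha = (soft_hu - np.asarray(hu, dtype=float)) / (soft_hu - lung_hu)
    return np.clip(alpha, 0.0, 1.0)
```

The resulting map plays the role of the weighting value α described for FIG. 8: the blended image is then formed voxel-by-voxel as α·p(βS) + (1 − α)·p(βL), so lung regions retain resolution while soft-tissue regions receive stronger noise suppression.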
- Returning to
FIGS. 1A, 1B, 2A, and 2B, FIGS. 1A and 1B (2A and 2B) show the same lung (soft-tissue) region for two CT images generated using different values for the regularization parameter β. In these four figures, the images are from a slice of a reconstructed chest image generated using an IR method to minimize a cost function that includes a regularization parameter β. The CT image used to generate the slices shown in FIGS. 1A and 2A was reconstructed using a small smoothing parameter β (i.e., the regularization parameter β was small), whereas the CT image used to generate the slices shown in FIGS. 1B and 2B was reconstructed using a large smoothing parameter β. -
FIGS. 1A and 1B show the lung region displayed using a window width (WW) of 1500 Hounsfield Units (HU) and a window level (WL) of −400 HU, which are standard width and level settings used to view lung regions. As observed in FIG. 1A, the resolution obtained using the small smoothing parameter is adequate for observing the fine features in the lungs, but the poorer resolution exhibited in FIG. 1B, which is due to using the large smoothing parameter, makes the fine features more difficult to observe than in FIG. 1A. Further, the wider window width of 1500 HU signals that a larger noise level can be tolerated without masking or obscuring the signal. Thus, generating the CT image using a small regularization parameter β is optimal for lung regions. - On the other hand,
FIGS. 2A and 2B show the soft-tissue region displayed using a WW of 400 HU and a WL of 40 HU, which are standard width and level settings used to view soft-tissue regions. As observed in FIG. 2A, the noise present when using the small smoothing parameter tends to obscure the features of the tissue, increasing the difficulty of clinical diagnosis. Also, the narrower window width of 400 HU of the display settings signals that less noise can be tolerated before significantly masking or otherwise obscuring the signal. Thus, generating the CT image using a large regularization parameter β is optimal for soft-tissue regions. - Given the above-identified tradeoff space between resolution and noise, and given that optimality within the tradeoff space can depend on which type of tissue/organ is being imaged and on the application for which the CT image is being used, the methods herein dynamically adapt the displayed image according to signals (from the user and from the CT image itself) regarding the content/context being displayed. This is achieved by first acquiring a stack of images, each image having a different degree of smoothing/denoising, and then blending images from the image stack according to respective weights that depend on the displayed content. That is, a blended image is generated and displayed using a weighted combination of the images from the image stack, and the weighting depends on indicia of the image content.
- In certain implementations, the weights for the weighted combination of images are determined according to the display parameters (e.g., the window width and/or slice thickness) selected by a user. For example, on one hand, when a user chooses to display a slice of a reconstructed image using display settings for soft tissue with a WW of 400 HU and a WL of 40 HU, then the weights for the blended image can be selected to have greater contributions from those of the stack images with large smoothing parameters. On the other hand, when a user chooses to display a slice of a reconstructed image using lung settings, the weights for the blended image can be selected to have greater contributions from those of the stack images with small smoothing parameters.
- In certain implementations, the weights can vary as a function of position within the blended image (i.e., spatially-varying weights or region-adaptive weights), and these spatially-varying weights can be used to increase the contributions of large-smoothing-parameter images to the blended image in regions identified as having characteristics of soft tissue and bone, while increasing the contributions of small-smoothing-parameter images in regions identified as having characteristics of lung, for example.
- The methods described herein are advantageous because a single smoothing parameter is not necessarily optimal for all detection/imaging tasks or for all regions. When the same data is used for multiple applications with corresponding image display parameters (e.g., the WW and WL), reconstructing a new image or applying a new post-reconstruction denoising method each time the display parameters are changed is impractical. However, the same effect (i.e., optimizing the noise and resolution based on the display parameters) can be achieved by changing the weights/blending between two or more images from the image stack, which is practical.
- For example, returning to the non-limiting example of blending two CT images from an image stack (i.e., the small-smoothing-parameter image p(βS) and the large-smoothing-parameter image p(βL)) to generate the blended image p(Blended), the two stack images p(βS) and p(βL) occupy different points within the resolution-noise tradeoff space (i.e., point "A" in the tradeoff space corresponding to p(βS) and point "B" corresponding to p(βL)). Therefore, it is possible to generate a blended image p(Blended) at any point along a line segment in the tradeoff space extending from point "A" to point "B." Translations along this line segment are achieved by merely changing the relative weights applied to the two images of the stack. That is, the relative weights used to generate the blended image determine where the blended image is positioned within the tradeoff space along the line segment between points "A" and "B." Thus, the blending weights (i.e., the weights applied to the images of the stack) can be adjusted according to what is optimal for different regions and display parameters.
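The line-segment picture can be made concrete with a small numerical sketch. The arrays, noise levels, and names below are illustrative stand-ins, not from the source: blending two stack images with a convex weight produces a result whose noise level lies between the two endpoints "A" and "B."

```python
import numpy as np

# Hypothetical stand-ins for two reconstructions of the same slice:
# p_s has a small smoothing parameter (sharp but noisy, point "A"),
# p_l has a large smoothing parameter (smooth but soft, point "B").
rng = np.random.default_rng(0)
truth = np.full((64, 64), 100.0)                   # idealized HU values
p_s = truth + rng.normal(0.0, 30.0, truth.shape)   # high noise
p_l = truth + rng.normal(0.0, 5.0, truth.shape)    # low noise

def blend(alpha, p_small, p_large):
    """Convex combination of two stack images; alpha=1 recovers p_small
    and alpha=0 recovers p_large, tracing the segment between "A" and "B"."""
    return alpha * p_small + (1.0 - alpha) * p_large

# An intermediate weight lands the blend between the two noise levels.
halfway = blend(0.5, p_s, p_l)
```

Sweeping alpha from 0 to 1 moves the blended image continuously along the segment, which is the mechanism the text describes for tuning the noise-resolution tradeoff without re-reconstructing.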
- Further, a third CT image in the stack corresponding to a point "C" in the tradeoff space that is not on the same line as points "A" and "B" would allow a blended image to occupy any point within a triangle defined by points "A," "B," and "C." Additionally, the stack of images can include more than three images, with the corresponding generalizations of the features described herein (e.g., for a stack of four images, the possible combinations allow the blended image to occupy a quadrilateral in the tradeoff space), without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art.
- This blending of the stack images can be seamlessly integrated into the clinical workflow. For example, streamlining this process so that it occurs automatically when display parameters are changed makes it conducive to a clinical workflow, in which a doctor needs to focus on tasks other than changing the smoothing/denoising parameters. Accordingly, to avoid needlessly complicating a user interface, in certain implementations, the adjustment of the blending parameters can be tied directly to the display parameters or otherwise automated based on available signals and information. Thus, the user interface does not become needlessly complicated, and the displayed CT images can be optimized without frequent and/or complicated user interactions, which would reduce productivity by distracting clinicians from their primary tasks.
-
FIG. 3 shows a flow diagram of a method 100 for generating a blended image from a stack of images corresponding to different smoothing parameters (the term "smoothing parameter" is used as shorthand for "smoothing/denoising parameter" and is interchangeable therewith). - Accordingly, the methods described herein provide context-oriented blending of a stack of images, each representing a different point within a tradeoff space between noise and resolution. In the stack, multiple images are generated to have different degrees of smoothing/denoising, and the images can be respectively identified using different values of a smoothing/denoising parameter (e.g., by reconstructing the CT images using a same IR method and cost function but with different values for the regularization parameter β).
- In
step 110 of method 100, projection data from a CT scan is obtained. - In
step 120 of method 100, CT images are acquired representing reconstructions from the projection data. These CT images form a stack of images, each having a respective smoothing parameter that is different from the other images in the stack. Any of the reconstruction methods, as well as any of the pre- and post-reconstruction denoising methods discussed below, can be used to generate the CT images in the stack, and any other known methods of generating denoised CT images can also be used to generate the images in the stack. - In certain implementations, the smoothing/denoising parameter can be a vector including multiple values representing different characteristics of the respective images of the stack (e.g., a first value can represent a noise level and a second value can be a figure of merit representing the resolution). Then, the weighting value α, which is discussed below, can be a function with multiple inputs to weight the stack of images according to the content/context of the image and the multiple values of the smoothing/denoising parameter vector.
- In
step 130 of method 100, signals indicating the content/context of the displayed image (i.e., content/context indicia) are obtained. In certain implementations, the content/context indicia can be one or more display settings, such as the slice thickness, the window width, and the window level. In certain implementations, the content/context indicia can be a map of the regional average of the attenuation density. Further, in certain implementations, these content/context indicia can be a segmentation of the image into tissue types. Moreover, in certain implementations, the content/context indicia can be information indicating a use/application/procedure intended for the CT scan or displayed image, or the content/context indicia can be information regarding which body part of the patient is being imaged. Additional variations or combinations of the content/context indicia can be used without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art. - Consider, for example, the use of the window width and slice thickness as content/context indicia.
FIGS. 4A, 4B, 5A, 5B, 6A, and 6B illustrate the effect of choosing among various window settings. FIGS. 4A, 5A, and 6A respectively show two-dimensional views of a same slice of a chest CT image, but using different WW and WL settings in each image: (i) FIG. 4A uses default settings (i.e., WW=3497 HU and WL=−297.5 HU); (ii) FIG. 5A uses lung settings (i.e., WW=1500 HU and WL=−400 HU); and (iii) FIG. 6A uses soft-tissue settings (i.e., WW=400 HU and WL=40 HU). FIGS. 4B, 5B, and 6B respectively show a histogram of voxel counts for the slice binned according to HU value. Additionally, on the right-hand side, each of these figures includes a color legend of the HU values in the corresponding slice image and includes a line relating the HU values to the colors represented in the color legend. As discussed above and as illustrated by FIGS. 4A, 4B, 5A, 5B, 6A, and 6B, the optimal tradeoff between resolution and noise can depend on the display settings. - The window width and slice thickness inform the optimal tradeoff between noise and resolution for several reasons. First, as discussed above, larger slice thicknesses correspond to reductions in noise due to the averaging of multiple voxel layers (e.g., shot noise following a Poisson distribution is reduced as the square root of the number of voxel layers). Second, a larger window width is representative of a larger signal and a greater tolerance for noise. Thus, the window width and slice thickness inform the optimal tradeoff between noise and resolution under the logic that noise becomes less significant as the window width becomes larger and/or the noise is reduced due to voxel-layer averaging, shifting the optimal tradeoff away from noise suppression and towards better resolution.
On the other hand, in the absence of noise reduction due to voxel-layer averaging, noise becomes more significant as the window width becomes narrower, and, therefore, the optimal tradeoff shifts towards increased noise suppression and away from better resolution.
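The square-root noise reduction from voxel-layer averaging can be checked numerically. This sketch approximates the per-layer noise as independent Gaussian samples, which is an assumption for illustration (shot noise is Poisson, but the square-root scaling of the averaged noise is the same):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 20.0     # illustrative per-layer noise level (HU)
n_layers = 4     # slice thickness, in voxel layers

# Independent noise realizations for each thin layer of a thick slice.
layers = rng.normal(0.0, sigma, size=(n_layers, 100_000))

# Averaging the layers reduces the noise roughly as 1/sqrt(n_layers),
# so four layers cut the noise level approximately in half.
thick_slice = layers.mean(axis=0)
```

With four layers, the measured standard deviation of the averaged slice comes out close to sigma/2, which is the effect the text invokes when it lets a larger slice thickness shift the tradeoff toward resolution.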
- In addition to window width and slice thickness, other factors and considerations can also inform the logic determining the optimal tradeoff. For example, when various content/context indicia indicate that the displayed image is being used for brain trauma diagnosis, strong smoothing is desired in soft-tissue regions to reveal internal bleeding while reduced smoothing is desired in bone regions to reveal fractures.
- In a different application for lung imaging, super-resolution reconstruction is desired to maintain details in lung regions while a pseudo normal-resolution image can be generated from the super-resolution reconstruction for diagnosing soft-tissue regions.
- When the content/context indicia indicate that the displayed image is being used for a CT angiography application, high resolution is desired for vessels and low noise is desired in other regions.
- Similarly, when the content/context indicia indicate that the displayed image is being used for a contrast enhanced CT application, high resolution is desired in the contrast-enhanced region while low noise is desired in other regions.
- In each of these applications, the optimal tradeoff can be variously and automatically inferred from one or more of the content/context indicia discussed above or variations thereof, as would be understood by a person of ordinary skill in the art. For example, Table 1 shows a correspondence between a few content/context indicia corresponding to the display settings and various applications for CT imaging.
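The Table 1 correspondence below lends itself to a simple lookup. In this sketch, the dictionary and the window_bounds helper are hypothetical names introduced for illustration; the WL/WW values are transcribed from Table 1.

```python
# Display presets transcribed from Table 1 (values in HU).
# The dictionary name and keys are illustrative, not from the source.
DISPLAY_PRESETS = {
    "brain_soft_tissue": {"wl": 40, "ww": 80},
    "chest": {"wl": 40, "ww": 400},
    "abdomen": {"wl": 60, "ww": 400},
    "lung": {"wl": -400, "ww": 1500},
    "brain_bone": {"wl": 480, "ww": 2000},
}

def window_bounds(preset):
    """Convert a WL/WW preset into the [low, high] HU range mapped onto
    the display's gray scale (WL is the center, WW the full span)."""
    wl, ww = preset["wl"], preset["ww"]
    return (wl - ww / 2.0, wl + ww / 2.0)

low, high = window_bounds(DISPLAY_PRESETS["lung"])
```

Such a table is one way an implementation could translate a user's application choice into the window settings that, in turn, drive the blending weights.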
-
TABLE 1
Various applications for CT imaging and their corresponding display settings.

Application            Window Level (WL)    Window Width (WW)
Brain - Soft tissue           40                    80
Chest                         40                   400
Abdomen                       60                   400
Lung                        −400                  1500
Brain - Bone                 480                  2000

- In
step 140 of method 100, the blending weight α is generated based on the content/context indicia. - In certain implementations, each of the smoothing/denoising parameters corresponding to an image in the stack can be optimized for a particular detection task (e.g., the detection of lung nodules or the detection of lesions in soft tissue). Then, depending on user-defined inputs related to the desired diagnosis (e.g., the WW and slice thickness), a blending weight α (or, more generally, as described below, a blending map) is generated using weights corresponding to the user inputs. In certain implementations, the blended image is generated automatically based on the WW and the slice thickness. Additionally, in certain implementations, blending is performed using a spatially varying blending map; the blending weight of each voxel is determined by a blending map that is obtained from classifying different organs in a reconstructed CT image (e.g., one or more of the images in the image stack).
- In certain implementations, the blended image is generated using a single value for blending weight α (e.g., the weight of the first image of the stack is α and the weight of the second image of the stack is (1-α)), and the blending weight α is a function of only two input variables: the first variable being the window width variable ww and the second variable being the slice thickness variable st.
- For example, when the blended image p(Blended) is a weighted summation of a small-smoothing-parameter image p(βS) and a large-smoothing-parameter image p(βL), then blending weight α can be given by the equation
-
- wherein μmin is a minimum value of the window width (e.g., 420 HU) and μmax is a maximum value of the window width (e.g., 1200 HU). In this implementation, the two images p(βS) and p(βL) are blended with blending weight α determined by the window width ww and slice thickness st currently selected by the user for the displayed image. In the equation above, the window width variable ww is in terms of HU and the slice thickness variable st is in terms of a number of voxel layers. When α=1, the blended image p(Blended) is entirely the small-smoothing-parameter image p(βS), whereas the blended image p(Blended) is entirely the large-smoothing-parameter image p(βL) when α=0. The above equation expresses, in part, the logic that noise becomes less significant and is not a primary concern when either (i) the window width is large or (ii) a larger slice thickness causes the noise to be reduced due to averaging multiple layers of voxels. Therefore, when at least one of these conditions is met, the optimal noise-resolution tradeoff is skewed in favor of improved resolution by increasing contributions of the small-smoothing-parameter image p(βS) in the blended image.
- On the other hand, a narrower window width indicates that the signal is likely to have a small amplitude (e.g., the features of interest have low contrast with small changes in HU values), increasing the importance of noise suppression by using a larger smoothing parameter. This is especially true in the absence of noise suppression due to layer averaging (i.e., when the slice thickness is small, corresponding to a single layer of voxels). Therefore, in this case, the optimal tradeoff shifts towards a blended image having increased contributions from the large-smoothing-parameter image p(βL).
- Variations of this implementation can be used without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art.
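Because the equation for α(ww, st) did not reproduce in this text, the following sketch implements one plausible piecewise-linear form consistent with the stated logic and the stated μmin and μmax values; the exact functional form is an assumption.

```python
def blending_weight(ww, st, mu_min=420.0, mu_max=1200.0):
    """Blending weight alpha in [0, 1] from the window width ww (HU) and
    the slice thickness st (number of voxel layers).

    The piecewise-linear form is an ASSUMED implementation of the logic in
    the text: alpha -> 1 (favor the small-smoothing-parameter image) when
    the window is wide or layer averaging already suppresses the noise,
    and alpha -> 0 (favor the large-smoothing-parameter image) for a
    narrow window displayed with a single voxel layer.
    """
    if st > 1:  # voxel-layer averaging reduces noise, so favor resolution
        return 1.0
    # Single layer: ramp linearly from 0 at mu_min up to 1 at mu_max.
    return min(1.0, max(0.0, (ww - mu_min) / (mu_max - mu_min)))
```

Under this sketch, a wide lung window (e.g., 1500 HU) or a thick slice yields alpha = 1, while a narrow soft-tissue window on a single layer yields alpha = 0, matching the endpoint behavior described around the missing equation.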
- In certain implementations, the blending weight α can be a blending map representing a blending ratio that has a spatial dependence, as shown in
FIG. 8. FIGS. 7 and 8 both relate to blending maps. FIG. 7 illustrates an example of a function for the blending weight α, along the vertical axis, as a function of an average HU value within a given region. The average HU value can be determined using any one or a combination of images from the stack. - Consider, for example, the case of a stack containing only two images. The processes described below for determining the average HU values can be applied to the large-smoothing-parameter image p(βL) because the large-smoothing-parameter image p(βL) already exhibits a significant amount of smoothing, which is an averaging over nearby voxels. Therefore, calculating the average HU values from the large-smoothing-parameter image p(βL) reduces the amount of additional smoothing required to obtain the average HU values.
- For example, the average HU value can be calculated using a window function (e.g., a Gaussian, Hann, Hamming, Blackman-Harris, or other window function known in signal processing) to weight the averaging of surrounding pixels/voxels to calculate the local average HU value for each voxel within the reconstructed images of the stack. This can be performed as a convolution, for example (i.e., low-pass filtering).
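A minimal sketch of the convolution approach, assuming a Gaussian window (one of the listed options) and a plain NumPy separable implementation; the truncation radius and sigma are illustrative choices, not from the source.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian window, truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def local_average_hu(image, sigma=3.0):
    """Local average HU by low-pass filtering: a separable 2-D Gaussian
    convolution applied along the rows and then along the columns."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, image, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")
```

Because the kernel is normalized, a region of uniform HU passes through unchanged away from the image borders, so the filter output can be read directly as a local average HU map for the blending step.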
- Alternatively, the image space can be segmented using, e.g., a threshold and region growing method or any other known segmentation method, and each segmented region can be averaged to obtain an average HU value for each of the segmented regions. In certain implementations, transitions between regions can be smoothed, e.g., using a feathering function, a spline function, an arctangent function, or any other method known to smooth transitions between regions.
- Further variations of determining the average HU value can be used without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art.
- Returning to
FIG. 7 and having determined the average HU values for the respective voxels of the blended image, blending weights α for the respective voxels are generated using a lookup table or are calculated using a function such as the function shown in FIG. 7. The blending weight α in FIG. 7 assumes a stack having only two images, such that when α=1 the blended image p(Blended) is entirely the small-smoothing-parameter image p(βS), and the blended image p(Blended) is entirely the large-smoothing-parameter image p(βL) when α=0. FIGS. 4A, 4B, 5A, 5B, 6A, and 6B illustrate that HU values of certain regions tend to cluster according to a region type, and this general association between HU values and region types is indicated by the labels along the horizontal axis of FIG. 7. - Additionally, in certain implementations, dual-energy CT or spectrally-resolved CT can be used to perform material decomposition, and the complementary information provided by material decomposition can be used together with or in place of the average HU value for enhanced discrimination of region types and for the subsequent selection of the optimal position-dependent weighted combination of images from the stack to generate the blended image.
- Further, variations of the blending weight α function, including variations of the inputs (e.g., average HU value, material-component ratio, etc.) and of the outputs (e.g., for N images in the stack there can be N−1 values of α per voxel to define the ratios between the N images), can be used without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art.
-
FIG. 7 exemplifies the logic that different average HU values can be indicia of content/context, and, therefore, the average HU values can be used to determine the weightings used in generating the blended image. That is, which weighting is likely optimal can depend on the content in the region, which is indicated by the average HU value. As discussed above for Table 1, specific imaging regions and applications can have unique optimality conditions, and different clinical and procedural applications of CT imaging can use variations of the function shown in FIG. 7 relating average HU to the blending weight α. - For example, in brain trauma diagnosis, strong smoothing is desired in soft-tissue regions to reveal internal bleeding, while high resolution is desired in bone regions to reveal fractures. The function shown in
FIG. 7 has advantages when used for the above-identified application, including exhibiting high resolution in bone regions together with noise suppression in the soft-tissue regions. Further, the function shown in FIG. 7 advantageously exhibits high resolution in the lung regions. - In another example, in contrast-enhanced CT, high resolution is desired in the contrast-enhanced region while low noise is desired in other regions.
- In certain implementations, the function relating average HU to the blending weight α can depend on other content/context indicia (e.g., the clinical/procedural application or the body part being imaged).
- Variations in the shape of the blending-weight function can be used without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art.
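One plausible way to realize a FIG. 7-style curve is a piecewise-linear lookup. The breakpoints below are assumptions sketching the behavior described in the text: α = 1 in lung-like HU ranges, α = 0 in soft tissue, a transition between −300 HU and −50 HU, and a rise back toward 1 for bone-like HU values (the bone-side breakpoints are likewise assumed).

```python
import numpy as np

# ASSUMED breakpoints for an illustrative FIG. 7-style blending curve;
# alpha = 1 favors the small-smoothing-parameter (sharp) image.
HU_BREAKPOINTS = [-300.0, -50.0, 300.0, 700.0]
ALPHA_AT_BREAKS = [1.0, 0.0, 0.0, 1.0]

def blending_map(avg_hu):
    """Per-voxel blending weights alpha_j from local average HU values.
    np.interp clamps to the end values outside the breakpoint range, so
    deep-lung and dense-bone values both map to alpha = 1."""
    return np.interp(avg_hu, HU_BREAKPOINTS, ALPHA_AT_BREAKS)
```

Applied to a local-average-HU map, this yields a spatially varying blending map in a single vectorized call; swapping the breakpoint lists changes the clinical behavior without touching the blending code.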
-
FIG. 8 shows a slice of a blending map generated by applying the blending-weight function of FIG. 7 to the stack of CT images used in generating FIGS. 1A, 1B, 2A, and 2B. The blending-weight function can be expressed as
- wherein HUj is the average HU value at the voxel indicated by index j, and αj is the blending weight α corresponding to index j. According to this function,
FIG. 8 shows that a small amount of smoothing together with high resolution is selected in the lung regions, but most of the soft-tissue regions are optimized to suppress noise by using a large amount of smoothing/denoising. The front of the chest region is in the transition range between −300 HU and −50 HU and represents a compromise between high resolution and high noise reduction. - In step 150 of
method 100, the blended image is generated by the weighted combination of images from the stack. For example, the blended image can be generated by performing a weighted sum of the images from the stack. - In particular, when the stack has two images and the blending weight α(ww, st) is a function of the window width variable ww and the slice thickness variable st, the blended image can be generated using the expression
-
p(Blended) = α(ww, st) p(βS) + [1 − α(ww, st)] p(βL). - Additionally, when a blending map αj(HUj) is used (e.g., with the HU-to-alpha mapping shown in
FIGS. 7 and 8 ), then the blended image can be given by the equation -
pj(Blended) = αj pj(βS) + (1 − αj) pj(βL), - wherein pj(Blended), pj(βS), and pj(βL) are respectively the jth voxels of the blended image, the small-smoothing-parameter image, and the large-smoothing-parameter image.
- When a blending map is used, the blended image is a combination of images from the stack, wherein the relative contributions among the images from the stack can vary voxel by voxel. Thus, spatially varying smoothing/denoising can be achieved to obtain an optimal resolution-to-noise tradeoff in every region of the blended image. Since the blending ratio is determined automatically, the user is not required to adjust the smoothing strength or blending ratio while navigating through different regions, which provides a simple and seamless user experience.
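The per-voxel blend maps directly onto elementwise array arithmetic; the array names in this sketch are illustrative.

```python
import numpy as np

def blend_voxelwise(alpha_map, p_small, p_large):
    """Elementwise blend: each output voxel is
    alpha_j * p_j(betaS) + (1 - alpha_j) * p_j(betaL),
    so the relative contributions can vary voxel by voxel."""
    return alpha_map * p_small + (1.0 - alpha_map) * p_large

# Tiny 2x2 illustration: the top row comes entirely from the sharp image,
# the bottom row entirely from the smooth image.
p_s = np.array([[1.0, 2.0], [3.0, 4.0]])
p_l = np.array([[10.0, 20.0], [30.0, 40.0]])
alpha = np.array([[1.0, 1.0], [0.0, 0.0]])
blended = blend_voxelwise(alpha, p_s, p_l)
```

Because the operation is purely elementwise, regenerating the displayed image after a display-parameter change costs only one multiply-add pass over the volume, which is what makes on-the-fly re-blending practical in the workflow described above.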
- Variations of step 150, including three or more images in the stack and different weighted combinations of the images in the stack (e.g., weighted arithmetic averaging, weighted geometric averaging, weightings that are functions of a figure of merit for the noise and/or resolution, etc.), can be used without departing from the spirit or essential characteristics thereof, as will be understood by those skilled in the art.
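The weighted-arithmetic-averaging variation for stacks of three or more images can be sketched as follows; the function name and weight-normalization convention are assumptions, chosen so each voxel receives a convex combination of the stack images.

```python
import numpy as np

def blend_stack(weights, stack):
    """Weighted arithmetic average over an N-image stack.

    weights may be shape (N,) for one scalar weight per image, or
    (N, H, W) for spatially varying blending maps; the weights are
    normalized to sum to 1 at every voxel (a convex combination).
    """
    stack = np.asarray(stack, dtype=float)
    w = np.asarray(weights, dtype=float)
    if w.ndim == 1:
        # Broadcast one scalar weight per image across the image axes.
        w = w.reshape(-1, *([1] * (stack.ndim - 1)))
    w = w / w.sum(axis=0)
    return (w * stack).sum(axis=0)
```

With three or more images, the normalized weights position the blend anywhere inside the triangle (or higher polygon) in the tradeoff space described earlier for points "A," "B," and "C."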
-
FIG. 9 shows (center) a reconstructed image generated using the blending map illustrated in FIG. 8. Also shown in FIG. 9 is a magnification (upper left) of the soft-tissue region represented in FIGS. 2A and 2B displayed using the soft-tissue settings, and a magnification (lower right) of the lung region represented in FIGS. 1A and 1B displayed using the lung settings. Comparing FIG. 9 with FIGS. 1A, 1B, 2A, and 2B reveals that, using method 100 with a blending map implementation, the desirable aspects of FIG. 1A and the desirable aspects of FIG. 2B have been combined within a single blended image. That is, method 100 produces a single image with low noise in the soft-tissue region while preserving high resolution in the lung regions. - Accordingly, by
method 100, a blended image can be generated automatically from a stack of two or more reconstructed images having different degrees of smoothing/denoising. Further, the blended image can be generated and displayed without additional input or burden on a user, eliminating the need to adjust the smoothing parameter or blending weight manually. Further, the tradeoff between resolution and noise can be simultaneously optimized in all regions of interest. - Next, a hardware description, according to exemplary embodiments, is described with reference to
FIG. 10 for a data-processing apparatus 300 for processing the CT projection data and the stack of reconstructed images by performing method 100 and the various processes herein. In FIG. 10, the data-processing apparatus 300 for processing CT data includes a CPU 301 which performs the processes described above, including method 100 shown in FIG. 3, the processes described herein, and variations as would be known to a person of ordinary skill in the art. The process data and instructions may be stored in memory 302. These processes and instructions may also be stored on a storage medium disk 304 such as a hard drive (HDD) or portable storage medium, or may be stored remotely. Further, the claimed advancements are not limited by the form of the computer-readable media on which the instructions of the inventive process are stored. For example, the instructions may be stored on CDs, DVDs, in FLASH memory, RAM, ROM, PROM, EPROM, EEPROM, hard disk or any other information processing device with which the data-processing apparatus 300 communicates, such as a server or computer. - Further, the claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with
CPU 301 and an operating system such as Microsoft Windows 7, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art. -
CPU 301 may be a Xeon or Core processor from Intel of America or an Opteron processor from AMD of America, or may be other processor types that would be recognized by one of ordinary skill in the art. Alternatively, the CPU 301 may be implemented using a GPU processor, such as a Tegra processor from Nvidia Corporation, and an operating system, such as Multi-OS. Moreover, the CPU 301 may be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 301 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above. - The data-
processing apparatus 300 in FIG. 10 also includes a network controller 306, such as an Intel Ethernet PRO network interface card from Intel Corporation of America, for interfacing with network 400. As can be appreciated, the network 400 can be a public network, such as the Internet, or a private network, such as a LAN or WAN, or any combination thereof, and can also include PSTN or ISDN sub-networks. The network 400 can also be wired, such as an Ethernet network, or can be wireless, such as a cellular network including EDGE, 3G and 4G wireless cellular systems. The wireless network can also be WiFi, Bluetooth, or any other wireless form of communication that is known. - The data-
processing apparatus 300 further includes a display controller 308, such as an NVIDIA GeForce GTX or Quadro graphics adaptor from NVIDIA Corporation of America, for interfacing with display 310, such as a Hewlett Packard HPL2445w LCD monitor. A general purpose I/O interface 312 interfaces with a keyboard and/or mouse 314 as well as a touch screen panel 316 on or separate from display 310. The general purpose I/O interface also connects to a variety of peripherals 318 including printers and scanners, such as an OfficeJet or DeskJet from Hewlett Packard. - A
sound controller 320, such as a Sound Blaster X-Fi Titanium from Creative, is also provided in the data-processing apparatus 300 to interface with speakers/microphone 322, thereby providing sounds and/or music. - The general
purpose storage controller 324 connects the storage medium disk 304 with communication bus 326, which may be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the data-processing apparatus 300. A description of the general features and functionality of the display 310, keyboard and/or mouse 314, as well as the display controller 308, storage controller 324, network controller 306, sound controller 320, and general purpose I/O interface 312 is omitted herein for brevity as these features are known.
FIG. 11 illustrates an implementation of the radiography gantry included in a CT apparatus or scanner. As shown in FIG. 11, a radiography gantry 500 is illustrated from a side view and further includes an X-ray tube 501, an annular frame 502, and a multi-row or two-dimensional-array-type X-ray detector 503. The X-ray tube 501 and X-ray detector 503 are diametrically mounted across an object OBJ on the annular frame 502, which is rotatably supported around a rotation axis RA. A rotating unit 507 rotates the annular frame 502 at a high speed, such as 0.4 sec/rotation, while the object OBJ is being moved along the axis RA into or out of the illustrated page. - The first embodiment of an X-ray computed tomography (CT) apparatus according to the present inventions will be described below with reference to the views of the accompanying drawing. Note that X-ray CT apparatuses include various types of apparatuses, e.g., a rotate/rotate-type apparatus in which an X-ray tube and X-ray detector rotate together around an object to be examined, and a stationary/rotate-type apparatus in which many detection elements are arrayed in the form of a ring or plane, and only an X-ray tube rotates around an object to be examined. The present inventions can be applied to either type. In this case, the rotate/rotate type, which is currently the mainstream, will be exemplified.
- The multi-slice X-ray CT apparatus further includes a
high voltage generator 509 that generates a tube voltage applied to the X-ray tube 501 through a slip ring 508 so that the X-ray tube 501 generates X-rays. The X-rays are emitted towards the object OBJ, whose cross-sectional area is represented by a circle. For example, the X-ray tube 501 can have an average X-ray energy during a first scan that is less than an average X-ray energy during a second scan. Thus, two or more scans can be obtained corresponding to different X-ray energies. - The
X-ray detector 503 is located at an opposite side from the X-ray tube 501 across the object OBJ for detecting the emitted X-rays that have transmitted through the object OBJ. The X-ray detector 503 further includes individual detector elements or units. - The CT apparatus further includes other devices for processing the detected signals from
X-ray detector 503. A data acquisition circuit or a Data Acquisition System (DAS) 504 converts a signal output from the X-ray detector 503 for each channel into a voltage signal, amplifies the signal, and further converts the signal into a digital signal. The X-ray detector 503 and the DAS 504 are configured to handle a predetermined total number of projections per rotation (TPPR). - The above-described data is sent to a
preprocessing device 506, which is housed in a console outside the radiography gantry 500, through a non-contact data transmitter 505. The preprocessing device 506 performs certain corrections, such as sensitivity correction, on the raw data. A memory 512 stores the resultant data, which is also called projection data, at a stage immediately before reconstruction processing. The memory 512 is connected to a system controller 510 through a data/control bus 511, together with a reconstruction device 514, input device 515, and display 516. The system controller 510 controls a current regulator 513 that limits the current to a level sufficient for driving the CT system. - The detectors are rotated and/or fixed with respect to the patient among the various generations of CT scanner systems. In one implementation, the above-described CT system can be an example of a combined third-generation geometry and fourth-generation geometry system. In the third-generation system, the
X-ray tube 501 and the X-ray detector 503 are diametrically mounted on the annular frame 502 and are rotated around the object OBJ as the annular frame 502 is rotated about the rotation axis RA. In the fourth-generation geometry system, the detectors are fixedly placed around the patient and an X-ray tube rotates around the patient. In an alternative embodiment, the radiography gantry 500 has multiple detectors arranged on the annular frame 502, which is supported by a C-arm and a stand. - The
memory 512 can store the measurement value representative of the irradiance of the X-rays at the X-ray detector unit 503. Further, the memory 512 can store a dedicated program for executing methods 100 and 200 for post-reconstruction processing and enhancement of reconstructed CT images. - The
reconstruction device 514 can reconstruct CT images and can execute post-processing of the reconstructed CT images, including methods 100 and 200 described herein. Further, the reconstruction device 514 can execute pre-reconstruction processing and image processing, such as volume rendering processing and image difference processing, as needed. - The pre-reconstruction processing of the projection data performed by the
preprocessing device 506 can include correcting for detector calibrations, detector nonlinearities, and polar effects, for example. - Post-reconstruction processing performed by the
reconstruction device 514 can include filtering and smoothing the image, volume rendering processing, and image difference processing, as needed. Further, the post-reconstruction processing can include jagged-edge removal and resolution enhancement using method 100 and/or 200. The image reconstruction process can be performed using known methods, including, e.g., filtered backprojection, iterative reconstruction, algebraic reconstruction techniques, ordered subsets, and acceleration techniques. The reconstruction device 514 can use the memory to store, e.g., projection data, reconstructed images, calibration data and parameters, and computer programs. - The
reconstruction device 514 can include a CPU (processing circuitry) that can be implemented as discrete logic gates, as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other Complex Programmable Logic Device (CPLD). An FPGA or CPLD implementation may be coded in VHDL, Verilog, or any other hardware description language, and the code may be stored in an electronic memory directly within the FPGA or CPLD, or as a separate electronic memory. Further, the memory 512 can be non-volatile, such as ROM, EPROM, EEPROM, or FLASH memory. The memory 512 can also be volatile, such as static or dynamic RAM, and a processor, such as a microcontroller or microprocessor, can be provided to manage the electronic memory as well as the interaction between the FPGA or CPLD and the memory. - Alternatively, the CPU in the
reconstruction device 514 can execute a computer program including a set of computer-readable instructions that perform the functions described herein, the program being stored in any of the above-described non-transitory electronic memories and/or a hard disk drive, CD, DVD, FLASH drive, or any other known storage media. Further, the computer-readable instructions may be provided as a utility application, background daemon, or component of an operating system, or a combination thereof, executing in conjunction with a processor, such as a Xeon processor from Intel of America or an Opteron processor from AMD of America, and an operating system, such as Microsoft VISTA, UNIX, Solaris, LINUX, Apple MAC-OS, and other operating systems known to those skilled in the art. Further, the CPU can be implemented as multiple processors cooperatively working in parallel to perform the instructions. - In one implementation, the reconstructed images can be displayed on a
display 516. The display 516 can be an LCD display, CRT display, plasma display, OLED, LED, or any other display known in the art. - The
memory 512 can be a hard disk drive, CD-ROM drive, DVD drive, FLASH drive, RAM, ROM, or any other electronic storage known in the art. - While certain implementations have been described, these implementations have been presented by way of example only, and are not intended to limit the teachings of this disclosure. Indeed, the novel methods, apparatuses, and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the methods, apparatuses, and systems described herein may be made without departing from the spirit of this disclosure.
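The preprocessing step described above (sensitivity correction of the raw data to produce projection data) can be sketched generically as follows. The flat-field normalization and negative-log mapping shown here are a standard CT preprocessing sketch, not the patent's specific corrections, and the channel counts are illustrative values.

```python
import math

def to_projection_data(raw_counts, air_scan_counts):
    """Generic CT preprocessing sketch: normalize each detector channel by
    an air (flat-field) scan to correct per-channel sensitivity, then take
    the negative log of the transmission to obtain line integrals, i.e.,
    the "projection data" handed to the reconstruction step."""
    return [-math.log(raw / air)
            for raw, air in zip(raw_counts, air_scan_counts)]

# Two hypothetical detector channels: one lightly, one heavily attenuated.
proj = to_projection_data(raw_counts=[900.0, 450.0],
                          air_scan_counts=[1000.0, 900.0])
```

The more strongly attenuated channel yields the larger line integral, which is the quantity that filtered backprojection or iterative reconstruction inverts into an image.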
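The context-oriented blending named in the title can be illustrated with a minimal sketch. This is an assumption-laden stand-in for methods 100 and 200, which this excerpt does not define: a smoothed reconstruction and the original reconstruction are combined pixel-by-pixel with weighting coefficients that would, in practice, be derived from local context (e.g., lung, soft tissue, or bone regions). The images and weight map below are tiny placeholder arrays.

```python
def blend(smooth_img, sharp_img, weights):
    """Pixel-wise blend: blended = w * smooth + (1 - w) * sharp.
    A weight near 1 favors the smoothed (noise-suppressed) image;
    a weight near 0 preserves the original (sharper) image."""
    return [
        [w * s + (1.0 - w) * p for w, s, p in zip(w_row, s_row, p_row)]
        for w_row, s_row, p_row in zip(weights, smooth_img, sharp_img)
    ]

# Hypothetical 2x2 reconstructions and a context-dependent weight map.
smooth = [[0.0, 0.0], [0.0, 0.0]]   # heavily smoothed reconstruction
sharp = [[1.0, 2.0], [3.0, 4.0]]    # original reconstruction
w = [[1.0, 0.5], [0.0, 0.25]]       # per-pixel weighting coefficients
blended = blend(smooth, sharp, w)
```

In a real implementation the weight map would be computed from a segmentation or context measure rather than hand-specified, but the blending arithmetic itself reduces to this convex combination.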
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/884,089 US10643319B2 (en) | 2018-01-30 | 2018-01-30 | Apparatus and method for context-oriented blending of reconstructed images |
JP2019005392A JP7330703B2 (en) | 2018-01-30 | 2019-01-16 | Medical image processing device and X-ray CT system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/884,089 US10643319B2 (en) | 2018-01-30 | 2018-01-30 | Apparatus and method for context-oriented blending of reconstructed images |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190236763A1 true US20190236763A1 (en) | 2019-08-01 |
US10643319B2 US10643319B2 (en) | 2020-05-05 |
Family
ID=67391586
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/884,089 Active 2038-11-10 US10643319B2 (en) | 2018-01-30 | 2018-01-30 | Apparatus and method for context-oriented blending of reconstructed images |
Country Status (2)
Country | Link |
---|---|
US (1) | US10643319B2 (en) |
JP (1) | JP7330703B2 (en) |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SE0400731D0 (en) * | 2004-03-22 | 2004-03-22 | Contextvision Ab | Method, computer program product and apparatus for enhancing a computerized tomography image |
JP5080986B2 (en) * | 2005-02-03 | 2012-11-21 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Diagnostic imaging system and diagnostic imaging method |
US20080285881A1 (en) * | 2005-02-07 | 2008-11-20 | Yaniv Gal | Adaptive Image De-Noising by Pixels Relation Maximization |
CA2748234A1 (en) * | 2008-12-25 | 2010-07-01 | Medic Vision - Imaging Solutions Ltd. | Denoising medical images |
US8280135B2 (en) * | 2009-01-20 | 2012-10-02 | Mayo Foundation For Medical Education And Research | System and method for highly attenuating material artifact reduction in x-ray computed tomography |
WO2010132722A2 (en) * | 2009-05-13 | 2010-11-18 | The Regents Of The University Of California | Computer tomography sorting based on internal anatomy of patients |
JP5759159B2 (en) | 2010-12-15 | 2015-08-05 | 富士フイルム株式会社 | Radiation tomographic image generation method and apparatus |
JP6154375B2 (en) | 2011-07-28 | 2017-06-28 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | Image generation device |
EP2783344B1 (en) | 2011-11-23 | 2017-05-03 | Koninklijke Philips N.V. | Image domain de-noising |
US9510799B2 (en) | 2012-06-11 | 2016-12-06 | Konica Minolta, Inc. | Medical imaging system and medical image processing apparatus |
DE102014206720A1 (en) * | 2014-04-08 | 2015-10-08 | Siemens Aktiengesellschaft | Noise reduction in tomograms |
WO2015168147A1 (en) * | 2014-04-29 | 2015-11-05 | Carl Zeiss X-ray Microscopy, Inc. | Segmentation and spectrum based metal artifact reduction method and system |
US9911208B2 (en) | 2016-04-11 | 2018-03-06 | Toshiba Medical Systems Corporation | Apparatus and method of iterative image reconstruction using regularization-parameter control |
US11234666B2 (en) * | 2018-05-31 | 2022-02-01 | Canon Medical Systems Corporation | Apparatus and method for medical image reconstruction using deep learning to improve image quality in position emission tomography (PET) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10902649B2 (en) * | 2016-03-11 | 2021-01-26 | Shimadzu Corporation | Image reconstruction processing method, image reconstruction processing program, and tomography apparatus provided therewith |
US20180122108A1 (en) * | 2016-10-31 | 2018-05-03 | Samsung Electronics Co., Ltd. | Medical imaging apparatus and method of processing medical image |
US10818045B2 (en) * | 2016-10-31 | 2020-10-27 | Samsung Electronics Co., Ltd. | Medical imaging apparatus and method of processing medical image |
US11810290B2 (en) * | 2018-12-19 | 2023-11-07 | Siemens Healthcare Gmbh | Method and computer system for generating a combined tissue-vessel representation |
US11315221B2 (en) | 2019-04-01 | 2022-04-26 | Canon Medical Systems Corporation | Apparatus and method for image reconstruction using feature-aware deep learning |
CN110490832A (en) * | 2019-08-23 | 2019-11-22 | 哈尔滨工业大学 | A kind of MR image reconstruction method based on regularization depth image transcendental method |
US11544899B2 (en) * | 2019-10-15 | 2023-01-03 | Toyota Research Institute, Inc. | System and method for generating terrain maps |
CN112869761A (en) * | 2019-11-29 | 2021-06-01 | 株式会社日立制作所 | Medical image diagnosis support system, medical image processing apparatus, and medical image processing method |
US11436786B2 (en) * | 2019-11-29 | 2022-09-06 | Fujifilm Healthcare Corporation | Medical diagnostic imaging support system, medical image processing device, and medical image processing method |
US20210104023A1 (en) * | 2020-05-18 | 2021-04-08 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for image reconstruction |
US11847763B2 (en) * | 2020-05-18 | 2023-12-19 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for image reconstruction |
CN111539008A (en) * | 2020-05-22 | 2020-08-14 | 支付宝(杭州)信息技术有限公司 | Image processing method and device for protecting privacy |
CN111667428A (en) * | 2020-06-05 | 2020-09-15 | 北京百度网讯科技有限公司 | Noise generation method and device based on automatic search |
US20220351431A1 (en) * | 2020-08-31 | 2022-11-03 | Zhejiang University | A low dose sinogram denoising and pet image reconstruction method based on teacher-student generator |
US20220138911A1 (en) * | 2020-11-05 | 2022-05-05 | Massachusetts Institute Of Technology | Neural network systems and methods for removing noise from signals |
CN112330665A (en) * | 2020-11-25 | 2021-02-05 | 沈阳东软智能医疗科技研究院有限公司 | CT image processing method, device, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
JP2019130302A (en) | 2019-08-08 |
US10643319B2 (en) | 2020-05-05 |
JP7330703B2 (en) | 2023-08-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CANON MEDICAL SYSTEMS CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHAN, CHUNG;YU, ZHOU;ZHOU, JIAN;SIGNING DATES FROM 20180125 TO 20180126;REEL/FRAME:044777/0536 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |