US20090052727A1 - Methods for non-linear image blending, adjustment and display

Info

Publication number
US20090052727A1
Authority
US
United States
Prior art keywords
image
image data
blending
linear
parameter
Prior art date
Legal status
Granted
Application number
US11/976,129
Other versions
US8233683B2
Inventor
Christian Eusemann
David Holmes
Current Assignee
Siemens Healthineers AG
Mayo Foundation for Medical Education and Research
Original Assignee
Individual
Priority date
Filing date
Publication date
Priority to US11/976,129
Application filed by Individual
Assigned to SIEMENS MEDICAL SOLUTIONS USA, INC. (assignor: EUSEMANN, CHRISTIAN)
Assigned to SIEMENS AKTIENGESELLSCHAFT (assignor: SIEMENS MEDICAL SOLUTIONS USA, INC.)
Assigned to MAYO FOUNDATION FOR MEDICAL EDUCATION AND RESEARCH (assignor: HOLMES, DAVID)
Publication of US20090052727A1
Priority to US13/528,159 (published as US8712128B2)
Publication of US8233683B2
Application granted
Assigned to SIEMENS HEALTHCARE GMBH (assignor: SIEMENS AKTIENGESELLSCHAFT)
Assigned to Siemens Healthineers AG (assignor: SIEMENS HEALTHCARE GMBH)
Status: Active; expiration adjusted

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 2210/00: Indexing scheme for image generation or computer graphics
    • G06T 2210/41: Medical


Abstract

A method for generating and adjusting an image obtained based on computed tomography data using a non-linear blending function is provided. In one embodiment of the method, first image data is obtained using a first X-ray energy, and second image data is obtained using a second X-ray energy. An image is generated by blending the first and second image data using the non-linear function. The first X-ray energy and the second X-ray energy are different.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This non-provisional U.S. patent application claims priority under 35 U.S.C. §119(e) to provisional application No. 60/935,675 filed on Aug. 24, 2007, the entire contents of which are hereby incorporated herein by reference.
  • BACKGROUND
  • In conventional methods of X-ray imaging, attenuation depends on the type of body tissue scanned and the average energy level of the X-ray beam. The average energy level of the X-ray beam may be adjusted via an X-ray tube's energy setting. An X-ray tube's energy setting is measured in kilovolts (kV).
  • Conventionally, computed tomography (CT) imaging may be performed using a single energy level (referred to as single energy CT imaging) or dual energy levels (referred to as dual energy imaging). Dual energy images may be acquired using two or more scans of different energies during a single procedure or using one or more energy sources.
  • In conventional single energy CT imaging, image data is obtained using a single energy value, for example, 120 kV. In conventional dual energy CT imaging, dual image data (e.g., two image data sets or two images) is obtained using two different energy levels (e.g., 80 kV and 140 kV). Dual image data may be obtained concurrently, simultaneously or sequentially. If two different energy levels are used to acquire dual energy images, each of the two sets of image data may have different attenuation characteristics. The difference in attenuation levels allows for classification of elemental chemical compositions of imaged tissues.
  • Different energy levels may also impact contrast resolution and/or noise characteristics of respective image data. For example, 80 kV image data may provide greater contrast resolution than 140 kV image data. But, the 80 kV image data may be noisier than the 140 kV image data. To exploit the potential advantages of both (for example, the 80 kV and the 140 kV image data), conventional dual energy CT systems may combine the higher and lower energy image data into resultant image data using a linear mixing ratio.
  • In one example, a conventional linear mixing ratio may be 70/30. In this case, resultant image data may be obtained by blending 70% of the 140 kV image data with 30% of the 80 kV image data, as sketched below. Methods for linear blending of image data are well-known in the art. Thus, a detailed discussion will be omitted for the sake of brevity.
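  • For illustration only, a minimal NumPy sketch of the conventional fixed-ratio linear mix described above follows; the array names and the random stand-in data are assumptions, not part of this disclosure.

```python
import numpy as np

# Stand-ins for co-registered dual energy slices in Hounsfield Units.
img_140kv = np.random.normal(40.0, 10.0, (512, 512))  # higher energy, less noisy
img_80kv = np.random.normal(40.0, 25.0, (512, 512))   # lower energy, more contrast but noisier

# Conventional linear blending: one fixed 70/30 ratio applied to every voxel.
linear_blend = 0.7 * img_140kv + 0.3 * img_80kv
```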
  • In a more specific example, dual energy image data for a pancreas may have somewhat grainy 80 kV images lacking the sharp contours that may be seen in, for example, 120-140 kV images. However, the 80 kV images may have a better contrast resolution than the 120-140 kV images. The better contrast resolution may enable physicians to differentiate between tissues. Thus, in conventional methods of linear combining, benefits of the 80 kV image data and the 120-140 kV image data may be at least partially offset by the drawbacks (e.g., noise) due to the linear nature of the combination.
  • Conventional linear blending may also provide additional diagnostic information to a viewing physician. However, illustrating the additional diagnostic information to the physician may be problematic because of the drawbacks of the linear combination.
  • SUMMARY
  • Example embodiments provide improved blending of at least two sets of image information (e.g., lower and higher image information) such that the benefits of each image information may be appreciated and visualized in a single resultant image. A tunable user interface for physicians (e.g., radiologists) may further enhance diagnosis capabilities. Example embodiments also provide organ and/or pathology specific presets, which may simplify case review.
  • Example embodiments provide improved blending of image information (e.g., image data) obtained using a plurality of X-ray sources such that the benefits of each image information may be appreciated and/or visualized in a single resultant image.
  • In at least one example embodiment, the plurality of image data sets may include a first image data set obtained using a first X-ray energy level (e.g., between about 120 kV and about 140 kV, inclusive, although energy levels higher than 140 kV or lower than 120 kV may be used as well) and a second image data set obtained using a second X-ray energy level (e.g., less than or equal to about 100 kV). The first and second energy levels may be different. The first and second image data may be blended together or combined using a non-linear function to generate resultant image data. A resultant image may be displayed to a user (e.g., a physician, such as a radiologist) via a display.
  • Example embodiments provide a method for non-linear blending of a plurality of image data sets using a parametric or non-linear function or algorithm.
  • Methods according to example embodiments provide temporally and spatially independent resultant image data for generating an image. Each resultant image is generated based on a combination or blend of a plurality of image data sets at each voxel. That is, for example, each voxel of a resultant image may have its individual blend of the plurality of image data sets, as illustrated in the sketch below. Each image voxel may be generated independently, and thus, the resultant image may be independent of dimensionality.
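  • As a hedged illustration of this per-voxel independence (the weight map below is arbitrary and only serves to contrast a scalar ratio with a voxel-wise blend):

```python
import numpy as np

i1 = np.random.normal(40.0, 25.0, (512, 512))  # stand-in low energy image data
i2 = np.random.normal(40.0, 10.0, (512, 512))  # stand-in high energy image data

# A scalar ratio gives every voxel the same blend; a weight map with the same
# shape as the image gives each voxel its own blend.
w = np.clip((i1 - i1.min()) / (np.ptp(i1) + 1e-9), 0.0, 1.0)  # arbitrary example weights
result = i1 * (1.0 - w) + i2 * w  # voxel-wise combination
```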
  • Example embodiments also provide a resultant image needing only fine adjustments for viewing different anatomy regions.
  • At least one example embodiment provides a method for generating an image based on computed tomography data. According to at least this example embodiment, a computed tomography image may be generated by blending image data associated with a plurality of X-ray energy levels. The blending may be performed using a non-linear blending function. The computed tomography image may be displayed to a user and/or stored in a memory.
  • At least one other example embodiment provides a computed tomography (CT) apparatus. The CT apparatus may include a CT unit and a display. The CT unit may be configured to generate a computed tomography image by blending image data associated with a plurality of X-ray energy levels. The blending may be performed using a non-linear blending function. The display may be for displaying the resultant image to the user.
  • According to at least some example embodiments, first image data may be obtained using X-rays having a first X-ray energy level, and second image data may be obtained using X-rays having a second X-ray energy level. The computed tomography image may be generated by blending the first and second image data using the non-linear function. The first and second X-ray energy levels may be different.
  • The first and second image data may be blended according to a blending ratio, wherein the computed tomography image may be generated by blending unequal portions of the first image data and the second image data.
  • According to at least some example embodiments, a blending parameter may be calculated based on the first image data. The blending parameter may be indicative of a blending ratio for blending first and second image data. The computed tomography image may be adjusted by modifying the blending parameter and/or at least one image parameter associated with the computed tomography image. The blending parameter may be a non-linear blending value. The at least one parameter may include at least one of a slice number, viewing window and at least one non-linear blending function parameter. The at least one parameter may include at least one non-linear blending function parameter, the at least one non-linear blending function parameter including at least one of a non-linear blending width and a non-linear blending level of the computed tomography image. The non-linear blending function, width, and/or level may be a moidal blending function, moidal width, and/or moidal level.
  • According to at least some example embodiments, the first image data may include individual image data associated with each of a plurality of first image voxels, and the second image data may include individual image data associated with each of a plurality of second image voxels. The image data for each first image voxel may be blended with corresponding image data for each second image voxel using the non-linear blending function to generate a plurality of individual resultant image voxels. The computed tomography image may be generated based on the plurality of individual resultant image voxels. The non-linear blending function may be a moidal blending function.
  • According to at least some example embodiments, the CT unit may further include a first energy source and a second energy source. The first energy source may emit X-rays at a first energy level, and the second energy source may emit X-rays at a second energy level. The CT unit may be further configured to obtain first image data based on X-rays emitted from the first energy source, obtain second image data based on X-rays emitted from the second energy source, and generate the CT image by blending the first and second image data using the non-linear function.
  • The CT unit may further include a non-linear blending module. The non-linear blending module may be configured to calculate a blending parameter based on the first image data. The blending parameter may be indicative of a blending ratio for blending first and second image data. The non-linear blending module may also generate computed tomography image data by blending the first image data and the second image data based on the blending parameter. The CT unit may generate the computed tomography image based on the computed tomography image data.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Example embodiments will become more apparent by describing in detail the example embodiments shown in the attached drawings in which:
  • FIG. 1 is a flowchart illustrating a method according to an example embodiment;
  • FIG. 2 is a block diagram illustrating a computed tomography (CT) unit according to an example embodiment;
  • FIG. 3 illustrates an example graphical user interface (GUI) capable of enabling users to change slice number, moidal level and/or moidal width in real time; and
  • FIG. 4 shows two graphs (a) and (b), which illustrate the moidal level and/or moidal width dependence of example embodiments.
  • DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
  • Various example embodiments will now be described more fully with reference to the accompanying drawings in which only some example embodiments are shown. Specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments. The present invention, however, may be embodied in many alternate forms and should not be construed as limited to only the example embodiments set forth herein.
  • Accordingly, while example embodiments of the invention are capable of various modifications and alternative forms, embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit example embodiments of the present invention to the particular forms disclosed. On the contrary, example embodiments are to cover all modifications, equivalents, and alternatives falling within the scope of the invention. Like numbers refer to like elements throughout the description of the figures.
  • It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention. As used herein, the term “and/or,” includes any and all combinations of one or more of the associated listed items.
  • It will be understood that when an element is referred to as being “connected,” or “coupled,” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected,” or “directly coupled,” to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between,” versus “directly between,” “adjacent,” versus “directly adjacent,” etc.).
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
  • Although not required, example embodiments will be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computer processors or CPUs. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The program modules discussed herein may be implemented using existing hardware in existing CT scanners.
  • Example embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
  • The acts and symbolic representations of operations described herein may be performed by one or more processors, unless indicated otherwise. As such, it will be understood that such acts and operations, which are at times referred to as being computer-executed, include the manipulation by the processor of electrical signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in the memory system of the computer, which reconfigures or otherwise alters the operation of the computer in a manner well understood by those skilled in the art.
  • The data structures where data is maintained are physical locations of the memory that have particular properties defined by the format of the data. However, while example embodiments are described in the foregoing context, it is not meant to be limiting as those of skill in the art will appreciate that various acts and operations described hereinafter may also be implemented in hardware.
  • Example embodiments provide methods for combining a plurality of image data sets obtained using a plurality of energy levels to generate improved image visualization. As described herein, image information, image data and image data set may be used interchangeably, and may refer to image data used to generate and display a corresponding image to a user. Image, on the other hand, may refer to the image displayed to the user.
  • According to at least one example embodiment, at least two different energy levels may be used to obtain at least two image data sets, and the at least two image data sets may be blended into a single resultant image data set. The single image data set may be used to generate improved image visualization. At least one example embodiment provides a method of organ specific non-linear mixing of image information that improves the blending of both low and high image information to capture the potential benefits of each image data set in a single image.
  • FIG. 1 is a flowchart illustrating a method according to an example embodiment. FIG. 2 is a block diagram illustrating a CT unit according to an example embodiment. The CT unit shown in FIG. 2 may be a dual-energy CT unit and may have similar or substantially similar functionality to CT units as are well-known in the art. But, the CT unit of FIG. 2 may have the additional functionality described herein. Example embodiments will be described with regard to FIGS. 1 and 2 collectively. However, it will be understood that methods according to example embodiments may be implemented in any suitable CT unit.
  • Although example embodiments will be discussed with regard to being implemented in a CT unit, example embodiments discussed herein may be implemented on any computer. For example, example embodiments may be implemented on a computer completely separate from or integrated with a CT unit. In one example, the methods described herein may be implemented on a conventional personal computer or laptop on which the CT data or images may be loaded.
  • Referring to FIGS. 1 and 2, at S102, CT scanner 22 may obtain multiple energy data using a plurality of energy sources 220-1-220-N, where N is a natural number. Although FIG. 2 illustrates a plurality of energy sources 220-1-220-N, multiple energy data may be obtained using a single source as described above.
  • Two or more of the energy sources 220-1-220-N may have different X-ray energy levels. In one example embodiment, the CT scanner 22 may include two energy sources 220-1 and 220-2. However, example embodiments are not limited thereto.
  • The dual energy data may include a plurality of image data sets, each of which may be obtained using a different X-ray energy level. For example purposes, methods according to example embodiments will be described herein with regard to two image data sets obtained using two different X-ray energy sources 220-1 and 220-2 emitting different X-ray energy levels (e.g., about 80 kV and about 140 kV). The energy levels will be referred to herein as first and second energy levels, and the data sets will sometimes be referred to as first energy image data I1 and second energy image data I2. In example embodiments described herein, the first and second energy levels may be different. For example, the first energy level may be a lower energy level such as about 80 kV, whereas the second energy level may be a higher energy level, for example, about 140 kV.
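  • One way the two image data sets might be loaded in practice is sketched below, assuming the pydicom package; the file paths, and the choice of pydicom itself, are illustrative assumptions rather than part of the described CT unit.

```python
import pydicom  # assumed third-party dependency; not part of this disclosure

def load_slice_hu(path):
    """Read one CT slice and rescale stored values to Hounsfield Units."""
    ds = pydicom.dcmread(path)
    return ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

# Hypothetical paths to co-registered slices from the two energy scans.
i1 = load_slice_hu("series_80kv/slice_0001.dcm")   # first energy image data I1
i2 = load_slice_hu("series_140kv/slice_0001.dcm")  # second energy image data I2
```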
  • Returning back to FIGS. 1 and 2, after obtaining the dual energy data, the dual energy data may be loaded into non-linear blending module 222, at S104. The non-linear blending module 222 may be in the form of a software module or computer executable instructions running on the CT unit 22. Alternatively, the non-linear blending module 222 may be in the form of a software module or computer executable instructions running on a personal computer separate from the CT unit 22.
  • The non-linear blending module 222 may generate resultant image data by blending the first energy image data and the second energy image data according to a non-linear blending function. A method for doing so will be described in more detail below.
  • In one example, the resultant image data may be generated based on the first energy image data and the second energy image data using a modified sigmoid function, wherein the first and second energy image data are different. Hereinafter, the modified sigmoid function will be referred to as a moidal function, and the method for blending using the moidal function will be referred to as moidal blending. As is well-known in the art, a sigmoid function is a parametric function described by Equation (1) shown below.
  • ς = 1 / (1 + e^(−x))   (1)
  • According to example embodiments, the moidal blending function may be described by Equation (2) shown below.
  • μ = 1 / (1 + e^(−(I1 − λμ) / ωμ))   (2)
  • In Equation (2), μ is referred to as the moidal value, I1 represents the first energy image data (e.g., in Hounsfield Units for image voxels), λμ represents the moidal level and ωμ is the moidal width. The moidal value μ may be referred to as a blending parameter, and may be indicative of a blending ratio for blending image data to generate resultant image data. The blending ratio refers to the amount of each image data set included in the blend. In some examples, a blending ratio may be represented by a percentage. The blending ratio described herein is adaptive and dynamic, and may vary over time and/or space.
  • The moidal level λμ and the moidal width ωμ may be collectively referred to as moidal function parameters. The moidal function parameters may be organ specific constants, which may be freely adjusted by a user. According to example embodiments, the moidal function parameters may be determined based on the organ of interest, the object being scanned (e.g., the person), the scanner, the energies, etc. In one example, for an average person having an abdominal scan using 140 kV and 80 kV energies, a moidal level of 170 and a moidal width of 60 may be a sufficient starting point, as in the sketch below.
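  • A minimal sketch of Equation (2) follows, seeded with the abdominal starting values from the text (moidal level 170, moidal width 60); the function and variable names are assumptions.

```python
import numpy as np

def moidal_value(i1, level=170.0, width=60.0):
    """Blending parameter mu per Equation (2): a sigmoid of the first energy
    image data i1 (in HU), shifted by the moidal level and scaled by the width."""
    return 1.0 / (1.0 + np.exp(-(i1 - level) / width))

# Voxels well below the level lean toward I1; voxels well above lean toward I2.
print(moidal_value(np.array([-100.0, 170.0, 400.0])))  # approx. [0.011, 0.5, 0.979]
```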
  • The calculated moidal value may be utilized to generate the resultant image data IOut according to Equation (3) shown below.

  • IOut = I1*(1 − μ) + I2*μ   (3)
  • In equation (3), I2 represents the second energy image data (e.g., Hounsfield Units for image voxels). As discussed above, the moidal level λμ and the moidal width ωμ may be organ specific constants, which may be freely adjusted by a user.
  • According to example embodiments, Equation (3) may be used to generate resultant image data IOut for each voxel of the resultant image. A resultant image may then be generated based on the image data for each voxel. Methods for generating a resultant image based on resultant image data are well-known in the art, and thus, a detailed discussion for doing so will be omitted for the sake of brevity.
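  • Combining Equations (2) and (3), a voxel-wise blending routine might look like the following sketch; it assumes co-registered arrays of identical shape and is not the only possible implementation.

```python
import numpy as np

def moidal_blend(i1, i2, level=170.0, width=60.0):
    """Resultant image data IOut per Equation (3), computed for every voxel.

    i1: first energy image data (e.g., 80 kV), i2: second energy image data
    (e.g., 140 kV), both in Hounsfield Units and of identical shape.
    """
    mu = 1.0 / (1.0 + np.exp(-(i1 - level) / width))  # Equation (2)
    return i1 * (1.0 - mu) + i2 * mu                  # Equation (3)
```

  • Because μ is computed element-wise from I1, each voxel of the output carries its own blend of the two data sets, consistent with the per-voxel description above.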
  • In one example, a resultant image may include a blended image of conventional resolution based on the plurality of images provided to the algorithm. An example resolution is 512×512. In this example, the non-linear blending module may calculate resultant voxel image data IOut for each voxel of the resultant image. Per voxel image data generation is well-known in the art, and as such, a detailed discussion will be omitted for the sake of brevity.
  • Once IOut is obtained, the output image may be generated by displaying IOut as is well-known in the art. For example, if IOut for voxel one is 75 HU, voxel one may be displayed on the screen with this value. The same procedure may be performed for each of the 512×512 voxels. In other words, once the IOut image is generated, the image may be displayed on a screen using well-known methods including conventional intensity scaling, for example, as sketched below.
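  • A sketch of the conventional intensity scaling mentioned above: a viewing window maps the HU output to 8-bit gray levels for display (the soft-tissue window values here are illustrative assumptions).

```python
import numpy as np

def window_to_display(hu, center=40.0, width=400.0):
    """Clip HU values to a viewing window and scale to 8-bit gray levels."""
    lo, hi = center - width / 2.0, center + width / 2.0
    scaled = (np.clip(hu, lo, hi) - lo) / (hi - lo)
    return (scaled * 255.0).astype(np.uint8)
```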
  • Returning back to FIGS. 1 and 2, at S106 a resultant image may be generated and displayed to the user based on the resultant image data generated by the non-linear blending module 222. In at least one example embodiment, the resultant image may be displayed to the user via a graphical user interface (GUI). FIG. 3 illustrates a GUI for displaying the resultant image to the user, according to an example embodiment.
  • As noted above, FIG. 3 illustrates a GUI enabling users to change image parameters in real time. Image parameters may include, for example, slice number, viewing window/level and/or moidal function parameters. As discussed above, moidal function parameters may further include moidal level and/or moidal width.
  • Utilizing the GUI to vary one or more of the image parameters in real-time may enable a physician to improve the detection of an anatomy and/or pathology of interest. The image parameters (e.g., slice number, moidal level and/or moidal width) may be changed using input devices 26-1-26-K. The input devices 26-1-26-K may include at least one of a keyboard, a mouse, etc.
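  • The text does not specify how the GUI is implemented; purely as an illustration, real-time adjustment of the moidal function parameters could be sketched with matplotlib slider widgets as follows (the data here is random stand-in data).

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

i1 = np.random.normal(40.0, 25.0, (512, 512))  # stand-in low energy image data
i2 = np.random.normal(40.0, 10.0, (512, 512))  # stand-in high energy image data

def moidal_blend(i1, i2, level, width):
    mu = 1.0 / (1.0 + np.exp(-(i1 - level) / width))
    return i1 * (1.0 - mu) + i2 * mu

fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.25)  # leave room for the sliders
image = ax.imshow(moidal_blend(i1, i2, 170.0, 60.0), cmap="gray")

level_slider = Slider(plt.axes([0.2, 0.10, 0.6, 0.03]), "moidal level", -500.0, 1000.0, valinit=170.0)
width_slider = Slider(plt.axes([0.2, 0.05, 0.6, 0.03]), "moidal width", 1.0, 200.0, valinit=60.0)

def update(_):
    # Re-blend and redraw whenever either slider moves.
    image.set_data(moidal_blend(i1, i2, level_slider.val, width_slider.val))
    fig.canvas.draw_idle()

level_slider.on_changed(update)
width_slider.on_changed(update)
plt.show()
```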
  • Returning to FIGS. 1 and 2, if the user is not satisfied with the default moidal function parameters (e.g., the moidal level and/or moidal width) at S108, the user may select anatomy specific moidal function parameters (e.g., anatomy specific moidal level and/or moidal width) at S110 as described above. The user may determine whether the default moidal function parameters are sufficient based on the displayed image.
  • For example, if the image is not sufficiently clear for the intended anatomy or pathology diagnosis, the user may determine that an anatomy specific moidal level and/or moidal width is necessary. On the other hand, if the image is sufficiently clear for the intended anatomy or pathology diagnosis, the user may determine that an anatomy or pathology specific moidal level and/or moidal width is not necessary. Anatomy or pathology specific moidal levels and/or widths may be specified by the user.
  • Still referring to FIGS. 1 and 2, after selecting an anatomy specific moidal level and/or moidal width at S110, the user may select a desired slice number at S112 via the GUI as described above. As is well-known in the art, a slice is a single cross-section through the object with acquisition-specific resolution. The selection of the slice may be used to further improve the displayed image.
  • If the image quality is acceptable at S116, the user may continue with diagnosis of the anatomy and pathology of the selected slice. The image quality determination at S116 may be performed in the same or substantially the same manner as at S108.
  • Returning to S116, if the image quality is unacceptable, the image parameters (e.g., moidal level, moidal width, slice number and/or viewing window) may be adjusted at S118. One or more of these adjustments may be made selectively via the computer display 24, for example, using a graphical user interface (GUI). For example, the user may adjust image parameters via the GUI based on the displayed image.
  • After adjusting the image parameters, the method proceeds to S116 and continues as described above.
  • Returning to S108 in FIG. 1, if the default moidal function parameters are sufficient, the method proceeds to S112, and continues as described above.
  • FIG. 4 shows two graphs (a) and (b), which illustrate the moidal function parameter dependence of methods according to example embodiments. As will be appreciated from graphs (a) and (b), the high energy image information lies above the moidal curve and the low energy image information lies below the curve.
  • Graph (a) in FIG. 4 illustrates that an increase in moidal level λμ may provide an output image IOut that utilizes an increased amount of high energy image information. Lowering the moidal level λμ, on the other hand, may increase the amount of information taken from the low energy image information.
  • Graph (b) illustrates the impact of adjusting the moidal width ωμ. As shown, if the moidal width ωμ is about 0, the non-linear blending function or algorithm behaves in a thresholding manner. For example, anything above the moidal level may be taken (or constitutes a greater contribution) from the first energy image and anything below may be taken from the second energy image. The moidal level may function as a separator (or threshold) between first and second energy image information used as input for IOut. On the other hand, a moidal width ωμ of about 100 behaves in a linear blending manner. That is, for example, the non-linear function having a moidal width of 100 may be relatively close to the limit of the moidal width approaching infinity. In this case, the blending function may seem linear in the range of values common to CT (e.g., about −2000 to about 2000).
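  • The two limiting behaviors can be checked numerically; in the sketch below, a very small width yields a hard threshold at the moidal level, while a width of 100 varies so slowly that the blend looks nearly linear around the level.

```python
import numpy as np

hu = np.array([140.0, 165.0, 169.0, 171.0, 175.0, 200.0])  # sample I1 values in HU
level = 170.0

mu = lambda width: 1.0 / (1.0 + np.exp(-(hu - level) / width))
print(mu(0.1))    # approx. [0, 0, 0, 1, 1, 1]: thresholding at the moidal level
print(mu(100.0))  # approx. [0.43, 0.49, 0.50, 0.50, 0.51, 0.57]: nearly linear
```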
  • Although described herein with regard to an example moidal blending technique, example embodiments may be implemented using alternative non-linear blending functions, such as a Gaussian, piece-wise linear, parabolic, or similar function, a composition of several functions, or the like.
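  • As one hedged example of such an alternative, a Gaussian weight can be swapped in for the moidal value while keeping the same combination step; this is an illustrative substitution, not a form prescribed by the text beyond its naming of Gaussian functions.

```python
import numpy as np

def gaussian_weight(i1, level=170.0, width=60.0):
    """Alternative blending parameter: a Gaussian weight that favors i2 for
    voxels near the chosen level and falls back to i1 away from it."""
    return np.exp(-((i1 - level) ** 2) / (2.0 * width ** 2))

def blend_with(i1, i2, weight_fn):
    w = weight_fn(i1)
    return i1 * (1.0 - w) + i2 * w  # same combination step as Equation (3)
```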
  • Although not specifically discussed herein, image data may be processed prior to the non-linear blending. For example, the obtained image data may be registered, filtered, reconstructed, etc., as in the sketch below. In this case, the obtained image data is computed tomography derived image data, and the resultant image is a computed tomography derived image.
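  • The pre-processing step is left open above; the following is a minimal sketch assuming one simple choice, a Gaussian denoising filter from SciPy applied to both acquisitions before the moidal blend. The function name, the sigma value, and the filter choice are illustrative assumptions, and registration is omitted for brevity.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_then_blend(I1, I2, level, width, sigma=1.0):
    """Filter both energy volumes, then apply the moidal blend."""
    I1f = gaussian_filter(I1, sigma=sigma)  # denoise first energy volume
    I2f = gaussian_filter(I2, sigma=sigma)  # denoise second energy volume
    z = np.clip((I1f - level) / width, -60.0, 60.0)
    mu = 1.0 / (1.0 + np.exp(-z))
    return I1f * (1.0 - mu) + I2f * mu

# Usage with synthetic noisy volumes standing in for two acquisitions.
rng = np.random.default_rng(1)
vol1 = rng.normal(0.0, 200.0, size=(8, 8, 8))
vol2 = vol1 + rng.normal(40.0, 20.0, size=(8, 8, 8))
blended = preprocess_then_blend(vol1, vol2, level=0.0, width=50.0)
print(blended.shape)
```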
  • Methods according to example embodiments may be machine implemented via one or more computers or processors. In addition, the systems discussed herein may be embodied in the form of one or more computers configured to carry out methods described herein.
  • Example embodiments may also be implemented, in software, for example, as any suitable computer program. For example, a program in accordance with one or more example embodiments of the present invention may be a computer program product causing a computer to execute one or more of the example methods described herein, such as a method for generating, adjusting and/or displaying an image based on non-linear blending of computed tomography image data.
  • The computer program product may include a computer-readable medium having computer program logic or code portions embodied thereon for enabling a processor of the apparatus to perform one or more functions in accordance with one or more of the example methodologies described above. The computer program logic may thus cause the processor to perform one or more of the example methodologies, or one or more functions of a given methodology described herein.
  • The computer-readable medium may be a built-in medium installed inside a computer main body or a removable medium arranged so that it can be separated from the computer main body. Examples of the built-in medium include, but are not limited to, rewritable non-volatile memories, such as RAMs, ROMs, flash memories, and hard disks. Examples of a removable medium may include, but are not limited to, optical storage media such as CD-ROMs and DVDs; magneto-optical storage media such as MOs; magnetic storage media such as floppy disks, cassette tapes, and removable hard disks; media with a built-in rewritable non-volatile memory such as memory cards; and media with a built-in ROM, such as ROM cassettes.
  • These programs may also be provided in the form of an externally supplied propagated signal and/or a computer data signal (e.g., wireless or terrestrial) embodied in a carrier wave. The computer data signal embodying one or more instructions or functions of an example methodology may be carried on a carrier wave for transmission and/or reception by an entity that executes the instructions or functions of the example methodology. For example, the functions or instructions of the example embodiments may be implemented by processing one or more code segments of the carrier wave in a computer, where instructions or functions may be executed for generating, adjusting and/or displaying an image in accordance with example embodiments described herein.
  • Further, such programs, when recorded on computer-readable storage media, may be readily stored and distributed. The storage medium, when read by a computer, may enable the computer to carry out the methods and/or implement the apparatuses in accordance with the example embodiments described herein.
  • Example embodiments being thus described, it will be obvious that the same may be varied in many ways. For example, the methods according to example embodiments of the present invention may be implemented in hardware and/or software. The hardware/software implementations may include a combination of processor(s) and article(s) of manufacture. The article(s) of manufacture may further include storage media and executable computer program(s), for example, a computer program product stored on a computer readable medium.
  • The executable computer program(s) may include the instructions to perform the described operations or functions. The computer executable program(s) may also be provided as part of externally supplied propagated signal(s). Such variations are not to be regarded as a departure from the spirit and scope of the example embodiments, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.

Claims (25)

1. A method for generating an image based on computed tomography data, the method comprising:
generating the image by blending computed tomography image data associated with a plurality of X-ray energy levels, the blending being performed using a non-linear blending function.
2. The method of claim 1, further comprising:
obtaining first image data using X-rays having a first X-ray energy level; and
obtaining second image data using X-rays having a second X-ray energy level, wherein the generating generates the image by blending the first and second image data using the non-linear blending function.
3. The method of claim 2, wherein the first and second X-ray energy levels are different.
4. The method of claim 2, wherein the first and second image data are blended according to a non-linear blending ratio, and wherein the generating generates the image by blending unequal portions of the first image data and the second image data.
5. The method of claim 2, further comprising:
calculating a non-linear blending parameter based on the first image data, the non-linear blending parameter being indicative of a non-linear blending ratio for blending first and second image data, wherein the generating generates the image by blending the first image data and the second image data based on the non-linear blending parameter.
6. The method of claim 5, further comprising:
adjusting the image by modifying the non-linear blending parameter.
7. The method of claim 5, wherein the non-linear blending parameter is a non-linear value μ, the non-linear value μ being calculated according to the following equation:
μ = 1 / (1 + exp(−(I1 − λμ) / ωμ)); wherein
I1 represents the first image data, λμ represents a non-linear level and ωμ represents a non-linear width associated with the first image data.
8. The method of claim 5, wherein the generating generates the image IOut based on the following equation:

IOut = I1*(1 − μ) + I2*μ; wherein
I1 represents the first image data, I2 represents the second image data, and μ represents the non-linear blending parameter.
9. The method of claim 2, wherein the first image data includes individual image data associated with each of a plurality of first image voxels, and the second image data includes individual image data associated with each of a plurality of second image voxels, the generating further comprising:
blending image data for each first image voxel with corresponding image data for each second image voxel using the non-linear blending function to generate a plurality of individual resultant image voxels; and
generating the image based on the plurality of individual resultant image voxels.
10. The method of claim 9, further comprising:
adjusting the image by modifying a non-linear blending parameter associated with each individual resultant image voxel.
11. The method of claim 9, further comprising:
adjusting the image by modifying at least one image parameter associated with each individual resultant image voxel.
12. The method of claim 1, further comprising:
adjusting the image by modifying at least one image parameter associated with the image.
13. The method of claim 12, wherein the at least one image parameter includes at least one of a slice number, a viewing window or level, and at least one non-linear function parameter.
14. The method of claim 12, wherein the at least one image parameter includes at least one non-linear function parameter, the at least one non-linear function parameter including at least one of a non-linear width and a non-linear level of the image.
15. The method of claim 1, further comprising:
at least one of storing or displaying the image to a user.
16. An apparatus comprising:
a processing unit to generate an image by blending computed tomography image data associated with a plurality of X-ray energy levels, the blending being performed using a non-linear blending function.
17. The apparatus of claim 16, further comprising:
a CT unit useable to,
obtain first image data based on X-rays emitted at a first energy level, and
obtain second image data based on X-rays emitted at a second energy level, wherein
the processing unit generates the image by blending the first and second image data using the non-linear blending function.
18. The apparatus of claim 17, wherein the first and second energy levels are different.
19. The apparatus of claim 17, wherein the first and second image data are blended according to a blending ratio, the processing unit being further useable to generate the image by blending unequal portions of the first image data and the second image data.
20. The apparatus of claim 17, wherein the processing unit further comprises:
a non-linear blending module to,
calculate a non-linear blending parameter based on the first image data, the non-linear blending parameter being indicative of a blending ratio for blending first and second image data, and
generate resultant image data by blending the first image data and the second image data based on the non-linear blending parameter, wherein
the processing unit is useable to generate the image based on the resultant image data.
21. The apparatus of claim 20, wherein the non-linear blending module is useable to generate the resultant image data IOut based on the following equation:

IOut = I1*(1 − μ) + I2*μ; wherein
I1 represents the first image data, I2 represents the second image data, and μ represents the non-linear blending parameter.
22. The apparatus of claim 17, wherein the processing unit is further useable to adjust the image by modifying, via an input device, at least one image parameter associated with the image.
23. The apparatus of claim 17, wherein the first image data includes individual image data associated with a plurality of first image voxels, and the second image data includes individual image data associated with a plurality of second image voxels, the processing unit being further useable to,
blend image data for each of the first image voxels with corresponding image data for each of the second image voxels using the non-linear blending function to generate a plurality of individual resultant image voxels, and
generate the image based on the plurality of individual resultant image voxels.
24. The apparatus of claim 16, wherein the processing unit is further useable to adjust the image by modifying at least one of a non-linear blending parameter and an image parameter associated with the image.
25. The apparatus of claim 16, further comprising:
a display to display the generated image.
US11/976,129 2007-08-24 2007-10-22 Methods for non-linear image blending, adjustment and display Active 2031-05-02 US8233683B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/976,129 US8233683B2 (en) 2007-08-24 2007-10-22 Methods for non-linear image blending, adjustment and display
US13/528,159 US8712128B2 (en) 2007-08-24 2012-06-20 Methods for non-linear image blending, adjustment and display

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US93567507P 2007-08-24 2007-08-24
US11/976,129 US8233683B2 (en) 2007-08-24 2007-10-22 Methods for non-linear image blending, adjustment and display

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/528,159 Continuation US8712128B2 (en) 2007-08-24 2012-06-20 Methods for non-linear image blending, adjustment and display

Publications (2)

Publication Number Publication Date
US20090052727A1 true US20090052727A1 (en) 2009-02-26
US8233683B2 US8233683B2 (en) 2012-07-31

Family ID=40382197

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/976,129 Active 2031-05-02 US8233683B2 (en) 2007-08-24 2007-10-22 Methods for non-linear image blending, adjustment and display
US13/528,159 Active US8712128B2 (en) 2007-08-24 2012-06-20 Methods for non-linear image blending, adjustment and display

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/528,159 Active US8712128B2 (en) 2007-08-24 2012-06-20 Methods for non-linear image blending, adjustment and display

Country Status (3)

Country Link
US (2) US8233683B2 (en)
JP (1) JP2009050706A (en)
DE (1) DE102008038555A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5643805B2 (en) * 2009-03-26 2014-12-17 コーニンクレッカ フィリップス エヌ ヴェ Perfusion image
WO2022268618A1 (en) 2021-06-24 2022-12-29 Koninklijke Philips N.V. Multi-energy x-ray imaging with anatomical intelligence

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0599099B1 (en) * 1992-11-24 1999-03-31 Eastman Kodak Company Tonal consistency in a radiographic image network
JP2004173076A (en) * 2002-11-21 2004-06-17 Canon Inc Gray-scale transformation processing method and image processing system using the same
US7272429B2 (en) * 2002-11-27 2007-09-18 Ge Medical Systems Global Technology Company, Llc Methods and apparatus for facilitating a reduction in artifacts
JP4579559B2 (en) * 2004-03-03 2010-11-10 株式会社日立メディコ X-ray equipment
WO2006123581A1 (en) 2005-05-18 2006-11-23 Hitachi Medical Corporation Radiograph and image processing program
JP4794238B2 (en) * 2005-08-10 2011-10-19 株式会社日立メディコ Multi-energy X-ray CT system
US7515682B2 (en) 2006-02-02 2009-04-07 General Electric Company Method and system to generate object image slices

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040101104A1 (en) * 2002-11-27 2004-05-27 Avinash Gopal B. Method and apparatus for soft-tissue volume visualization
US20050110802A1 (en) * 2003-11-26 2005-05-26 Avinash Gopal B. Method and apparatus for segmentation-based image operations
US20060228036A1 (en) * 2005-03-29 2006-10-12 Avinash Gopal B Volumetric image enhancement system and method

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110025691A1 (en) * 2009-07-30 2011-02-03 Siemens Aktiengesellschaft Method and device for displaying computed-tomography examination data from an examination object
US20110038452A1 (en) * 2009-08-12 2011-02-17 Kabushiki Kaisha Toshiba Image domain based noise reduction for low dose computed tomography fluoroscopy
US8687871B2 (en) * 2009-08-12 2014-04-01 Kabushiki Kaisha Toshiba Image domain based noise reduction for low dose computed tomography fluoroscopy
WO2012069990A1 (en) * 2010-11-26 2012-05-31 Koninklijke Philips Electronics N.V. Image processing apparatus
US20130236124A1 (en) * 2010-11-26 2013-09-12 Koninklijke Philips Electronics N.V. Image processing apparatus
US8934697B2 (en) * 2010-11-26 2015-01-13 Koninklijke Philips N.V. Image processing apparatus
RU2596995C2 (en) * 2010-11-26 2016-09-10 Конинклейке Филипс Электроникс Н.В. Image processing device
US10546370B2 (en) 2014-09-05 2020-01-28 Koninklijke Philips N.V. Visualization of spectral image data
US11361415B2 (en) * 2017-10-09 2022-06-14 Koninklijke Philips N.V. Material-selective adaptive blending of volumeiric image data

Also Published As

Publication number Publication date
US20120321164A1 (en) 2012-12-20
JP2009050706A (en) 2009-03-12
US8233683B2 (en) 2012-07-31
US8712128B2 (en) 2014-04-29
DE102008038555A1 (en) 2009-06-25

Similar Documents

Publication Publication Date Title
US10147168B2 (en) Spectral CT
US8712128B2 (en) Methods for non-linear image blending, adjustment and display
US9916655B2 (en) Image fusion scheme for differential phase contrast imaging
CN105025794B (en) Structural propagation recovery for spectral CT
US7515682B2 (en) Method and system to generate object image slices
Forghani An update on advanced dual-energy CT for head and neck cancer imaging
Clark et al. Hybrid spectral CT reconstruction
EP2923332B1 (en) Projection data de-noising
Forghani Advanced dual-energy CT for head and neck cancer imaging
US7920669B2 (en) Methods, apparatuses and computer readable mediums for generating images based on multi-energy computed tomography data
EP3213298B1 (en) Texture analysis map for image data
Forghani et al. Applications of dual-energy computed tomography for the evaluation of head and neck squamous cell carcinoma
Noda et al. Deep learning image reconstruction algorithm for pancreatic protocol dual-energy computed tomography: image quality and quantification of iodine concentration
Pérez-Lara et al. Spectral computed tomography: technique and applications for head and neck cancer
US10169848B2 (en) Restoration of low contrast structure in de-noise image data
Xu et al. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis
Sananmuang et al. Dual energy computed tomography in head and neck imaging: pushing the envelope
US9159124B2 (en) Contrast to noise ratio (CNR) enhancer
Sugrue et al. Virtual monochromatic reconstructions of dual energy CT in abdominal trauma: optimization of energy level improves pancreas laceration conspicuity and diagnostic confidence
Hashemi et al. Optimal image reconstruction for detection and characterization of small pulmonary nodules during low-dose CT
Dorn et al. Towards context‐sensitive CT imaging—organ‐specific image formation for single (SECT) and dual energy computed tomography (DECT)
CN111201452B (en) Material selective adaptive blending of volumetric image data
US20200342639A1 (en) Single ct backprojector with one geometry calculation per voxel for multiple different types of projection data
US20200334870A1 (en) Image reconstruction employing tailored edge preserving regularization
Lin et al. Insights about cervical lymph nodes: Evaluating deep learning–based reconstruction for head and neck computed tomography scan

Legal Events

Date Code Title Description
AS Assignment

Owner name: SIEMENS MEDICAL SOLUTIONS USA, INC., PENNSYLVANIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EUSEMANN, CHRISTIAN;REEL/FRAME:022243/0713

Effective date: 20081212

Owner name: SIEMENS AKTIENGESELLSCHAFT, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS MEDICAL SOLUTIONS USA, INC.;REEL/FRAME:022242/0030

Effective date: 20090109

Owner name: MAYO FOUNDATION FOR MEDICAL EDUCATION AND RESEARCH

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HOLMES, DAVID;REEL/FRAME:022243/0707

Effective date: 20080904

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: SIEMENS HEALTHCARE GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS AKTIENGESELLSCHAFT;REEL/FRAME:039011/0366

Effective date: 20160610

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12

AS Assignment

Owner name: SIEMENS HEALTHINEERS AG, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS HEALTHCARE GMBH;REEL/FRAME:066088/0256

Effective date: 20231219