WO2008065594A1 - Variable alpha blending of anatomical images with functional images - Google Patents

Variable alpha blending of anatomical images with functional images

Info

Publication number
WO2008065594A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
blend
factor
blending
pixel
Prior art date
Application number
PCT/IB2007/054782
Other languages
French (fr)
Inventor
Ronaldus F. J. Holthuizen
Arianne Van Muiswinkel
Frank G. C. Hoogenraad
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2008065594A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/503 Blending, e.g. for anti-aliasing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/41 Medical

Definitions

  • the invention relates to the field of visualizing image data and more specifically to blending multiple images.
  • Blending multiple images allows visualizing these images in one image.
  • the intensity cntl(i, j) of a pixel of a control image cntl at a location (i, j) comprises a blend factor, also referred to as alpha factor or just as alpha, for blending a corresponding pixel of the first source image src1 at the location (i, j) and a corresponding pixel of the second source image src2 at the location (i, j) to produce a corresponding pixel of the destination image dst at the location (i, j).
  • the control image is a mask, which assigns different degrees of visibility to different areas in the first source image and in the second source image.
  • pixels representing a car in the first source image are assigned a blend factor equal to 1 and other pixels are assigned a blend factor equal to 0.
  • the destination image represents the car of the first source image superposed on a background scene of the second source image.
  • the control image pixel values near an edge of the car would have fractional values to make the edge formed by the car and the background scene appear more realistic.
  • a limitation of blending multiple images described in Ref. 1 is that obtaining a control image based on a source image may be a tedious task involving delineating an object in the source image. This task has to be repeated for every new source image.
  • a physician sometimes needs to analyze tens or hundreds of new medical images a day. Under such circumstances, constructing a control image for every set of images for blending is not possible. Therefore, blending medical images is carried out using one blend factor for all blended pixels.
  • a system for blending a first image and a second image, based on a reference image comprises: a computation unit for computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, the blend factor being computed based on a blend-factor map for mapping pixels of the reference image into a range of values; and a blend unit for blending the pixel of the first image and the corresponding pixel of the second image, based on the computed blend factor.
  • the first image, the second image, and the reference image describe the same area of human anatomy, e.g. the brain.
  • the first image may represent a scan of a first contrast and the second image may represent a scan of a second contrast.
  • the reference image typically is an image related to the first image and/or to the second image and is used for computing an optimal blend factor for blending pixels of the first image and the second image.
  • the reference image may be an anatomical image of the brain.
  • the computation unit is arranged to compute a blend factor for each pair of pixels of the first image and of the second image, which are to be blended.
  • the blend-factor value is computed based on a blend-factor map for mapping pixels of the reference image into a range of values, typically a closed interval [0, 1].
  • the blend-factor map may be predetermined or may be determined based on a user input, for example.
  • the computed blend-factor value may depend on a location of a corresponding pixel of the reference image, i.e. a pixel corresponding to the blended pixels of the first image and the second image, on an intensity of the corresponding pixel of the reference image and/or on intensities of pixels of the reference image surrounding the corresponding pixel of the reference image.
  • the blend unit is arranged to blend the first image and the second image, using the computed blend factors.
  • the system for blending the first image and the second image is capable of computing blend factors for blending the two images on a per-pixel basis. This allows for selectively showing important regions of the first image and of the second image in the blended image.
  • the blend-factor map depends only on the intensities of pixels of the reference image.
  • the blend-factor map does not depend on a location of the pixel.
  • the blend-factor map can be defined based on a map of a range of intensities of the reference image into a range of blend-factor values.
  • this map may be determined based on the histogram of the reference image.
  • the blend-factor map is a power of an affine map.
  • the affine map, determined by two parameters, a slope and a y-intercept, maps an intensity of a pixel of the reference image into the range of blend-factor values.
  • the affine map may be advantageously interpreted as a map for adjusting contrast and brightness of the reference image and mapping the contrast-adjusted and brightness-adjusted reference image into the range of blend-factor values.
  • the slope describes a contrast adjustment parameter and the y-intercept describes a brightness adjustment parameter.
  • the power function is defined by an exponent.
  • the system further comprises a determination unit for determining a value of at least one parameter defining the blend-factor map.
  • the determination unit may be arranged to accept a user input, and to determine a parameter of the blend-factor map based on the user input.
  • the user input may comprise a value of the slope, a value of the y-intercept, and a value of the exponent for determining the power of the affine map.
  • a value of a parameter of the blend-factor map may be computed by the system, based on the reference image.
  • the reference image is the first image. This allows blending the reference image and the second image.
  • the first image is a grayscale image and the second image is a color image.
  • the blended image is a color image.
  • Each color component of the blended image is obtained by blending the grayscale image and a respective color component of the color image using the blend-factor map based on a grayscale reference image. The embodiment allows blending a grayscale image and a color image.
  • the system further comprises a combination unit for creating the second image by combining a plurality of grayscale images into the color image. For example, two or three images may be combined into one color image using red (R), green (G), and/or blue (B) primary colors of the RGB color model.
  • the blended image visualizes information comprised in the plurality of grayscale images.
  • system according to the invention is comprised in an image acquisition apparatus.
  • system according to the invention is comprised in a workstation.
  • a method of blending a first image and a second image based on a reference image comprises: - a computation step for computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, the blend factor being computed based on a blend-factor map for mapping pixels of the reference image into a range of values; and a blend step for blending the pixel of the first image and the corresponding pixel of the second image, based on the computed blend factor.
  • a computer program product to be loaded by a computer arrangement comprises instructions for blending a first image and a second image, based on a reference image, the computer arrangement comprising a processing unit and a memory, the computer program product, after being loaded, providing said processing unit with the capability to carry out the following tasks: computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, the blend factor being computed based on a blend-factor map for mapping pixels of the reference image into a range of values; and blending the pixel of the first image and the corresponding pixel of the second image, based on the computed blend factor.
  • Fig. 1 schematically shows a block diagram of an exemplary embodiment of the system
  • Fig. 2 illustrates an embodiment of the system showing an anatomical image, a color fractional anisotropy image, and an exemplary blended image
  • Fig. 3 illustrates an embodiment of the system 100 showing an anatomical image, a perfusion image, an apparent diffusion coefficient image, and an exemplary blended image
  • Fig. 4 illustrates an embodiment of the system showing a grayscale goodness of fit image, a color cerebral blood volume image, and two exemplary blended images
  • Fig. 5 shows a flowchart of an exemplary implementation of the method
  • Fig. 6 schematically shows an exemplary embodiment of the image acquisition apparatus
  • Fig. 7 schematically shows an exemplary embodiment of the workstation.
  • Fig. 1 schematically shows a block diagram of an exemplary embodiment of the system 100 for blending a first image and a second image, based on a reference image, the system 100 comprising: a computation unit 120 for computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, wherein the blend factor is computed based on a blend-factor map for mapping pixels of the reference image into a range of values; and a blend unit 130 for blending the pixel of the first image and the corresponding pixel of the second image, based on the computed blend factor.
  • the exemplary embodiment of the system 100 further comprises the following units: a combination unit 110 for creating the second image by combining a plurality of grayscale images into a color image; a determination unit 115 for determining a value of at least one parameter defining the blend-factor map; a control unit 160 for controlling the workflow in the system 100; a user interface 165 for communicating with a user of the system 100; and a memory unit 170 for storing data.
  • the first input connector 181 is arranged to receive data coming in from a data storage means such as, but not limited to, a hard disk, a magnetic tape, a flash memory, or an optical disk.
  • the second input connector 182 is arranged to receive data coming in from a user input device such as, but not limited to, a mouse or a touch screen.
  • the third input connector 183 is arranged to receive data coming in from a user input device such as a keyboard.
  • the input connectors 181, 182 and 183 are connected to an input control unit 180.
  • the first output connector 191 is arranged to output the data to a data storage means such as a hard disk, a magnetic tape, a flash memory, or an optical disk.
  • the second output connector 192 is arranged to output the data to a display device.
  • the output connectors 191 and 192 receive the respective data via an output control unit 190.
  • the system 100 comprises a memory unit 170.
  • the system 100 is arranged to receive input data from external devices via any of the input connectors 181, 182, and 183 and to store the received input data in the memory unit 170.
  • the input data may comprise, for example, the first image, the second image, and the reference image.
  • the memory unit 170 may be implemented by devices such as, but not limited to, a Random Access Memory (RAM) chip, a Read Only Memory (ROM) chip, and/or a hard disk drive and a hard disk.
  • the memory unit 170 may be further arranged to store the output data.
  • the output data may comprise, for example, a blended image data.
  • the memory unit 170 is also arranged to receive data from and deliver data to the units of the system 100 comprising the computation unit 120, the blend unit 130, the determination unit 115, the combination unit 110, the control unit 160, and the user interface 165, via a memory bus 175.
  • the memory unit 170 is further arranged to make the output data available to external devices via any of the output connectors 191 and 192. Storing data from the units of the system 100 in the memory unit 170 may advantageously improve performance of the units of the system 100 as well as the rate of transfer of the output data from the units of the system 100 to external devices.
  • the system 100 may not comprise the memory unit 170 and the memory bus 175.
  • the input data used by the system 100 may be supplied by at least one external device, such as an external memory or a processor, connected to the units of the system 100.
  • the output data produced by the system 100 may be supplied to at least one external device, such as an external memory or a processor, connected to the units of the system 100.
  • the units of the system 100 may be arranged to receive the data from each other via internal connections or via a data bus.
  • in the exemplary embodiment of the system 100 shown in Fig. 1, the system 100 comprises a control unit 160 for controlling the workflow in the system 100.
  • the control unit may be arranged to receive control data from and provide control data to the units of the system 100.
  • the computation unit 120 may be arranged to pass a control data "blend factors computed" to the control unit 160 and the control unit 160 may be arranged to provide a control data "blend pixels of the first and second image" to the blend unit 130, requesting the blend unit 130 to blend pixels of the first and second image, using the predetermined number of computed blend factors.
  • a control function may be implemented in another unit of the system 100.
  • in the exemplary embodiment of the system 100 shown in Fig. 1, the system 100 comprises a user interface 165 for communicating with the user of the system 100.
  • the user interface 165 may be arranged to prompt a user for a user input, e.g. for a value of a parameter defining a blend-factor map, and to receive the user input.
  • the user interface may be further arranged for displaying the reference image, the first image, the second image, and the blended image.
  • the user interface may be arranged to receive a user input for selecting a mode of operation of the system 100, such as a mode for using the first image as the reference image.
  • a 2D image data, or simply an image, comprises image data elements.
  • Each image data element (i, j, I) comprises a location (i, j), typically represented by two Cartesian coordinates i, j in a display coordinate system, and at least one intensity value I at this location.
  • a 2D image data element may be interpreted as a pixel, i.e. a small area of a display, typically a square or a rectangle, described by a 2D location of the pixel, e.g. by a location (i, j) of a vertex or of the center of the pixel, and an intensity of the pixel, possibly a few color values in the case of color images.
  • a 3D image data sometimes referred to as an image, comprises elements, each data element (x, y, z, I) comprising a 3D location (x, y, z), typically represented by three Cartesian coordinates x, y, z in an image data coordinate system, and at least one intensity I at this location.
  • the 3D image data volume may be defined as a volume comprising all locations (x, y, z) comprised in the image data elements (x, y, z, I).
  • a data element may be interpreted as a 3D pixel, often referred to as a voxel.
  • a voxel is a small volume, typically a cube or a cuboid, at a location (x, y, z), which may be a location of a vertex or of the center of the voxel, for example.
  • the image volume may be interpreted as a union of all voxels.
  • the computation unit 120 of the system 100 is arranged for computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, the blend factor being computed based on a blend-factor map for mapping pixels of the reference image into a range of values.
  • the first image may be a first contrast scan image and the second image may be a second contrast scan image.
  • the reference image may be an anatomical image.
  • the pixel of the first image and the corresponding pixel of the second image have identical coordinates in their respective coordinate systems.
  • the coordinates of the blended pixels are assumed to be identical.
  • the skilled person will know how to map coordinates of pixels of the first image onto coordinates of pixels of the second image if they are different.
  • the skilled person will also know how to define the correspondence between pixel coordinates of the reference image and pixel coordinates of the first and second image.
  • the blend-factor map bfm assigns a value bfm(i, j, rf(i, j)) to a pixel (i, j, rf(i, j)) of the reference image, based on the pixel location (i, j) and/or on the pixel intensity rf(i, j).
  • the range of values is typically a closed interval [0, 1].
  • the blend-factor map may be predetermined. Alternatively, the blend-factor map may be selected from a plurality of predetermined blend-factor maps by the system 100, based on a user input or based on the reference image.
  • the blend factor may be computed based on a set of values bfm(i, j, rf(i, j)) of the blend factor map, e.g. as an average of these values.
  • the blend factor bf(i, j, rf(i, j)) may be different from the value of the blend-factor map bfm(i, j, rf(i, j)).
  • the computation unit 120 is arranged to compute a blend factor for blending a pixel of the first image and the corresponding pixel of the second image, based on a set of values bfm(i, j, rf(i, j)) of the blend factor map.
  • the blend factor bf(i, j, bfm) for blending pixels (i, j, img1(i, j)) and (i, j, img2(i, j)) may be a mean of the nine blend-factor map values bfm(i+k, j+l, rf(i+k, j+l)), where -1 ≤ k, l ≤ 1.
  • another possibility is to determine the blend factor bf(i, j, bfm) based on a mean intensity <rf(i, j)> of the reference image at the nine pixels rf(i+k, j+l), where -1 ≤ k, l ≤ 1.
  • the system 100 may be arranged to select a first component of the blend-factor map having a domain identical to the first region and a second component of the blend-factor map having a domain identical to the second region.
  • the selection may be based, for example, on the average intensity of the reference image in the first region and on the average intensity of the reference image in the second region. In a further embodiment, the selection may be based on a user input.
  • the blend unit 130 of the system 100 is arranged to blend the pixel of the first image and the corresponding pixel of the second image, based on the formula bf(i, j, bfm)*img1(i, j) + (1 - bf(i, j, bfm))*img2(i, j).
  • the blend-factor map depends only on intensities of pixels of the reference image.
  • a value bfm(i, j, rf(i, j)) of the blend-factor map does not depend on a coordinate (i, j) of the pixel (i, j, rf(i, j)) of the reference image.
  • the value bfm(i, j, rf(i, j)) of the blend-factor map depends only on an intensity rf(i, j) of the pixel (i, j, rf(i, j)) of the reference image.
  • this map may be determined based on the histogram of the reference image.
  • the blend factor value at a pixel may be computed on the basis of at least one of the color values rf_red(i, j), rf_green(i, j), and rf_blue(i, j).
  • the blend-factor map is a power of an affine map.
  • the affine map is determined by two parameters: a slope m and a y-intercept b.
  • the affine map transforms an intensity rf(i, j) of a pixel (i, j, rf(i, j)) of the reference image into a value m*rf(i, j) + b.
  • the affine map may be advantageously interpreted as a contrast adjustment and brightness adjustment of the reference image.
  • the slope m describes a contrast adjustment parameter and the y-intercept b describes a brightness adjustment parameter.
  • the power function is defined by an exponent n.
  • this value is the blend factor bf(i, j) at the pixel location (i, j).
  • (1/N) is a normalizing factor.
  • N = (m*rf_max + b)^n, where rf_max is the supremum of the range of intensities of the reference image.
  • the computation unit may be arranged to automatically carry out normalization of the power of the affine function. Other ways of normalizing the blend-factor values are also possible. The skilled person will understand that the scope of the claims is independent of the normalization method.
  • Fig. 2 illustrates an embodiment of the system 100 showing an anatomical image 201, a color fractional anisotropy (FA) image 202, and an exemplary blended image 203.
  • the anatomical image 201 is the first image and also a reference image.
  • the FA image 202 is the second image.
  • the intensity of the color FA image 202 acquired using diffusion tensor MR imaging (DTI), describes the amount of anisotropy in the diffusion of water in the brain. Low image intensity corresponds to isotropic diffusion and high image intensity corresponds to anisotropic diffusion.
  • the color describes the direction of the diffusion: a red color describes diffusion in the sagittal direction (left-right), a green color describes coronal diffusion (anterior/posterior), and a blue color describes axial diffusion (feet/head).
  • the anatomical image 201 shows high pixel intensities in certain anatomical areas, e.g. in white matter fiber bundles, which are highly anisotropic structures and represent an area of high clinical interest.
  • the anatomical image 201 further shows low pixel intensities in other areas, which may also be highly anisotropic, but are of lesser clinical interest.
  • the blend-factor map is a power of an affine map and is applied to the intensities of rendered pixels of the anatomical image.
  • the blended image 203 thus shows color information only in the areas that have high fractional anisotropy and still shows a good intensity for visualizing white matter fibers.
  • the system 100 further comprises a combination unit 110 for creating the second image by combining a plurality of grayscale medical images into a color image.
  • Fig. 3 illustrates an embodiment of the system 100 showing an anatomical image 301, which is the first image and also the reference image, a perfusion image 302, an apparent diffusion coefficient (ADC) image 303, and an exemplary blended image 304. All input images are grayscale images.
  • the anatomical image 301 is a fluid attenuation inversion recovery (FLAIR) image of the brain, which is bright in areas where a stroke occurred some time ago, medium dark in areas where stroke can occur or recently occurred, and dark in areas where stroke cannot occur.
  • the perfusion image 302 shows how well blood reaches the brain.
  • in the ADC image 303, the areas where a stroke has occurred are darker.
  • the combination unit 110 is arranged to create a color image, which is the second image, not shown in Fig. 3, using the perfusion image 302 and the ADC image 303.
  • the perfusion image 302 is assigned to the red channel and the ADC image 303 is assigned to the green channel of the combination unit 110, using an RGB color model.
  • another method of combining grayscale images into a color image may be also employed.
  • the color image is then combined by the blend unit 130 of the system 100 with the anatomical image 301.
  • the blend-factor map is an affine map, i.e. a power of an affine map with the exponent equal to 1.
  • the blended image shows all three contrasts to the user, who can make a diagnosis based on the displayed information. For example, a tissue showing an abnormality in the perfusion image combined with a normal ADC image and a normal FLAIR image indicates that the tissue is salvageable. Tissue showing an abnormality in the perfusion image, high diffusion in the ADC image, and normal FLAIR intensity is non-salvageable.
  • Fig. 4 illustrates an embodiment of the system showing a grayscale goodness of fit (GF) image 401, a color cerebral blood volume image (CBV) 402, and two exemplary blended images 403 and 404.
  • the GF image 401 is the first image and the reference image.
  • the color CBV image 402 is the second image.
  • To compute a grayscale GF image 401 and a grayscale CBV image, a patient is injected with a contrast bolus.
  • An MR T2* image is repeatedly acquired while the contrast bolus passes through the patient. From the acquired time-sequence of images, a curve showing varying pixel intensity as the bolus contrast passes through the brain can be obtained. From these curves one can calculate various images, e.g. the GF image and the CBV image.
  • the color CBV image 402 is obtained from a grayscale CBV image, using a color lookup table, which assigns a color to each value of intensity of the grayscale image.
  • the first exemplary blended image 403 and the second exemplary blended image 404 are obtained by blending the GF image 401 and the color CBV image 402, using the GF image as the reference image and using two different blend-factor maps.
  • the hue of the color of each pixel in the first blended image 403 and in the second blended image 404 is identical to the hue of the color of the CBV image.
  • the two blended images 403 and 404 illustrate the CBV.
  • the intensity of each pixel in the first blended image 403 and in the second blended image 404 is determined by the intensity of the respective pixel of the GF image.
  • this intensity illustrates the confidence that the CBV value at a pixel in the first blended image 403 and in the second blended image 404 is correct.
  • the system 100 described in the current document may be a valuable tool for assisting a physician in medical diagnosing and therapy planning, in particular interpreting and extracting information from medical images.
  • many other embodiments of the system 100 are also possible. It is possible, among other things, to redefine the units of the system and to redistribute their functions. For example, in an embodiment of the system 100, there can be a plurality of computation units replacing the computation unit 120. Each computation unit of the plurality of computation units may be arranged to employ a different method of computing the blend factors. The method employed by the system 100 may be based on a user selection.
  • the units of the system 100 may be implemented using a processor. Normally, their functions are performed under control of a software program product. During execution, the software program product is normally loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage means, or may be loaded via a network like the Internet. Optionally an application-specific integrated circuit may provide the described functionality.
  • Fig. 5 shows a flowchart of an exemplary implementation of the method 500 of blending a first image and a second image, based on a reference image. The method 500 has three possible entry points.
  • the first entry point is a combination step 510 for creating the second image by combining a plurality of grayscale images into a color image.
  • the method continues to a determination step 515 for determining a value of at least one parameter defining the blend-factor map or to a computation step 520 for computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, wherein the blend factor is computed based on a blend-factor map for mapping pixels of the reference image into a range of values.
  • the second entry point of the method 500 is the determination step 515.
  • the method continues to the computation step 520.
  • the computation step 520 is the third possible entry point of the method 500.
  • the blend factors for all pixels of the first image and of the second image, which are to be blended, are computed.
  • the method 500 continues to a blend step 530 for blending pixels of the first image and corresponding pixels of the second image, based on the computed blend factors.
  • the method 500 terminates.
  • the computation step 520 and the blend step 530 may be carried out concurrently.
  • blend factors for a group of pixels of the first image and of the second image are computed in the computation step 520, these pixels may be blended in the blend step 530 without waiting for computation of the blend factors for the remaining pixels of the first image and of the second image.
  • the combination step 510 may be carried out separately from other steps, at another point in time, possibly by another system.
  • the order of steps in the method 500 is not mandatory; the skilled person may change the order of some steps or perform some steps concurrently using threading models, multi-processor systems or multiple processes without departing from the concept as intended by the present invention.
  • two or more steps of the method 500 of the current invention may be combined into one step.
  • a step of the method 500 of the current invention may be split into a plurality of steps.
  • Fig. 6 schematically shows an exemplary embodiment of the image acquisition apparatus 600 employing the system 100, said image acquisition apparatus 600 comprising an image acquisition unit 610 connected via an internal connection with the system 100, an input connector 601, and an output connector 602.
  • This arrangement advantageously increases the capabilities of the image acquisition apparatus 600, providing said image acquisition apparatus 600 with advantageous capabilities of the system 100 for blending a first image and a second image, based on a reference image.
  • Examples of image acquisition apparatus comprise, but are not limited to, a CT system, an X-ray system, an MRI system, a US system, a PET system, a SPECT system, and a NM system.
  • Fig. 7 schematically shows an exemplary embodiment of the workstation 700.
  • the workstation comprises a system bus 701.
  • a processor 710, a memory 720, a disk input/output (I/O) adapter 730, and a user interface (UI) 740 are operatively connected to the system bus 701.
  • a disk storage device 731 is operatively coupled to the disk I/O adapter 730.
  • a keyboard 741, a mouse 742, and a display 743 are operatively coupled to the UI 740.
  • the system 100 of the invention, implemented as a computer program, is stored in the disk storage device 731.
  • the workstation 700 is arranged to load the program and input data into memory 720 and execute the program on the processor 710.
  • the user can input information to the workstation 700, using the keyboard 741 and/or the mouse 742.
  • the workstation is arranged to output information to the display device 743 and/or to the disk 731.
  • the skilled person will understand that there are numerous other embodiments of the workstation 700 known in the art and that the present embodiment serves the purpose of illustrating the invention and must not be interpreted as limiting the invention to this particular embodiment. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim.
  • the word “comprising” does not exclude the presence of elements or steps not listed in a claim or in the description.
  • the word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.
  • the invention can be implemented by means of hardware comprising several distinct elements and by means of a programmed computer. In the system claims enumerating several units, several of these units can be embodied by one and the same item of hardware or software.
  • the usage of the words first, second and third, et cetera does not indicate any ordering. These words are to be interpreted as names.

Abstract

The invention relates to a system (100) for blending a first image and a second image, based on a reference image, the system comprising a computation unit (120) and a blend unit (130). Typically, all three images describe the same area of human anatomy, e.g. the brain. The first and the second images may comprise scans of two different contrasts. The reference image may be an anatomical image of the brain. The computation unit (120) is arranged to compute a blend factor for each pair of pixels of the first image and of the second image, which are to be blended. The blend factor value is computed based on a blend-factor map for mapping pixels of the reference image into a range of values. The blend unit (130) is arranged to blend the first image and the second image, using the computed blend factors. The system (100) for blending the first image and the second image is capable of computing blend factors for blending the two images on a per-pixel basis. This allows for selectively showing important regions of the first image and of the second image in the blended image.

Description

Variable Alpha Blending of Anatomical Images with Functional Images
FIELD OF THE INVENTION
The invention relates to the field of visualizing image data and more specifically to blending multiple images.
BACKGROUND OF THE INVENTION
Blending multiple images allows visualizing these images in one image. European patent application EP 0790581 entitled "Method for alpha blending images utilizing a visual instruction set", hereinafter referred to as Ref. 1, describes blending a first source image src1 and a second source image src2 into a destination image dst, based on a control image cntl, also referred to as an alpha channel. The intensity cntl(i, j) of a pixel of a control image cntl at a location (i, j) comprises a blend factor, also referred to as alpha factor or just as alpha, for blending a corresponding pixel of the first source image src1 at the location (i, j) and a corresponding pixel of the second source image src2 at the location (i, j) to produce a corresponding pixel of the destination image dst at the location (i, j). The intensity dst(i, j) of the corresponding pixel of the destination image dst at the location (i, j) is computed according to the following formula: dst(i, j) = cntl(i, j)*src1(i, j) + (1 - cntl(i, j))*src2(i, j), where cntl(i, j), src1(i, j), and src2(i, j) are the intensities at the location (i, j) of the control image, of the first source image, and of the second source image, respectively. In principle, the control image is a mask, which assigns different degrees of visibility to different areas in the first source image and in the second source image. In the example described in Ref. 1, pixels representing a car in the first source image are assigned a blend factor equal to 1 and other pixels are assigned a blend factor equal to 0. As a result, the destination image represents the car of the first source image superposed on a background scene of the second source image. In practice, the control image pixel values near an edge of the car would have fractional values to make the edge formed by the car and the background scene appear more realistic.
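For illustration, the formula above can be sketched in a few lines of Python/NumPy; the array names mirror the notation of Ref. 1, and the object mask used in the example is hypothetical.

```python
import numpy as np

def alpha_blend(src1, src2, cntl):
    """Per-pixel alpha blending as described in Ref. 1:
    dst(i, j) = cntl(i, j)*src1(i, j) + (1 - cntl(i, j))*src2(i, j)."""
    cntl = np.clip(np.asarray(cntl, dtype=np.float64), 0.0, 1.0)  # blend factors in [0, 1]
    return cntl * src1 + (1.0 - cntl) * src2

# Hypothetical example: superpose an "object" region of src1 onto the background src2.
src1 = np.random.rand(8, 8)
src2 = np.random.rand(8, 8)
cntl = np.zeros((8, 8))
cntl[2:6, 2:6] = 1.0  # mask: 1 inside the object, 0 elsewhere
dst = alpha_blend(src1, src2, cntl)
```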
A limitation of the blending of multiple images described in Ref. 1 is that obtaining a control image based on a source image may be a tedious task involving delineating an object in the source image. This task has to be repeated for every new source image. In the medical realm, for example, a physician sometimes needs to analyze tens or hundreds of new medical images a day. Under such circumstances, constructing a control image for every set of images to be blended is not feasible. Therefore, blending medical images is carried out using one blend factor for all blended pixels.
SUMMARY OF THE INVENTION
It would be advantageous to have a system for blending multiple images that is capable of computing blend factors for blending multiple images on a per-pixel basis.
To address this concern, in an aspect of the invention, a system for blending a first image and a second image, based on a reference image, comprises: a computation unit for computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, the blend factor being computed based on a blend-factor map for mapping pixels of the reference image into a range of values; and a blend unit for blending the pixel of the first image and the corresponding pixel of the second image, based on the computed blend factor.
Typically, the first image, the second image, and the reference image describe the same area of human anatomy, e.g. the brain. The first image may represent a scan of a first contrast and the second image may represent a scan of a second contrast. The reference image typically is an image related to the first image and/or to the second image and is used for computing an optimal blend factor for blending pixels of the first image and the second image. For example, the reference image may be an anatomical image of the brain. The computation unit is arranged to compute a blend factor for each pair of pixels of the first image and of the second image, which are to be blended. The blend-factor value is computed based on a blend-factor map for mapping pixels of the reference image into a range of values, typically a closed interval [0, 1]. The blend-factor map may be predetermined or may be determined based on a user input, for example. The computed blend-factor value may depend on a location of a corresponding pixel of the reference image, i.e. a pixel corresponding to the blended pixels of the first image and the second image, on an intensity of the corresponding pixel of the reference image and/or on intensities of pixels of the reference image surrounding the corresponding pixel of the reference image. The blend unit is arranged to blend the first image and the second image, using the computed blend factors. Hence, the system for blending the first image and the second image is capable of computing blend factors for blending the two images on a per-pixel basis. This allows for selectively showing important regions of the first image and of the second image in the blended image.
In an embodiment of the system, the blend-factor map depends only on the intensities of pixels of the reference image. Here, at each pixel of the reference image, the blend-factor map does not depend on a location of the pixel. Thus, the blend-factor map can be defined based on a map of a range of intensities of the reference image into a range of blend-factor values. Advantageously, this map may be determined based on the histogram of the reference image.
In an embodiment of the system, the blend-factor map is a power of an affine map. The affine map, determined by two parameters, a slope and a y-intercept, maps an intensity of a pixel of the reference image into the range of blend-factor values. The affine map may be advantageously interpreted as a map for adjusting contrast and brightness of the reference image and mapping the contrast-adjusted and brightness-adjusted reference image into the range of blend-factor values. The slope describes a contrast adjustment parameter and the y-intercept describes a brightness adjustment parameter. The power function is defined by an exponent. The main advantage of this simple blend-factor map is that the definition of this blend-factor map is very intuitive. Thus, the blend-factor map generates predictable blending results.
In an embodiment of the system, the system further comprises a determination unit for determining a value of at least one parameter defining the blend-factor map. The determination unit may be arranged to accept a user input, and to determine a parameter of the blend-factor map based on the user input. For example, the user input may comprise a value of the slope, a value of the y-intercept, and a value of the exponent for determining the power of the affine map. Alternatively, a value of a parameter of the blend-factor map may be computed by the system, based on the reference image.
In an embodiment of the system, the reference image is the first image. This allows blending the reference image and the second image.
In an embodiment of the system, the first image is a grayscale image and the second image is a color image. The blended image is a color image. Each color component of the blended image is obtained by blending the grayscale image and a respective color component of the color image using the blend-factor map based on a grayscale reference image. The embodiment allows blending a grayscale image and a color image.
In an embodiment of the system, the system further comprises a combination unit for creating the second image by combining a plurality of grayscale images into the color image. For example, two or three images may be combined into one color image using red (R), green (G), and/or blue (B) primary colors of the RGB color model. Thus, the blended image visualizes information comprised in the plurality of grayscale images.
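A minimal sketch of this combination step, together with the per-channel blending of a grayscale first image with a color second image described in the two preceding embodiments, might look as follows; it assumes NumPy arrays with intensities normalized to [0, 1], and the function names and channel assignments are illustrative, not taken from the patent.

```python
import numpy as np

def combine_to_rgb(red=None, green=None, blue=None):
    """Combine up to three grayscale images into one RGB color image of shape (H, W, 3)."""
    channels = [red, green, blue]
    shape = next(c.shape for c in channels if c is not None)
    return np.stack([c if c is not None else np.zeros(shape) for c in channels], axis=-1)

def blend_gray_with_color(gray_img, color_img, blend_factors):
    """Blend each color component of color_img with the grayscale image gray_img,
    using the same per-pixel blend factor for all three components."""
    bf = blend_factors[..., np.newaxis]  # broadcast the blend factors over the color axis
    return bf * gray_img[..., np.newaxis] + (1.0 - bf) * color_img

# Hypothetical usage: two grayscale functional images combined into a color image,
# then blended with a grayscale anatomical image that also serves as the reference.
functional_a = np.random.rand(64, 64)
functional_b = np.random.rand(64, 64)
anatomical = np.random.rand(64, 64)
color_img = combine_to_rgb(red=functional_a, green=functional_b)
blended = blend_gray_with_color(anatomical, color_img, blend_factors=anatomical)  # identity blend-factor map
```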
In a further aspect of the invention, the system according to the invention is comprised in an image acquisition apparatus.
In a further aspect of the invention, the system according to the invention is comprised in a workstation.
In a further aspect of the invention, a method of blending a first image and a second image based on a reference image comprises: - a computation step for computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, the blend factor being computed based on a blend-factor map for mapping pixels of the reference image into a range of values; and a blend step for blending the pixel of the first image and the corresponding pixel of the second image, based on the computed blend factor.
In a further aspect of the invention, a computer program product to be loaded by a computer arrangement comprises instructions for blending a first image and a second image, based on a reference image, the computer arrangement comprising a processing unit and a memory, the computer program product, after being loaded, providing said processing unit with the capability to carry out the following tasks: computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, the blend factor being computed based on a blend-factor map for mapping pixels of the reference image into a range of values; and blending the pixel of the first image and the corresponding pixel of the second image, based on the computed blend factor.
Modifications and variations thereof, of the image acquisition apparatus, of the workstation, of the method, and/or of the computer program product, which correspond to the described modifications of the system and variations thereof, can be carried out by a skilled person on the basis of the present description. The skilled person will appreciate that the method may be applied to planar, i.e. two-dimensional (2D) images, and to volumetric, i.e. three-dimensional (3D), images, based on image data acquired by various acquisition modalities such as, but not limited to, standard X-ray, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and Nuclear Medicine (NM).
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention will become apparent from and will be elucidated with respect to the implementations and embodiments described hereinafter and with reference to the accompanying drawings, wherein:
Fig. 1 schematically shows a block diagram of an exemplary embodiment of the system; Fig. 2 illustrates an embodiment of the system showing an anatomical image, a color fractional anisotropy image, and an exemplary blended image;
Fig. 3 illustrates an embodiment of the system 100 showing an anatomical image, a perfusion image, an apparent diffusion coefficient image, and an exemplary blended image; Fig. 4 illustrates an embodiment of the system showing a grayscale goodness of fit image, a color cerebral blood volume image, and two exemplary blended images; Fig. 5 shows a flowchart of an exemplary implementation of the method; Fig. 6 schematically shows an exemplary embodiment of the image acquisition apparatus; and Fig. 7 schematically shows an exemplary embodiment of the workstation.
The same reference numerals are used to denote similar parts throughout the Figures.
DETAILED DESCRIPTION OF EMBODIMENTS
Fig. 1 schematically shows a block diagram of an exemplary embodiment of the system 100 for blending a first image and a second image, based on a reference image, the system 100 comprising: a computation unit 120 for computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, wherein the blend factor is computed based on a blend-factor map for mapping pixels of the reference image into a range of values; and a blend unit 130 for blending the pixel of the first image and the corresponding pixel of the second image, based on the computed blend factor. The exemplary embodiment of the system 100 further comprises the following units: a combination unit 110 for creating the second image by combining a plurality of grayscale images into a color image; a determination unit 115 for determining a value of at least one parameter defining the blend-factor map; a control unit 160 for controlling the workflow in the system 100; a user interface 165 for communicating with a user of the system 100; and a memory unit 170 for storing data.
In the exemplary embodiment of the system 100, there are three input connectors 181, 182 and 183 for the incoming data. The first input connector 181 is arranged to receive data coming in from a data storage means such as, but not limited to, a hard disk, a magnetic tape, a flash memory, or an optical disk. The second input connector 182 is arranged to receive data coming in from a user input device such as, but not limited to, a mouse or a touch screen. The third input connector 183 is arranged to receive data coming in from a user input device such as a keyboard. The input connectors 181, 182 and 183 are connected to an input control unit 180.
In the exemplary embodiment of the system 100, there are two output connectors 191 and 192 for the outgoing data. The first output connector 191 is arranged to output the data to a data storage means such as a hard disk, a magnetic tape, a flash memory, or an optical disk. The second output connector 192 is arranged to output the data to a display device. The output connectors 191 and 192 receive the respective data via an output control unit 190.
The skilled person will understand that there are many ways to connect input devices to the input connectors 181, 182 and 183 and the output devices to the output connectors 191 and 192 of the system 100. These ways comprise, but are not limited to, a wired and a wireless connection, a digital network such as, but not limited to, a Local Area Network (LAN) and a Wide Area Network (WAN), the Internet, a digital telephone network, and an analog telephone network. In the exemplary embodiment of the system 100, the system 100 comprises a memory unit 170. The system 100 is arranged to receive input data from external devices via any of the input connectors 181, 182, and 183 and to store the received input data in the memory unit 170. Loading the input data into the memory unit 170 allows quick access to relevant data portions by the units of the system 100. The input data may comprise, for example, the first image, the second image, and the reference image. The memory unit 170 may be implemented by devices such as, but not limited to, a Random Access Memory (RAM) chip, a Read Only Memory (ROM) chip, and/or a hard disk drive and a hard disk. The memory unit 170 may be further arranged to store the output data. The output data may comprise, for example, a blended image data. The memory unit 170 is also arranged to receive data from and deliver data to the units of the system 100 comprising the computation unit 120, the blend unit 130, the determination unit 115, the combination unit 110, the control unit 160, and the user interface 165, via a memory bus 175. The memory unit 170 is further arranged to make the output data available to external devices via any of the output connectors 191 and 192. Storing data from the units of the system 100 in the memory unit 170 may advantageously improve performance of the units of the system 100 as well as the rate of transfer of the output data from the units of the system 100 to external devices.
Alternatively, the system 100 may not comprise the memory unit 170 and the memory bus 175. The input data used by the system 100 may be supplied by at least one external device, such as an external memory or a processor, connected to the units of the system 100. Similarly, the output data produced by the system 100 may be supplied to at least one external device, such as an external memory or a processor, connected to the units of the system 100. The units of the system 100 may be arranged to receive the data from each other via internal connections or via a data bus. In the exemplary embodiment of the system 100 shown in Fig. 1, the system
100 comprises a control unit 160 for controlling the workflow in the system 100. The control unit may be arranged to receive control data from and provide control data to the units of the system 100. For example, after computing a predetermined number of blend factors, the computation unit 120 may be arranged to pass a control data "blend factors computed" to the control unit 160 and the control unit 160 may be arranged to provide a control data "blend pixels of the first and second image" to the blend unit 130, requesting the blend unit 130 to blend pixels of the first and second image, using the predetermined number of computed blend factors. Alternatively, a control function may be implemented in another unit of the system 100. In the exemplary embodiment of the system 100 shown in Fig. 1, the system
100 comprises a user interface 165 for communicating with the user of the system 100. The user interface 165 may be arranged to prompt a user for a user input, e.g. for a value of a parameter defining a blend-factor map, and to receive the user input. The user interface may be further arranged for displaying the reference image, the first image, the second image, and the blended image. Optionally, the user interface may be arranged to receive a user input for selecting a mode of operation of the system 100, such as a mode for using the first image as the reference image. The skilled person will understand that more functions may be advantageously implemented in the user interface 165 of the system 100.
A 2D image data, or simply an image, comprises image data elements. Each image data element (i, j, I) comprises a location (i, j), typically represented by two Cartesian coordinates i, j in a display coordinate system, and at least one intensity value I at this location. The skilled person will understand that a 2D image data element may be interpreted as a pixel, i.e. a small area of a display, typically a square or a rectangle, described by a 2D location of the pixel, e.g. by a location (i, j) of a vertex or of the center of the pixel, and an intensity of the pixel, possibly a few color values in the case of color images.
A 3D image data, sometimes referred to as an image, comprises elements, each data element (x, y, z, I) comprising a 3D location (x, y, z), typically represented by three Cartesian coordinates x, y, z in an image data coordinate system, and at least one intensity I at this location. The 3D image data volume may be defined as a volume comprising all locations (x, y, z) comprised in the image data elements (x, y, z, I). A data element may be interpreted as a 3D pixel, often referred to as a voxel. A voxel is a small volume, typically a cube or a cuboid, at a location (x, y, z), which may be a location of a vertex or of the center of the voxel, for example. The image volume may be interpreted as a union of all voxels. Although the system 100 is described using 2D images for illustrating the embodiments of the system 100, the skilled person will understand that blending 3D image data is also contemplated. The exemplary embodiments should not be construed as limiting the scope of the claims.
The computation unit 120 of the system 100 is arranged for computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, the blend factor being computed based on a blend-factor map for mapping pixels of the reference image into a range of values. The first image may be a first contrast scan image and the second image may be a second contrast scan image. The reference image may be an anatomical image. Typically, the pixel of the first image and the corresponding pixel of the second image have identical coordinates in their respective coordinate systems. Hereinafter, the coordinates of the blended pixels are assumed to be identical. The skilled person will know how to map coordinates of pixels of the first image onto coordinates of pixels of the second image if they are different. The skilled person will also know how to define the correspondence between pixel coordinates of the reference image and pixel coordinates of the first and second image.
The blend-factor map bfm assigns a value bfm(i, j, rf(i, j)) to a pixel (i, j, rf(i, j)) of the reference image, based on the pixel location (i, j) and/or on the pixel intensity rf(i, j). The range of values is typically a closed interval [0, 1]. The blend-factor map may be predetermined. Alternatively, the blend-factor map may be selected from a plurality of predetermined blend-factor maps by the system 100, based on a user input or based on the reference image. The value bfm(i, j, rf(i, j)) of the blend-factor map may be the blend factor bf(i, j, bfm) for blending a pixel (i, j, img1(i, j)) of the first image and a pixel (i, j, img2(i, j)) of the second image, i.e. bf(i, j, bfm) = bfm(i, j, rf(i, j)), where img1(i, j) and img2(i, j) denote the intensities of the first image and of the second image, respectively.
Alternatively, in an embodiment of the system, the blend factor may be computed based on a set of values bfm(i, j, rf(i, j)) of the blend-factor map, e.g. as an average of these values. Here the blend factor bf(i, j, rf(i, j)) may be different from the value of the blend-factor map bfm(i, j, rf(i, j)). The computation unit 120 is arranged to compute a blend factor for blending a pixel of the first image and the corresponding pixel of the second image, based on a set of values bfm(i, j, rf(i, j)) of the blend-factor map. This embodiment may be especially useful for noisy images, wherein intensities of individual pixels are distorted by noise. For example, the blend factor bf(i, j, bfm) for blending pixels (i, j, img1(i, j)) and (i, j, img2(i, j)) may be a mean of the nine blend-factor map values bfm(i+k, j+l, rf(i+k, j+l)), where -1 ≤ k, l ≤ 1. Another possibility is to determine the blend factor bf(i, j, bfm) based on a mean intensity <rf(i, j)> of the reference image at the nine pixels rf(i+k, j+l), where -1 ≤ k, l ≤ 1.
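The 3x3 neighborhood averaging described above might be implemented as sketched below; scipy.ndimage.uniform_filter is used here purely as one convenient choice, and replicated edge handling for border pixels is an assumption of the sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def blend_factors_from_bfm_mean(bfm_values):
    """bf(i, j) = mean of the nine blend-factor map values bfm(i+k, j+l, rf(i+k, j+l)),
    with -1 <= k, l <= 1; border pixels reuse the nearest edge values."""
    return uniform_filter(np.asarray(bfm_values, dtype=np.float64), size=3, mode='nearest')

def blend_factors_from_mean_intensity(rf, bfm):
    """Alternative: average the reference intensities <rf(i, j)> first, then apply the map bfm."""
    return bfm(uniform_filter(np.asarray(rf, dtype=np.float64), size=3, mode='nearest'))
```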
In an embodiment, there may be a first region of the reference image and a second region of the reference image, and the system 100 may be arranged to select a first component of the blend-factor map having a domain identical to the first region and a second component of the blend-factor map having a domain identical to the second region. The selection may be based, for example, on the average intensity of the reference image in the first region and on the average intensity of the reference image in the second region. In a further embodiment, the selection may be based on a user input.
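As a hedged illustration of this embodiment, the sketch below applies one blend-factor map component inside a first region and another inside a second region; the mask and the two component functions are assumptions introduced for the example, not part of the patent:

```python
import numpy as np

def blend_factors_by_region(rf: np.ndarray,
                            first_region_mask: np.ndarray,
                            bfm_first, bfm_second) -> np.ndarray:
    """Evaluate a region-dependent blend-factor map.

    first_region_mask is True for pixels of the first region and False for
    the second region; bfm_first and bfm_second map reference intensities
    to blend factors in [0, 1].
    """
    bf = np.where(first_region_mask, bfm_first(rf), bfm_second(rf))
    return np.clip(bf, 0.0, 1.0)
```

In practice the two components could be chosen automatically, for example by comparing the mean reference intensity inside each region against a threshold, or directly by the user.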
When a blend factor value bf(i, j, bfm) for blending a pixel (i, j, img1(i, j)) of the first image and a corresponding pixel (i, j, img2(i, j)) of the second image has been computed, the blend unit 130 of the system 100 is arranged to blend the pixel of the first image and the corresponding pixel of the second image, based on the formula bf(i, j, bfm)*img1(i, j) + (1 - bf(i, j, bfm))*img2(i, j). The actual computation may be carried out by a general-purpose processor or by a dedicated processor of the system 100 such as a graphics processor.

In an embodiment of the system, the blend-factor map depends only on intensities of pixels of the reference image. Here, a value bfm(i, j, rf(i, j)) of the blend-factor map does not depend on the coordinates (i, j) of the pixel (i, j, rf(i, j)) of the reference image. The value bfm(i, j, rf(i, j)) of the blend-factor map depends only on the intensity rf(i, j) of the pixel (i, j, rf(i, j)) of the reference image. Thus, the blend-factor map can be defined based on a map bfm of a range of intensities into the range of values, bfm(rf(i, j)) = bfm(i, j, rf(i, j)), which is hereinafter also referred to as the blend density map and denoted bfm. Advantageously, this map may be determined based on the histogram of the reference image. When the reference image is a color image, the blend factor value at a pixel (i, j, rfred(i, j), rfgreen(i, j), rfblue(i, j)) may be computed on the basis of at least one of the color values rfred(i, j), rfgreen(i, j), and rfblue(i, j).
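A minimal sketch of the blending formula above, assuming the two images are already registered and stored as numpy arrays of identical shape (the function name is illustrative):

```python
import numpy as np

def blend_images(img1: np.ndarray, img2: np.ndarray, bf: np.ndarray) -> np.ndarray:
    """Pixel-wise alpha blend: bf*img1 + (1 - bf)*img2.

    img1, img2 : images of identical shape (H x W grayscale or H x W x 3 colour)
    bf         : per-pixel blend factors in [0, 1], e.g. bfm(rf(i, j))
    """
    bf = np.clip(np.asarray(bf, dtype=np.float64), 0.0, 1.0)
    if img1.ndim == 3 and bf.ndim == 2:
        bf = bf[..., None]   # broadcast one factor over the colour channels
    return bf * img1 + (1.0 - bf) * img2
```

When a grayscale image is blended with a colour image, the grayscale image would first be replicated over the three colour channels so that both inputs have the same shape.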
In an embodiment of the system, the blend-factor map is a power of an affine map. The affine map is determined by two parameters: a slope m and a y-intercept b. The affine map transforms an intensity rf(i, j) of a pixel (i, j, rf(i, j)) of the reference image into a value m*rf(i, j) + b. The affine map may be advantageously interpreted as a contrast adjustment and brightness adjustment of the reference image. The slope m describes a contrast adjustment parameter and the y-intercept b describes a brightness adjustment parameter. The power function is defined by an exponent n. The blend-factor map at a pixel (i, j, rf(i, j)) of the reference image may be computed using the following formula: bfm(rf(i, j)) = (1/N)*(m*rf(i, j) + b)^n. Typically, this value is the blend factor bf(i, j) at the pixel location (i, j). Here (1/N) is a normalizing factor. Since a power of an affine function is a monotone increasing function for m > 0 and n > 0, the normalization constant N may be defined as N = (m*rf_max + b)^n, where rf_max is the supremum of the range of intensities of the reference image. The normalization factor may be included in the definition of the slope and of the y-intercept: m' = m/N^(1/n) and b' = b/N^(1/n). The blend-factor map can then be rewritten as bfm(rf(i, j)) = (m'*rf(i, j) + b')^n. The computation unit may be arranged to automatically carry out normalization of the power of the affine function. Other ways of normalizing the blend-factor values are also possible. The skilled person will understand that the scope of the claims is independent of the normalization method.
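A minimal sketch of this normalized power-of-affine blend-factor map, assuming numpy arrays of reference intensities (the function and parameter names are illustrative):

```python
import numpy as np
from typing import Optional

def power_affine_bfm(rf: np.ndarray, m: float, b: float, n: float,
                     rf_max: Optional[float] = None) -> np.ndarray:
    """bfm(rf) = (m*rf + b)^n / N with N = (m*rf_max + b)^n.

    m is the contrast (slope), b the brightness (y-intercept), n the
    exponent; rf_max is the supremum of the reference intensity range.
    """
    rf = np.asarray(rf, dtype=np.float64)
    if rf_max is None:
        rf_max = float(rf.max())
    affine = np.clip(m * rf + b, 0.0, None)   # keep the base non-negative
    norm = (m * rf_max + b) ** n
    return np.clip(affine ** n / norm, 0.0, 1.0)
```

Clipping the affine term and the final result to [0, 1] is an added safeguard for parameter choices where the affine map leaves that range; the description above only requires the normalization.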
Fig. 2 illustrates an embodiment of the system 100 showing an anatomical image 201, a color fractional anisotropy (FA) image 202, and an exemplary blended image 203. The anatomical image 201 is the first image and also a reference image. The FA image 202 is the second image. The intensity of the color FA image 202, acquired using diffusion tensor MR imaging (DTI), describes the amount of anisotropy in the diffusion of water in the brain. Low image intensity corresponds to isotropic diffusion and high image intensity corresponds to anisotropic diffusion. The color describes the direction of the diffusion: a red color describes diffusion in the sagittal direction (left-right), a green color describes coronal diffusion (anterior/posterior), and a blue color describes axial diffusion (feet/head). The anatomical image 201 shows high pixel intensities in certain anatomical areas, e.g. in white matter fiber bundles, which are highly anisotropic structures and represent an area of high clinical interest. The anatomical image 201 further shows low pixel intensities in other areas, which may also be highly anisotropic, but are of lesser clinical interest. In the case illustrated in Fig. 2, the blend-factor map is a power of an affine map and is applied to the intensities of rendered pixels of the anatomical image. The slope m = 1, the y-intercept b = 0, and the exponent n = 2 are set by the user. The blend factor values are equal to the respective values of the blend-factor map, i.e. bf(i, j) = bfm(rf(i, j)). The blended image 203 thus shows color information only in the areas that have high fractional anisotropy and still shows a good intensity for visualizing white matter fibers.
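As a concrete, hedged illustration of the Fig. 2 parameters (m = 1, b = 0, n = 2), the snippet below derives the blend factors from a normalized anatomical image and blends it with a colour FA image; the array names and the normalization of both inputs to [0, 1] are assumptions made for the example:

```python
import numpy as np

def blend_fa_example(anatomical: np.ndarray, color_fa: np.ndarray) -> np.ndarray:
    """Blend a grayscale anatomical image (values in [0, 1]) with a colour
    FA image of shape (H, W, 3) using bf = anatomical**2 (m = 1, b = 0, n = 2)."""
    bf = np.clip(anatomical, 0.0, 1.0) ** 2
    bf = bf[..., None]                                   # broadcast over RGB
    anat_rgb = np.repeat(anatomical[..., None], 3, axis=-1)
    return bf * anat_rgb + (1.0 - bf) * color_fa
```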
In an embodiment of the system 100, the system 100 further comprises a combination unit 110 for creating the second image by combining a plurality of grayscale medical images into a color image. This embodiment is explained with reference to Fig. 3. Fig. 3 illustrates an embodiment of the system 100 showing an anatomical image 301, which is the first image and also the reference image, a perfusion image 302, an apparent diffusion coefficient (ADC) image 303, and an exemplary blended image 304. All input images are grayscale images. The anatomical image 301 is a fluid attenuation inversion recovery (FLAIR) image of the brain, which is bright in areas where a stroke occurred some time ago, medium dark in areas where a stroke can occur or recently occurred, and dark in areas where a stroke cannot occur. The perfusion image 302 shows how well blood reaches the brain. In the ADC image 303, the areas where a stroke has occurred are darker. First, the combination unit 110 is arranged to create a color image, which is the second image, not shown in Fig. 3, using the perfusion image 302 and the ADC image 303. The perfusion image 302 is assigned to the red channel and the ADC image 303 is assigned to the green channel of the combination unit 110, using an RGB color model. Alternatively, another method of combining grayscale images into a color image may also be employed. The color image is then combined by the blend unit 130 of the system 100 with the anatomical image 301. The blend-factor map is an affine map, i.e. a power of an affine map with the exponent n = 1, and is applied to the intensities of rendered pixels of the anatomical image. The slope m = 1 and the y-intercept b = 0 are set by the user. The blend factor values are equal to the respective values of the blend-factor map, i.e. bf(i, j) = bfm(rf(i, j)). The blended image shows all three contrasts to the user, who can make a diagnosis based on the displayed information. For example, a tissue showing an abnormality in the perfusion image combined with a normal ADC image and a normal FLAIR image indicates that the tissue is salvageable. Tissue showing an abnormality in the perfusion image, high diffusion in the ADC image, and normal FLAIR intensity is non-salvageable.
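A minimal sketch of the channel combination performed by the combination unit, assuming the grayscale inputs are normalized to [0, 1] and share one shape (the function name and the optional blue channel are illustrative assumptions):

```python
import numpy as np
from typing import Optional

def combine_to_rgb(red: np.ndarray, green: np.ndarray,
                   blue: Optional[np.ndarray] = None) -> np.ndarray:
    """Combine grayscale images into one RGB colour image.

    In the Fig. 3 example the perfusion image would be passed as the red
    channel and the ADC image as the green channel; the blue channel stays
    empty when no third image is supplied.
    """
    if blue is None:
        blue = np.zeros_like(red)
    return np.stack([red, green, blue], axis=-1)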
Fig. 4 illustrates an embodiment of the system showing a grayscale goodness-of-fit (GF) image 401, a color cerebral blood volume (CBV) image 402, and two exemplary blended images 403 and 404. The GF image 401 is the first image and the reference image. The color CBV image 402 is the second image. To compute a grayscale GF image 401 and a grayscale CBV image, a patient is injected with a contrast bolus. An MR T2* image is repeatedly acquired while the contrast bolus passes through the patient. From the acquired time-sequence of images, a curve showing the varying pixel intensity as the contrast bolus passes through the brain can be obtained. From these curves one can calculate various images, e.g. said grayscale CBV image and said grayscale GF image 401. A bright pixel in the GF image 401 corresponds to a good fit of a curve and a dark pixel corresponds to a poor fit of a curve. The color CBV image 402 is obtained from a grayscale CBV image, using a color lookup table, which assigns a color to each value of intensity of the grayscale image. The first exemplary blended image 403 and the second exemplary blended image 404 are obtained by blending the GF image 401 and the color CBV image 402, using the GF image as the reference image and using two different blend-factor maps. The first blend-factor map for computing the first blended image 403 is an affine map with the slope m = 1 and the y-intercept b = 0. The second blend-factor map for computing the second blended image 404 is an affine map with the slope m = 2 and the y-intercept b = 0.15. The blend-factor values are equal to the respective values of the blend-factor map, i.e. bf(i, j) = bfm(rf(i, j)). The hue of the color of each pixel in the first blended image 403 and in the second blended image 404 is identical to the hue of the color of the CBV image. Hence, the two blended images 403 and 404 illustrate the CBV. The intensity of each pixel in the first blended image 403 and in the second blended image 404 is determined by the intensity of the respective pixel of the GF image. Hence, this intensity illustrates the confidence that the CBV value at a pixel in the first blended image 403 and in the second blended image 404 is correct. The higher the intensity, the higher the confidence level. The skilled person will understand that the system 100 described in the current document may be a valuable tool for assisting a physician in medical diagnosis and therapy planning, in particular in interpreting and extracting information from medical images.
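A minimal sketch of the colour lookup-table step used to turn the grayscale CBV image into the colour CBV image; the table shape and the integer intensity levels are assumptions made for the example:

```python
import numpy as np

def apply_color_lut(gray: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map a grayscale image to colour through a lookup table.

    gray : grayscale image with integer intensity levels
    lut  : array of shape (levels, 3) giving an RGB colour per level
    """
    idx = np.clip(gray.astype(np.int64), 0, lut.shape[0] - 1)
    return lut[idx]
```

Assuming the GF intensities are normalized to [0, 1] and the results are clipped to [0, 1], the two blend-factor maps of Fig. 4 would then be the affine maps bf = clip(1.0*gf + 0.0, 0, 1) and bf = clip(2.0*gf + 0.15, 0, 1).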
The skilled person will further understand that other embodiments of the system 100 are also possible. It is possible, among other things, to redefine the units of the system and to redistribute their functions. For example, in an embodiment of the system 100, there can be a plurality of computation units replacing the computation unit 120. Each computation unit of the plurality of computation units may be arranged to employ a different method of computing the blend factors. The method employed by the system 100 may be based on a user selection.
The units of the system 100 may be implemented using a processor. Normally, their functions are performed under the control of a software program product. During execution, the software program product is normally loaded into a memory, such as a RAM, and executed from there. The program may be loaded from a background memory, such as a ROM, hard disk, or magnetic and/or optical storage means, or may be loaded via a network like the Internet. Optionally, an application-specific integrated circuit may provide the described functionality.

Fig. 5 shows a flowchart of an exemplary implementation of the method 500 of blending a first image and a second image, based on a reference image. The method 500 has three possible entry points. The first entry point is a combination step 510 for creating the second image by combining a plurality of grayscale images into a color image. After the combination step 510 the method continues to a determination step 515 for determining a value of at least one parameter defining the blend-factor map, or to a computation step 520 for computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, wherein the blend factor is computed based on a blend-factor map for mapping pixels of the reference image into a range of values. The second entry point of the method 500 is the determination step 515. After the determination step 515 the method continues to the computation step 520. The computation step 520 is the third possible entry point of the method 500. In the computation step 520, the blend factors for all pixels of the first image and of the second image which are to be blended are computed. After the computation step 520 the method 500 continues to a blend step 530 for blending pixels of the first image and corresponding pixels of the second image, based on the computed blend factors. After blending all pixels of the first image and of the second image which are to be blended in the blend step 530, the method 500 terminates. Optionally, the computation step 520 and the blend step 530 may be carried out concurrently. For example, when blend factors for a group of pixels of the first image and of the second image have been computed in the computation step 520, these pixels may be blended in the blend step 530 without waiting for computation of the blend factors for the remaining pixels of the first image and of the second image. The combination step 510 may be carried out separately from the other steps, at another point in time, possibly by another system.
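The sketch below strings the computation step and the blend step together for the intensity-based, power-of-affine case; it is only an illustration under the assumptions of the earlier snippets, and the parameter-determination step 515 is represented simply by the m, b, n arguments:

```python
import numpy as np

def blend_method(img1: np.ndarray, img2: np.ndarray, rf: np.ndarray,
                 m: float = 1.0, b: float = 0.0, n: float = 1.0) -> np.ndarray:
    """Compute blend factors from the reference image, then blend img1 and img2."""
    # computation step: bf(i, j) = ((m*rf + b) / (m*rf_max + b))^n, clipped to [0, 1]
    rf = np.asarray(rf, dtype=np.float64)
    rf_max = float(rf.max()) if rf.max() > 0 else 1.0
    bf = np.clip((m * rf + b) / (m * rf_max + b), 0.0, 1.0) ** n
    # blend step: bf*img1 + (1 - bf)*img2
    if img2.ndim == 3 and img1.ndim == 2:
        img1 = np.repeat(img1[..., None], 3, axis=-1)   # grayscale first image over RGB
    if img1.ndim == 3 and bf.ndim == 2:
        bf = bf[..., None]
    return bf * img1 + (1.0 - bf) * img2
```

The combination step 510 is omitted here because it can be carried out separately, as noted above; its output would simply be passed in as img2.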
The order of steps in the method 500 is not mandatory; the skilled person may change the order of some steps or perform some steps concurrently using threading models, multi-processor systems or multiple processes without departing from the concept as intended by the present invention. Optionally, two or more steps of the method 500 of the current invention may be combined into one step. Optionally, a step of the method 500 of the current invention may be split into a plurality of steps.
Fig. 6 schematically shows an exemplary embodiment of the image acquisition apparatus 600 employing the system 100, said image acquisition apparatus 600 comprising an image acquisition unit 610 connected via an internal connection to the system 100, an input connector 601, and an output connector 602. This arrangement advantageously increases the capabilities of the image acquisition apparatus 600, providing said image acquisition apparatus 600 with the advantageous capabilities of the system 100 for blending a first image and a second image, based on a reference image. Examples of image acquisition apparatus comprise, but are not limited to, a CT system, an X-ray system, an MRI system, a US system, a PET system, a SPECT system, and an NM system.
Fig. 7 schematically shows an exemplary embodiment of the workstation 700. The workstation comprises a system bus 701. A processor 710, a memory 720, a disk input/output (I/O) adapter 730, and a user interface (UI) 740 are operatively connected to the system bus 701. A disk storage device 731 is operatively coupled to the disk I/O adapter 730. A keyboard 741, a mouse 742, and a display 743 are operatively coupled to the UI 740. The system 100 of the invention, implemented as a computer program, is stored in the disk storage device 731. The workstation 700 is arranged to load the program and input data into the memory 720 and execute the program on the processor 710. The user can input information to the workstation 700, using the keyboard 741 and/or the mouse 742. The workstation is arranged to output information to the display 743 and/or to the disk storage device 731. The skilled person will understand that there are numerous other embodiments of the workstation 700 known in the art and that the present embodiment serves the purpose of illustrating the invention and must not be interpreted as limiting the invention to this particular embodiment.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim or in the description. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a programmed computer. In the system claims enumerating several units, several of these units can be embodied by one and the same item of hardware or software. The usage of the words first, second and third, et cetera does not indicate any ordering. These words are to be interpreted as names.

Claims

1. A system (100) for blending a first image and a second image, based on a reference image, the system comprising: a computation unit (120) for computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, wherein the blend factor is computed based on a blend-factor map for mapping pixels of the reference image into a range of values; and a blend unit (130) for blending the pixel of the first image and the corresponding pixel of the second image, based on the computed blend factor.
2. A system (100) as claimed in claim 1, wherein the blend-factor map depends only on the intensities of pixels of the reference image.
3. A system (100) as claimed in claim 2, wherein the blend-factor map is a power of an affine map.
4. A system (100) as claimed in claim 1, further comprising a determination unit (115) for determining a value of at least one parameter defining the blend-factor map.
5. A system (100) as claimed in claim 1, wherein the reference image is the first image.
6. A system (100) as claimed in claim 1, wherein the first image is a grayscale image and the second image is a color image.
7. A system (100) as claimed in claim 6, further comprising a combination unit (110) for creating the second image by combining a plurality of grayscale images into the color image.
8. An image acquisition apparatus (600) comprising the system (100) as claimed in claim 1.
9. A workstation (700) comprising the system (100) as claimed in claim 1.
10. A method (500) of blending a first image and a second image, based on a reference image, the method comprising: computing (520) a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, wherein the blend factor is computed based on a blend-factor map for mapping pixels of the reference image into a range of values; and blending (530) the pixel of the first image and the corresponding pixel of the second image, based on the computed blend factor.
11. A computer program product to be loaded by a computer arrangement, comprising instructions for blending a first image and a second image, based on a reference image, the computer arrangement comprising a processing unit and a memory, the computer program product, after being loaded, providing said processing unit with the capability to carry out the tasks of: computing a blend factor for blending a pixel of the first image and a corresponding pixel of the second image, wherein the blend factor is computed based on a blend-factor map for mapping pixels of the reference image into a range of values; and blending the pixel of the first image and the corresponding pixel of the second image, based on the computed blend factor.
PCT/IB2007/054782 2006-11-30 2007-11-26 Variable alpha blending of anatomical images with functional images WO2008065594A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP06125105.4 2006-11-30
EP06125105 2006-11-30

Publications (1)

Publication Number Publication Date
WO2008065594A1 true WO2008065594A1 (en) 2008-06-05

Family

ID=39199381

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2007/054782 WO2008065594A1 (en) 2006-11-30 2007-11-26 Variable alpha blending of anatomical images with functional images

Country Status (1)

Country Link
WO (1) WO2008065594A1 (en)
