WO2020257800A1 - System and method for improving fidelity in images - Google Patents

System and method for improving fidelity in images

Info

Publication number
WO2020257800A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
region
interest
image
pixels
Prior art date
Application number
PCT/US2020/039017
Other languages
French (fr)
Inventor
Duy Ba TRAN
Original Assignee
Thomas Jefferson University
Priority date
Filing date
Publication date
Application filed by Thomas Jefferson University filed Critical Thomas Jefferson University
Publication of WO2020257800A1 publication Critical patent/WO2020257800A1/en

Classifications

    • G06T5/94
    • A61B5/0035: Features or image-related aspects of imaging apparatus adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
    • A61B5/0037: Performing a preliminary scan, e.g. a prescan for identifying a region of interest
    • A61B5/0073: Measuring for diagnostic purposes using light, by tomography, i.e. reconstruction of 3D images from 2D projections
    • A61B6/032: Transmission computed tomography [CT]
    • A61B6/466: Displaying means of special interest adapted to display 3D data
    • A61B6/487: Diagnostic techniques involving generating temporal series of image data involving fluoroscopy
    • G06T2207/10072: Image acquisition modality: tomographic images
    • G06T2207/10116: Image acquisition modality: X-ray image
    • G06T2207/10132: Image acquisition modality: ultrasound image
    • G06T2207/20104: Interactive definition of region of interest [ROI]

Abstract

A method of improving fidelity in an image comprises obtaining a plurality of images comprising a plurality of pixels, identifying a region of interest comprising a subset of the plurality of pixels in at least one of the images, modifying the at least one of the images by reducing an intensity value of the subset of pixels in the region of interest, and normalizing the at least one image, thereby improving the fidelity of at least one feature in the plurality of images. A system for improving fidelity in an image is also described.

Description

SYSTEM AND METHOD FOR IMPROVING FIDELITY IN IMAGES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to US Provisional Patent Application No. 62/865,125, filed on June 21, 2019, incorporated herein by reference in its entirety.
BACKGROUND OF THE INVENTION
[0002] Lymphoscintigraphy of the head and neck with SPECT/CT is one example of a nuclear medicine exam that can pose some challenges for interpreting physicians. A hot injection site and associated “star artifact” can cause the look up table (LUT) to be scaled in such a way that lower activity lesions are not visible, because of the extremely high counts emitted from the injection site. Some anatomical features, for example the sentinel lymph node (SLN), usually contain less activity and emit lower counts, leading to them becoming “hidden” by the very bright injection site.
[0003] Thus there is a need in the art for an improved visualization method for medical images in order to compensate for bright objects in the field of view. The present invention satisfies this need.
SUMMARY OF THE INVENTION
[0004] In one aspect, a method of improving fidelity in an image comprises obtaining a plurality of images comprising a plurality of pixels, identifying a region of interest comprising a subset of the plurality of pixels in at least one of the images, modifying the at least one of the images by reducing an intensity value of the subset of pixels in the region of interest, and normalizing the at least one image, thereby improving the fidelity of at least one feature in the plurality of images.
[0005] In one embodiment, the images are medical images selected from the group consisting of SPECT/CT images, PET/CT images, CT images, X-rays, and Ultrasound. In one embodiment, the images are selected from the group consisting of backscatter X-ray images, analog photographs, and digital photographs. In one embodiment, the method further comprises the step of assembling the normalized images into a three-dimensional representation of an imaged region of a subject. In one embodiment, the method further comprises the steps of identifying a single-plane region of interest in at least a subset of the plurality of images, and constructing a three-dimensional region of interest from the single-plane regions of interest.
[0006] In one embodiment, the images are medical images comprising multiple perspective views of a body region of a subject taken from different angles. In one embodiment, the method further comprises identifying a centroid of the region of interest, and reducing an intensity value of the subset of pixels in the region of interest in a gradient extending outward from the centroid. In one embodiment, the region of interest is identified by a standard shape selected from the group consisting of an ellipse, an ellipsoid, a rectangle, a square, and a circle. In one
embodiment, the region of interest is identified by drawing a border around the region of interest.
[0007] In one aspect, a system for improving fidelity in an image comprises a non-transitory computer-readable medium with instructions stored thereon, that when executed by a processor perform steps comprising obtaining a plurality of images comprising a plurality of pixels, prompting a user to select a region of interest comprising a subset of the plurality of pixels in at least one of the plurality of images, modifying the at least one image by reducing an intensity value of the subset of pixels in the region of interest, normalizing the plurality of images, thereby improving the fidelity of at least one feature in the plurality of images, and displaying the normalized images.
[0008] In one embodiment, the steps further comprise suggesting a region of interest for the user to select based on the intensity values of the pixels in the at least one image. In one embodiment, the images are medical images comprising multiple perspective views of a body region of a subject taken from different angles. In one embodiment, the steps further comprise constructing a three-dimensional representation of the body region of the subject from the multiple perspective views. In one embodiment, the images are medical images and are components of a three-dimensional representation of a subject, and the region of interest can be defined in three dimensions.
[0009] In one embodiment, the system further comprises a user interface for displaying the normalized images, the user interface further comprising a lookup table display indicating the colors corresponding to the intensity values. In one embodiment, the user interface further comprises a second original view of the plurality of images. In one embodiment, the user interface further comprises a tool for selecting the region of interest by defining a standard shape selected from the group consisting of an ellipse, an ellipsoid, a rectangle, a square, and a circle. In one embodiment, the user interface further comprises a tool for selecting the region of interest by drawing the region of interest over the plurality of images.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The foregoing purposes and features, as well as other purposes and features, will become apparent with reference to the description and accompanying figures below, which are included to provide an understanding of the invention and constitute a part of the specification, in which like numerals represent like elements, and in which:
Fig. 1 is a set of photographs of CT images with and without the disclosed masking technique applied;
Fig. 2 is a pair of CT images including a LUT scale, with and without the disclosed masking technique applied;
Fig. 3 is a pair of CT images with and without the disclosed masking technique applied;
Fig. 4 is a set of two dimensional images and a set of views of three dimensional representations of a subject;
Fig. 5 is a method of the invention; and
Fig. 6 is a graph of experimental data.
DETAILED DESCRIPTION
[0011] It is to be understood that the figures and descriptions of the present invention have been simplified to illustrate elements that are relevant for a clear understanding of the present invention, while eliminating, for the purpose of clarity, many other elements found in related systems and methods. Those of ordinary skill in the art may recognize that other elements and/or steps are desirable and/or required in implementing the present invention. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the present invention, a discussion of such elements and steps is not provided herein. The disclosure herein is directed to all such variations and modifications to such elements and methods known to those skilled in the art.
[0012] Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although any methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present invention, exemplary methods and materials are described.
[0013] As used herein, each of the following terms has the meaning associated with it in this section.
[0014] The articles “a” and “an” are used herein to refer to one or to more than one (i.e., to at least one) of the grammatical object of the article. By way of example, “an element” means one element or more than one element.
[0015] “About” as used herein when referring to a measurable value such as an amount, a temporal duration, and the like, is meant to encompass variations of ±20%, ±10%, ±5%, ±1%, and ±0.1% from the specified value, as such variations are appropriate.
[0016] Throughout this disclosure, various aspects of the invention can be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 2.7, 3, 4, 5, 5.3, 6 and any whole and partial increments therebetween. This applies regardless of the breadth of the range.
[0017] In some aspects of the present invention, software executing the instructions provided herein may be stored on a non-transitory computer-readable medium, wherein the software performs some or all of the steps of the present invention when executed on a processor.
[0018] Aspects of the invention relate to algorithms executed in computer software. Though certain embodiments may be described as written in particular programming languages, or executed on particular operating systems or computing platforms, it is understood that the system and method of the present invention is not limited to any particular computing language, platform, or combination thereof. Software executing the algorithms described herein may be written in any programming language known in the art, compiled or interpreted, including but not limited to C, C++, C#, Objective-C, Java, JavaScript, Python, PHP, Perl, Ruby, or Visual Basic. It is further understood that elements of the present invention may be executed on any acceptable computing platform, including but not limited to a server, a cloud instance, a workstation, a thin client, a mobile device, an embedded microcontroller, a television, or any other suitable computing device known in the art.
[0019] Parts of this invention are described as software running on a computing device. Though software described herein may be disclosed as operating on one particular computing device (e.g. a dedicated server or a workstation), it is understood in the art that software is intrinsically portable and that most software running on a dedicated server may also be run, for the purposes of the present invention, on any of a wide range of devices including desktop or mobile devices, laptops, tablets, smartphones, watches, wearable electronics or other wireless digital/cellular phones, televisions, cloud instances, embedded microcontrollers, thin client devices, or any other suitable computing device known in the art.
[0020] Similarly, parts of this invention are described as communicating over a variety of wireless or wired computer networks. For the purposes of this invention, the words “network”, “networked”, and “networking” are understood to encompass wired Ethernet, fiber optic connections, wireless connections including any of the various 802.11 standards, cellular WAN infrastructures such as 3G or 4G/LTE networks, Bluetooth®, Bluetooth® Low Energy (BLE) or Zigbee® communication links, or any other method by which one electronic device is capable of communicating with another. In some embodiments, elements of the networked portion of the invention may be implemented over a Virtual Private Network (VPN).
[0021] One aspect of the present invention relates to a 3D post-processing method that improves medical image quality, for example by increasing contrast resolution, improving low count studies, reducing oversaturation of an image, and/or helping to detect small, less visible or invisible lesions. As used herein, the term “masking” refers to reducing the intensity of one or more pixel values in a two-dimensional or three-dimensional image. In some embodiments, masking may comprise the step of setting one or more pixel values to zero. In some
embodiments, masking may comprise the step of calculating a centroid of a feature in an image and reducing one or more pixel values by a value in a gradient extending outward from the centroid of the feature.
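For illustration only, the two masking variants just described can be sketched in a few lines of Python/NumPy. The function name, the boolean-mask interface, and the gradient profile (strongest reduction at the centroid, tapering toward the ROI boundary) are assumptions of this sketch, not details fixed by the disclosure.

    import numpy as np

    def apply_mask(volume, roi, mode="zero"):
        """Reduce intensity inside a boolean ROI of a 2D or 3D image.

        mode="zero"     sets every ROI pixel/voxel to 0.
        mode="gradient" scales ROI values by their normalized distance from the
                        ROI centroid, so intensity is fully suppressed at the
                        centroid and falls off toward the ROI boundary.
        """
        out = volume.astype(float).copy()
        if mode == "zero":
            out[roi] = 0.0
        elif mode == "gradient":
            coords = np.argwhere(roi)                 # indices of ROI voxels
            if coords.size == 0:
                return out                            # empty ROI: nothing to mask
            centroid = coords.mean(axis=0)
            dist = np.linalg.norm(coords - centroid, axis=1)
            weight = dist / dist.max() if dist.max() > 0 else dist
            out[tuple(coords.T)] *= weight            # attenuate toward the centroid
        return out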
[0022] Although certain exemplary embodiments are presented using one particular medical imaging technique (for example, SPECT/CT images), it is understood that the disclosed systems and methods may be used with a wide variety of medical and non-medical images to enhance fidelity. Suitable imaging techniques for use with the present invention include, but are not limited to, X-ray, CT/CAT scan, MRI, Ultrasound, positron emission tomography, backscatter X-ray, and analog or digital photography.
[0023] In one embodiment, following masking of the injection site by segmenting the injection site in 3 dimensions and zeroing the included pixel values (3 dimensional (3D) digital masking), the re-normalized image data permit otherwise less visible or invisible lesions to be seen quite easily, even though the underlying counts have not changed. In the example shown in Fig. 2, the counts differ by only 2 but the formerly hidden lesion becomes easily detected following application of the 3D masking technique.
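A hedged numerical sketch of that re-normalization effect, in Python: display brightness is modeled here simply as percent of the volume maximum (the actual LUT scaling used by the viewing software is not specified in this disclosure), the 20,000-count injection site is an invented value used only for illustration, and the 117/119 lesion counts are the values quoted from Fig. 2.

    import numpy as np

    def percent_of_max(volume):
        """Map counts to a 0-100% display scale relative to the volume maximum
        (a simplified stand-in for the viewer's LUT normalization)."""
        m = volume.max()
        return 100.0 * volume / m if m > 0 else volume

    vol = np.zeros((32, 32, 32))
    vol[16, 16, 16] = 20_000      # hot injection site (assumed count value)
    vol[8, 8, 8] = 117            # lesion counts before masking (Fig. 2)
    print(percent_of_max(vol)[8, 8, 8])       # ~0.6% of the LUT range: nearly invisible

    masked = vol.copy()
    masked[16, 16, 16] = 0        # 3D digital masking zeroes the injection site
    masked[8, 8, 8] = 119         # the lesion count changes by only ~2 (Fig. 2)
    print(percent_of_max(masked)[8, 8, 8])    # 100% of the LUT range: now conspicuous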
[0024] In one embodiment, ROIs are drawn manually in all 3 planes (axial, coronal, and sagittal) and the pixel data in the ROI are reduced in intensity or removed from the SPECT (single photon emission computed tomography) data. This causes the software to re-normalize the data so hidden lesions may become visible. This is not the same as drawing a region of interest over a 2 dimensional (2D) image (Fig. 4) because in that case, the data does not get renormalized. In Fig. 3, the image on the left is the conventional technique used in nuclear medicine and the image on the right is the disclosed technique with 3D digital masking.
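One plausible way to turn the three planar drawings into a single 3D region is to extrude each 2D ROI along its viewing axis and intersect the three extrusions; this combination rule, like the axis ordering below, is an assumption of the sketch rather than a detail stated in the disclosure. The resulting boolean volume can be fed directly to a masking routine such as the one sketched above.

    import numpy as np

    def roi_from_three_planes(shape, axial_roi, coronal_roi, sagittal_roi):
        """Combine 2D ROIs drawn on axial, coronal, and sagittal views into one
        boolean 3D ROI by extruding each drawing along its viewing axis and
        intersecting the extrusions.

        shape        : (nz, ny, nx) of the SPECT volume
        axial_roi    : boolean (ny, nx) mask drawn on an axial slice
        coronal_roi  : boolean (nz, nx) mask drawn on a coronal slice
        sagittal_roi : boolean (nz, ny) mask drawn on a sagittal slice
        """
        nz, ny, nx = shape
        ax = np.broadcast_to(axial_roi[np.newaxis, :, :], (nz, ny, nx))
        co = np.broadcast_to(coronal_roi[:, np.newaxis, :], (nz, ny, nx))
        sa = np.broadcast_to(sagittal_roi[:, :, np.newaxis], (nz, ny, nx))
        return ax & co & sa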
[0025] In various embodiments, ROIs may be defined as a fixed, scalable shape, for example an ellipsoid (defined either by two foci or by major and minor axes), a rectangle, or a circle/sphere. In some embodiments, an ROI may be drawn manually in order to conform to an irregular shape, for example where a high count region has a shape not easily defined as a fixed shape.
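A minimal sketch of the fixed-shape option in Python, using the center/semi-axis parameterization of an ellipsoid (rather than the two-foci form) purely for brevity; the voxel coordinates and example numbers are illustrative.

    import numpy as np

    def ellipsoid_roi(shape, center, semi_axes):
        """Boolean ellipsoid mask for a (nz, ny, nx) volume, defined by a center
        (cz, cy, cx) and semi-axes (rz, ry, rx), all in voxel units."""
        zz, yy, xx = np.indices(shape)
        cz, cy, cx = center
        rz, ry, rx = semi_axes
        return (((zz - cz) / rz) ** 2 +
                ((yy - cy) / ry) ** 2 +
                ((xx - cx) / rx) ** 2) <= 1.0

    # Example: an ellipsoidal ROI over a hot injection site near voxel (40, 64, 64)
    # of a 128x128x128 SPECT volume.
    roi = ellipsoid_roi((128, 128, 128), center=(40, 64, 64), semi_axes=(10, 12, 12))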
[0026] 3D digital masking can be applied to many types of nuclear medicine SPECT/CT exams. Clinical SPECT applications include tumor localization, parathyroid, abscess localization, liver, lymphoscintigraphy, bone, and metaiodobenzylguanidine (MIBG).
[0027] Other applications include, but are not limited to, tumor localization, for example melanoma, squamous cell carcinoma of the oral cavity, Merkel cell carcinoma, thyroid cancers, neuroendocrine tumor, pancreatic tumor, breast cancer, carcinoid tumors, parathyroid hyperplasia, parathyroid adenoma, hyperparathyroidism, differentiated thyroid cancer, hepatocellular cancer, avid tumor, and sentinel lymph node mapping. Image enhancement methods disclosed herein may also be used in abscess localization, for example infections, fever of unknown origin, inflammatory process, and osteomyelitis; bone scintigraphy, for example fractures, infection, cancer, skeletal disease, and bone lesions; or MIBG, for example adrenal gland scintigraphy, pheochromocytomas, or paragangliomas.
[0028] In one embodiment, a 3D masking technique includes the steps of scanning the subject and reconstructing raw projection data to create SPECT images, viewing the SPECT images in 3 planes to identify one or more candidate regions for masking, and using the 3D ellipsoid or freehand masking tool to segment and remove the injection site or organs with very high normal uptake. After masking, in some embodiments the technique includes the step of normalizing or re-normalizing the SPECT image color map. In some embodiments, the renormalization is performed by the image viewing software. Once properly normalized, the viewer may observe the re-normalized image and determine whether the 3D masking technique resulted in any additional previously hidden lesions becoming visible.
[0029] Referring now to Fig. 1, a star artifact 111 in a patient caused by a hot injection site is shown in photograph 101. Photographs 102 and 103 have the 3D digital masking technique applied. After the removal of the star artifact by the disclosed masking technique, a hidden lesion 104 can be visualized in photograph 103.
[0030] Referring now to Fig. 2, the look up table (LUT) 203 is shown at the right side of both images, with the top color indicating 100%. The top image 201 is shown with a conventional technique. The bottom image 202 is shown with the 3D digital masking technique. Although the counts only differ by 2 between the images with (202) and without (201) 3D digital masking, the feature of interest 205 is much more visible in image 202 than the feature of interest 204 in image 201, due to the improved fidelity offered by the 3D digital masking technique.
Specifically, feature 204 in image 201 is measured at 117 counts and is nearly invisible, while feature 205 in image 202 is measured at 119 counts and is conspicuous. Similarly, feature 207 in image 201 is measured at 151 counts, and is nearly invisible, while feature 208 in image 202 is measured at 153 counts, but is conspicuous. This example shows that in one embodiment, the disclosed method does not adjust the counts of the features of interest, but rather how regions in the image having different counts are mapped to the LUT and displayed.
[0031] In some interfaces, it is possible to adjust the scale of the LUT 203, for example to map colors to a smaller portion of the total range of values in the image. However, in some embodiments performing this step fails to display the proper values because the image display bins the counted regions of the image into too small a number of bins. This in turn can lead to significant artifacts, because regions having a slightly lower count will be binned in with regions of interest (for example lesions) and may be displayed in the same color. By contrast, the disclosed method removes regions of the image having extremely high counts and normalizes the resulting image, thereby lowering the maximum count value in the image and binning the different regions of the image more appropriately.
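A hedged numerical sketch of why lowering the maximum count value changes the binning: the 256-entry LUT, the 20,000-count injection site, and the 110-count background value are assumptions made for illustration, while 119 counts is the lesion value quoted from Fig. 2.

    import numpy as np

    def lut_bin(counts, lo, hi, nbins=256):
        """Index of the LUT bin a count value falls into when the display window
        is [lo, hi]; values above hi saturate at the top bin."""
        frac = np.clip((np.asarray(counts, float) - lo) / (hi - lo), 0.0, 1.0)
        return np.minimum((frac * nbins).astype(int), nbins - 1)

    lesion, background, hot_site = 119, 110, 20_000   # background and hot site assumed

    # With the hot injection site still in the image, the LUT spans 0..20,000 counts:
    # the lesion and the slightly-lower-count background land in the same bin/color.
    print(lut_bin([lesion, background], lo=0, hi=hot_site))   # -> [1 1]

    # After 3D masking drops the volume maximum to ~200 counts and the image is
    # renormalized, the same two values land in clearly separate bins.
    print(lut_bin([lesion, background], lo=0, hi=200))        # -> [152 140]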
[0032] Referring now to Fig. 3, the left image 301 is shown with the conventional technique, while image 302 on the right is shown with the 3D digital masking technique. Feature 303 in image 301 is invisible, but became visible (304) when high intensity artifacts were masked and the image renormalized using the disclosed 3D digital masking technique.
[0033] Referring now to Fig. 4, images 401 and 402 are two-dimensional images. Image 401 has a region of interest 411 drawn to cover the brightest spot. Image 402 has no region of interest drawn. Image 403 shows the result of applying the 3D digital masking technique, and shows multiple right-sided lymph nodes 413. Image 404 shows the result of the conventional imaging method without the disclosed 3D digital masking technique, and only shows the bright injection site 414.
[0034] With reference now to Fig. 5, a method of improving fidelity in a medical image is shown. The method includes the steps of obtaining a plurality of medical images comprising a plurality of pixels in step 501, identifying a region of interest comprising a subset of the plurality of pixels in at least one of the medical images in step 502, modifying the at least one of the medical images by reducing an intensity value of the subset of pixels in the region of interest in step 503, and normalizing the at least one medical image in step 505, arriving at an image or a plurality of images having at least one feature with improved fidelity 506.
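Tying the steps of Fig. 5 together, a sketch of how the whole pipeline could be chained in Python; the function name and the percent-of-max display scaling are illustrative assumptions carried over from the sketches above, not an implementation prescribed by the disclosure.

    import numpy as np

    def improve_fidelity(volumes, roi):
        """Steps 501-506 in miniature: take the plurality of images (501) and an
        ROI identified beforehand (502), reduce the intensity of the ROI pixels
        (503), and renormalize each image to a 0-100% display scale (505),
        yielding images whose lower-count features regain contrast (506)."""
        enhanced = []
        for vol in volumes:
            masked = vol.astype(float).copy()
            masked[roi] = 0.0                          # 503: reduce ROI intensity to zero
            m = masked.max()
            enhanced.append(100.0 * masked / m if m > 0 else masked)   # 505: renormalize
        return enhanced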
EXPERIMENTAL EXAMPLES
[0035] The invention is further described in detail by reference to the following experimental examples. These examples are provided for purposes of illustration only, and are not intended to be limiting unless otherwise specified. Thus, the invention should in no way be construed as being limited to the following examples, but rather, should be construed to encompass any and all variations which become evident as a result of the teaching provided herein.
[0036] Without further description, it is believed that one of ordinary skill in the art can, using the preceding description and the following illustrative examples, make and utilize the system and method of the present invention. The following working examples therefore, specifically point out the exemplary embodiments of the present invention, and are not to be construed as limiting in any way the remainder of the disclosure.
Materials and Methods
[0037] SPECT/CT was performed on a Siemens Symbia Intevo camera. The data were processed with the Siemens Syngo application.
[0038] The settings used were as follows: SPECT/CT: 30 sec/view; 48 views; Collimator: LEHR; Matrix: 128x128; Orbit: Non-circular; Mode: Step and shoot; Zoom: 1.0; Care Dose4D for CT; Slice: 5 mm.
[0039] ROIs were drawn manually in all 3 planes (axial, coronal, and sagittal) over the hottest site (in lymphoscintigraphy, the hottest site is typically the injection site) and the intensity data for the pixels within the ROI were removed from the image data. Following this digital masking, the image LUT (lookup table) scale (gray or color) was re-normalized by the application software. Three board-certified nuclear medicine physicians reviewed the pairs of SPECT/CT exams and compared them to determine if the masking operation affected their interpretation.
Results
[0040] Forty consecutive patients who underwent SPECT/CT of head and neck
lymphoscintigraphy from 09/01/2017 - 12/31/2018 were included. In all 40 patients, the images in which an injection site hot spot was digitally removed improved the conspicuity of nearby nodes containing lower activity. Measured counts generally stayed the same with and without masking (<2 cts difference) but the renormalized LUT significantly changed the visibility of small lesions after the injection site was digitally removed by using the masking technique.
Results from H/N lymphoscintigraphy with SPECT/CT experiment:
Attending physician (A)
[0041] In 10 out of the 46 patients (20%), more lymph nodes were visible in the SPECT/CT images enhanced with the disclosed masking technique compared to the original technique.
[0042] In 3 out of the 46 patients (6%), more lymph nodes were visible in the SPECT/CT images taken with the conventional method compared to images enhanced with the disclosed masking technique.
[0043] In 12 out of 46 patients (26%), the same number of lymph nodes were visible in both the masked and unmasked images.
Attending physician (B)
[0044] In 15 out of 46 patients (33%), more lymph nodes were visible in the SPECT/CT images enhanced with the disclosed masking technique compared to the original technique.
[0045] In 1 out of 46 patients (2%), more lymph nodes were visible in the SPECT/CT images taken with the conventional method compared to images enhanced with the disclosed masking technique.
[0046] In 6 out of 46 patients (13%), the same number of lymph nodes were visible in both the masked and unmasked images.
[0047] A graphical representation of the experimental results is shown in Fig. 6.
Conclusion
[0048] Digital masking has improved contrast resolution and sentinel lymph node detection significantly in H/N lymphoscintigraphy.
[0049] The disclosures of each and every patent, patent application, and publication cited herein are hereby incorporated herein by reference in their entirety. While this invention has been disclosed with reference to specific embodiments, it is apparent that other embodiments and variations of this invention may be devised by others skilled in the art without departing from the true spirit and scope of the invention. The appended claims are intended to be construed to include all such embodiments and equivalent variations.

Claims

CLAIMS
What is claimed is:
1. A method of improving fidelity in an image, comprising:
obtaining a plurality of images comprising a plurality of pixels;
identifying a region of interest comprising a subset of the plurality of pixels in at least one of the images;
modifying the at least one of the images by reducing an intensity value of the subset of pixels in the region of interest; and
normalizing the at least one image, thereby improving the fidelity of at least one feature in the plurality of images.
2. The method of claim 1, wherein the images are medical images selected from the group consisting of SPECT/CT images, PET/CT images, CT images, X-rays, and Ultrasound.
3. The method of claim 1, wherein the images are selected from the group consisting of backscatter X-ray images, analog photographs, and digital photographs.
4. The method of claim 1, further comprising the step of assembling the normalized images into a three-dimensional representation of an imaged region of a subject.
5. The method of claim 4, further comprising the steps of identifying a single-plane region of interest in at least a subset of the plurality of images; and
constructing a three-dimensional region of interest from the single-plane regions of interest.
6. The method of claim 1, wherein the images are medical images comprising multiple perspective views of a body region of a subject taken from different angles.
7. The method of claim 1, further comprising identifying a centroid of the region of interest, and
reducing an intensity value of the subset of pixels in the region of interest in a gradient extending outward from the centroid.
8. The method of claim 1, wherein the region of interest is identified by a standard shape selected from the group consisting of an ellipse, an ellipsoid, a rectangle, a square, and a circle.
9. The method of claim 1, wherein the region of interest is identified by drawing a border around the region of interest.
10. A system for improving fidelity in an image, comprising a non-transitory computer- readable medium with instructions stored thereon, that when executed by a processor perform steps comprising:
obtaining a plurality of images comprising a plurality of pixels;
prompting a user to select a region of interest comprising a subset of the plurality of pixels in at least one of the plurality of images;
modifying the at least one image by reducing an intensity value of the subset of pixels in the region of interest;
normalizing the plurality of images, thereby improving the fidelity of at least one feature in the plurality of images; and
displaying the normalized images.
11. The system of claim 10, the steps further comprising suggesting a region of interest for the user to select based on the intensity values of the pixels in the at least one image.
12. The system of claim 10, wherein the images are medical images comprising multiple perspective views of a body region of a subject taken from different angles.
13. The system of claim 12, wherein the steps further comprise constructing a three- dimensional representation of the body region of the subject from the multiple perspective views.
14. The system of claim 10, wherein the images are medical images and are components of a three-dimensional representation of a subject, and the region of interest can be defined in three dimensions.
15. The system of claim 10, further comprising a user interface for displaying the normalized images, the user interface further comprising a lookup table display indicating the colors corresponding to the intensity values.
16. The system of claim 15, the user interface further comprising a second original view of the plurality of images.
17. The system of claim 15, the user interface further comprising a tool for selecting the region of interest by defining a standard shape selected from the group consisting of an ellipse, an ellipsoid, a rectangle, a square, and a circle.
18. The system of claim 15, the user interface further comprising a tool for selecting the region of interest by drawing the region of interest over the plurality of images.
PCT/US2020/039017 2019-06-21 2020-06-22 System and method for improving fidelity in images WO2020257800A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962865125P 2019-06-21 2019-06-21
US62/865,125 2019-06-21

Publications (1)

Publication Number Publication Date
WO2020257800A1 true WO2020257800A1 (en) 2020-12-24

Family

ID=74040684

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/039017 WO2020257800A1 (en) 2019-06-21 2020-06-22 System and method for improving fidelity in images

Country Status (1)

Country Link
WO (1) WO2020257800A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110091130A1 (en) * 2008-06-09 2011-04-21 Universite De Montreal Method and module for improving image fidelity
US20120243761A1 (en) * 2011-03-21 2012-09-27 Senzig Robert F System and method for estimating vascular flow using ct imaging
US20160005154A1 (en) * 2011-09-28 2016-01-07 U.S. Army Research Laboratory Attn: Rdrl-Loc-I System and processor implemented method for improved image quality and generating an image of a target illuminated by quantum particles

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BROCK ET AL.: "Large scale GAN training for high fidelity natural image synthesis", ARXIV:1809.11096V2, 25 February 2019 (2019-02-25), XP081088369, Retrieved from the Internet <URL:https://arxiv.org/pdf/1809.11096.pdf> [retrieved on 20200813] *
SMELYANSKIY ET AL.: "Mapping High-Fidelity Volume Rendering for Medical Imaging to CPU, GPU and Many-Core Architectures", IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, vol. 15, no. 6, November 2009 (2009-11-01), XP011278792, Retrieved from the Internet <URL:https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.460.3466&rep=rep1&type=pdf> [retrieved on 20200813], DOI: 10.1109/TVCG.2009.164 *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20826584

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20826584

Country of ref document: EP

Kind code of ref document: A1