US20120229490A1 - Apparatus, method and computer-readable storage medium for compensating for image-quality discrepancies - Google Patents
- Publication number: US20120229490A1
- Application number: US 13/044,122
- Authority: US (United States)
- Prior art keywords: function, pixel value, calibration function, calibration, caused
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G09G5/10: Intensity circuits (under G09G5/00, control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators)
- G09G2320/0276: Adjustment of the gradation levels within the range of the gradation scale for the purpose of adaptation to the characteristics of a display device, i.e. gamma correction (under G09G2320/02, improving the quality of display appearance)
- G09G2320/0693: Calibration of display systems (under G09G2320/06, adjustment of display parameters)
- G09G2380/08: Biomedical applications (under G09G2380/00, specific applications)
- All of the above fall within subclass G09G (arrangements or circuits for control of indicating devices using static means to present variable information), class G09 (education; cryptography; display; advertising; seals), section G (physics).
Definitions
- the present invention generally relates to medical imaging, and more particularly, to compensating for image-quality discrepancies between an imaging modality and a viewing station.
- Medical imaging often includes creating images of the human body or parts of the human body for clinical purposes such as examination, diagnosis and/or treatment. These images may be acquired by a number of different imaging modalities including, for example, ultrasound (US), magnetic resonance (MR), positron emission tomography (PET), computed tomography (CT), mammograms (MG), digital radiology (DR), computed radiology (CR) or the like.
- an acquired image may be reviewed by a technician of the imaging modality, and then sent to a viewing station where the image may be reviewed by a medical professional such as a radiologist. This is the case, for example, in a picture archiving and communication system (PACS).
- the viewing station in this example may apply a second, different calibration function to the acquired image viewed by a monitor of the viewing station—the second calibration function in one example being the DICOM GSDF.
- for more information on the DICOM GSDF, see National Electrical Manufacturers Association (NEMA), PS 3.14-2009, entitled: Digital Imaging and Communications in Medicine (DICOM)—Part 14: Grayscale Standard Display Function, the content of which is hereby incorporated by reference in its entirety.
- an imaging modality may have a particular gamma value (the value that describes the relationship between the varying levels of luminance that a monitor can display). This gamma value may differ from one imaging modality to another imaging modality, which may compound the undesirability of differences in monitor calibration functions in various instances in which a viewing station may receive images from different modalities.
- an apparatus includes a processor and a memory storing executable instructions that in response to execution by the processor cause the apparatus to at least perform a number of operations.
- the apparatus is caused to receive a digital image including a plurality of pixels each of which has a pixel value of a plurality of pixel values, where the pixel value of each pixel has been calibrated according to a first calibration function for calibrating an image for display by a first monitor, such as the gamma function.
- the apparatus is caused to transform the pixel value of each of at least some of the pixels to a corresponding transformed pixel value calibrated according to a different, second calibration function for calibrating an image for display by a second monitor, such as the Digital Imaging and Communications in Medicine (DICOM) Grayscale Standard Display Function (GSDF).
- the apparatus is also caused to cause output of the digital image including the plurality of pixels each of at least some of which has a transformed pixel value, where the respective digital image is displayable by the second monitor.
- the apparatus is caused to transform the pixel value according to a lookup table that relates pixel values calibrated according to the first calibration function to corresponding pixel values calibrated according to the second calibration function.
- the first and second calibration functions may be respective functions for calculating luminance as a function of pixel value.
- the first calibration function and second calibration function may be described by the following functions for calculating luminance ML and SL, respectively, as a function of pixel value x: ML = G(x) and SL = F(x)
- the apparatus may be caused to calculate a transformed pixel value LUT as a function of pixel value x in accordance with the following: LUT = F⁻¹(G(x)), in which F⁻¹ denotes the inverse function of F
- the first calibration function may be described by the following function for calculating luminance MLi as a function of pixel value xi: MLi = G(xi)
- the second calibration function is described by the following function for calculating luminance SLj as a function of pixel value xj: SLj = F(xj)
- the apparatus may be caused to transform the pixel value according to a lookup table that relates (xi, xj) where |MLi − SLj| has the minimum value
- the apparatus may be caused to receive a digital image from an imaging modality of a plurality of different types of modalities each of which has a respective first calibration function for calibrating an image for display by a first monitor.
- the memory may further store executable instructions that in response to execution by the processor cause the apparatus to further determine a type of modality from which the digital image is received.
- the apparatus may then be caused to transform the pixel value based on the determined type of modality.
- FIG. 1 is a schematic block diagram of a system configured to operate in accordance with exemplary embodiments of the present invention
- FIG. 2 is a schematic block diagram of an apparatus that may be configured to operate as or otherwise perform one or more functions of one or more of the components of the system of FIG. 1 , in accordance with embodiments of the present invention.
- FIG. 3 is a flowchart illustrating various operations in a method according to exemplary embodiments of the present invention.
- FIG. 4 is a graph that illustrates lookup table (LUT) values for an example image, according to one example embodiment of the present invention
- FIG. 5 is a graph that illustrates a change in image contrast due to different calibration functions, according to example embodiments of the present invention.
- FIG. 6 is a graph that illustrates display characteristics of a PACS monitor and three ultrasound devices, according to one example embodiment of the present invention.
- FIG. 1 illustrates a system 10 that may benefit from exemplary embodiments of the present invention (“exemplary” as used herein referring to “serving as an example, instance or illustration”).
- the system includes one or more imaging modalities 12 (three example modalities being shown as modalities 12 a , 12 b and 12 c ) for acquiring an image, such as an image of the human body or parts of the human body for clinical purposes such as examination, diagnosis and/or treatment.
- suitable modalities include, for example, ultrasound (US), magnetic resonance (MR), positron emission tomography (PET), computed tomography (CT), mammograms (MG), digital radiology (DR), computed radiology (CR) or the like.
- the system also includes a viewing station 14 configured to receive an image from one or more modalities 12 , and present the image such as for review by a medical professional such as a radiologist.
- the viewing station may be a picture archiving and communication system (PACS) viewing station (or workstation).
- a modality 12 and viewing station 14 may apply different monitor calibration functions to images presented by monitors of the respective apparatuses.
- a modality may apply the gamma correction function
- the viewing station may apply the DICOM GSDF.
- This difference in monitor calibration functions may lead to an undesirable visual discrepancy between an image presented by the modality, and the same image presented by the viewing station.
- the system of example embodiments of the present invention therefore further includes a computing apparatus 16 configured to transform pixel values calibrated according to a first calibration function (e.g., gamma correction) to corresponding pixel values calibrated according to a second, different calibration function (DICOM GSDF). In this manner, the computing apparatus may compensate for visual discrepancies otherwise due to the different calibration functions.
- the imaging modality 12 , viewing station 14 and/or computing apparatus 16 may be configured to directly and/or indirectly communicate with one another in any of a number of different manners including, for example, any of a number of wireline or wireless communication or networking techniques. Examples of such techniques include, without limitation, Universal Serial Bus (USB), radio frequency (RF), Bluetooth (BT), infrared (IrDA), any of a number of different cellular (wireless) communication techniques such as any of a number of 2G, 2.5G or 3G communication techniques, local area network (LAN), wireless LAN (WLAN) techniques or the like. In accordance with various ones of these techniques, the imaging modality, viewing station 14 and/or computing apparatus can be coupled to and configured to communicate across one or more networks.
- the network(s) can comprise any of a number of different combinations of one or more different types of networks, including data and/or voice networks.
- the network(s) can include one or more data networks, such as a LAN, a metropolitan area network (MAN), and/or a wide area network (WAN) (e.g., Internet), and include one or more voice networks, such as a public-switched telephone network (PSTN).
- the network(s) may include one or more apparatuses such as one or more routers, switches or the like for relaying data, information or the like between the imaging modality, viewing station and/or computing apparatus.
- FIG. 2 illustrates a block diagram of an apparatus 18 that may be configured to operate as or otherwise perform one or more functions of an imaging modality 12 , viewing station 14 and/or computing apparatus 16 .
- one or more of the respective apparatuses may support more than one of a modality, viewing station and/or computing apparatus, logically separated but co-located within the entit(ies).
- a single apparatus may support a logically separate, but co-located modality and computing apparatus, or in another example, a single apparatus may support a logically separate, but co-located viewing station and computing apparatus.
- the apparatus of exemplary embodiments of the present invention may comprise, include or be embodied in one or more fixed electronic devices, such as one or more of a laptop computer, desktop computer, workstation computer, server computer or the like. Additionally or alternatively, the apparatus may comprise, include or be embodied in one or more portable electronic devices, such as one or more of a mobile telephone, portable digital assistant (PDA), pager or the like.
- the apparatus of exemplary embodiments of the present invention includes various means for performing one or more functions in accordance with exemplary embodiments of the present invention, including those more particularly shown and described herein. It should be understood, however, that one or more of the entities may include alternative means for performing one or more like functions, without departing from the spirit and scope of the present invention.
- the apparatus may include a processor 20 connected to a memory 22 .
- the memory can comprise volatile and/or non-volatile memory, and typically stores content, data or the like.
- the memory may store one or more software applications 24 , modules, instructions or the like for the processor to perform steps associated with operation of the apparatus in accordance with embodiments of the present invention.
- the memory may also store content transmitted from, and/or received by, the apparatus.
- the software application(s) may each comprise software operated by the apparatus. It should be understood, however, that any one or more of the software applications described herein may alternatively be implemented by firmware, hardware or any combination of software, firmware and/or hardware, without departing from the spirit and scope of the present invention.
- the processor 20 can also be connected to at least one interface or other means for displaying, transmitting and/or receiving data, content or the like, such as in accordance with USB, RF, BT, IrDA, WLAN, LAN, MAN, WAN (e.g., Internet), PSTN techniques or the like.
- the interface(s) can include at least one communication interface 26 or other means for transmitting and/or receiving data, content or the like.
- the interface(s) can also include at least one user interface that can include one or more earphones and/or speakers, a monitor 28 , and/or a user input interface 30 .
- the user input interface can comprise any of a number of devices allowing the apparatus to receive data from a user, such as a microphone, a keypad, a touch-sensitive surface (integral or separate from the monitor), a joystick, or other input device.
- the processor may be directly connected to other components of the apparatus, or may be connected via suitable hardware.
- the processor may be connected to the monitor via a display adapter 32 configured to permit the processor to send graphical information to the monitor.
- the system of example embodiments of the present invention includes a computing apparatus 16 configured to transform pixel values calibrated according to a first calibration function (e.g., gamma correction) to corresponding pixel values calibrated according to a second, different calibration function (DICOM GSDF).
- the computing apparatus may be configured to apply the transformation in any of a number of different manners.
- the computing apparatus may be configured to transform pixel values using a lookup table (LUT). It should be understood, however, that the computing apparatus may be equally configured to transform pixel values using an algorithm such as that from which an appropriate LUT may be calculated.
- the method may include calculating or otherwise retrieving a LUT for transformation of pixel values calibrated according to a first calibration function (e.g., gamma correction) to corresponding pixel values calibrated according to a second, different calibration function (DICOM GSDF)—these corresponding pixel values at times being referred to as LUT values. More particularly, for example, consider pixel values calibrated according to a first calibration function.
- a corresponding LUT value calibrated according to a second calibration function may be determined by first determining the luminance for the pixel value calibrated according to the first calibration function, and then determining the pixel value calibrated according to the second calibration function that yields the determined luminance.
- the LUT may be calculated in a number of different manners, which may depend on whether the first and second calibration functions are known or unknown. Examples of calculating a LUT in each instance are presented below.
- the LUT may be calculated based on the respective functions.
- the first calibration function (the function of the imaging modality 12) may be described by the following representation of modality luminance (ML): ML = G(x)
- the second calibration function (the function of the viewing station 14) may be described by the following representation of the station luminance (SL): SL = F(x)
- x represents the pixel value that belongs to the domain [−2^(n−1), 2^(n−1)−1] for a signed image or [0, 2^n−1] for an unsigned image (n representing the number of bits of the pixel value).
- the LUT of one example embodiment may be calculated in accordance with the following: LUT = F⁻¹(G(x)), where F⁻¹ denotes the inverse function of F.
- a solution for this expression of the LUT exists in instances in which both functions F(x) and G(x) are monotone. This is the case, for example, for both gamma and DICOM GSDF functions.
- the SL range may be equal to or larger than the ML range to thereby produce a unique solution, which may be the case with typical PACS monitors.
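- By way of a hedged illustration (not part of the original disclosure), when G and F are available as monotone callable curves, the LUT = F⁻¹(G(x)) relation can be evaluated numerically: compute the modality luminance G(x) for each pixel value and invert F by dense sampling and interpolation. The function names, the example curves and the 8-bit assumption below are all illustrative.

```python
import numpy as np

def build_lut(modality_luminance, station_luminance, n_bits=8, samples=4096):
    """Compute LUT(x) = F^-1(G(x)) for an unsigned n_bits-deep image.

    modality_luminance: G(x), luminance of the modality monitor per pixel value.
    station_luminance:  F(x), luminance of the viewing-station monitor per pixel value.
    Both are assumed monotone increasing, with the station range covering the modality range.
    """
    max_value = 2 ** n_bits - 1
    levels = np.arange(max_value + 1)
    target = modality_luminance(levels)          # G(x): luminance the modality would display

    xs = np.linspace(0.0, max_value, samples)    # dense sampling of the station curve
    fs = station_luminance(xs)                   # F on the dense grid (monotone increasing)

    lut = np.interp(target, fs, xs)              # F^-1 applied to G(x) by interpolation
    return np.clip(np.rint(lut), 0, max_value).astype(np.uint16)

# Illustrative (made-up) curves: a gamma-style modality monitor spanning 0.4-178 cd/m^2
# and a monotone viewing-station curve spanning 0.4-200 cd/m^2.
modality = lambda x: 0.4 + (178.0 - 0.4) * (np.asarray(x, dtype=float) / 255.0) ** 2.2
station = lambda x: 0.4 * 500.0 ** (np.asarray(x, dtype=float) / 255.0)
example_lut = build_lut(modality, station)
```

- Because F is assumed monotone and its range covers that of G, the interpolation step has a unique solution, mirroring the condition noted above.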
- in a more particular example in which the first and second calibration functions are known, consider an instance in which the first calibration function is a gamma correction, and the second calibration function is the DICOM GSDF.
- the gamma function may be described as follows: ML = G(x) = C·x^γ + B
- C and B represent the contrast and minimum luminance (brightness) of the monitor of the modality 12 , which may be set by respective monitor controls.
- the variable x represents a normalized pixel value between 0 and 1, which may take into account the minimum and maximum pixel values, and γ (gamma) represents the gamma value of the modality monitor.
- the luminance of the monitor of the viewing station 14 may be derived from the following DICOM GSDF (a monotone function):
- Ln refers to the natural logarithm
- j refers to an index (1 to 1023) of luminance levels L j of the just-noticeable differences (JND), where the JND may be considered the luminance difference of a given target under given viewing conditions that the average human observer can just perceive.
- one step in the JND index j may result in a luminance difference that is a JND.
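- The GSDF curve itself (equation (5)) appears only as an image in the original document; the following sketch restates it from DICOM PS 3.14 using the constants a through h, k and m listed with that equation, so treat the exact arrangement of numerator and denominator as coming from the standard rather than from the text above.

```python
import numpy as np

# GSDF constants a-h, k and m as listed in the description (from DICOM PS 3.14).
a, b, c, d = -1.3011877, -2.5840191e-2, 8.0242636e-2, -1.0320229e-1
e, f, g, h = 1.3646699e-1, 2.8745620e-2, -2.5468404e-2, -3.1978977e-3
k, m = 1.2992634e-4, 1.3635334e-3

def gsdf_luminance(j):
    """Luminance L(j) in cd/m^2 for a JND index j in [1, 1023] (cf. equation (5))."""
    lnj = np.log(np.asarray(j, dtype=float))
    numerator = a + c * lnj + e * lnj**2 + g * lnj**3 + m * lnj**4
    denominator = 1.0 + b * lnj + d * lnj**2 + f * lnj**3 + h * lnj**4 + k * lnj**5
    return 10.0 ** (numerator / denominator)

# One step in the JND index is one just-noticeable difference in luminance, e.g.:
# gsdf_luminance(101) - gsdf_luminance(100)
```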
- in order to calculate the LUT, for each pixel value x calibrated according to the first calibration function, the corresponding LUT value may be found. This may be achieved by being given or otherwise determining the minimum and maximum luminance values (MLmin and MLmax) and the gamma value (γ) of the monitor of the modality 12, which can be used to determine the parameters C and B of equation (4).
- these values may be substituted in equation (4) to determine the parameters C and B.
- these parameters may be used to determine the minimum and maximum luminance values (ML min and ML max ).
- by substitution of MLmin and MLmax in equation (6), one may determine the values of jmin and jmax.
- equation (4) may be used to determine G(x), which may be substituted into equation (6) to determine j(G(x)).
- the value j(G(x)) may be substituted into equation (7) or equation (8) (depending on the signed/unsigned nature of the image) to determine the value x as the corresponding LUT value.
- written notationally, the LUT value may be determined from the value j(G(x)) in accordance with equation (9) for a signed image, or equation (10) for an unsigned image.
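- Putting the known-functions steps together, a rough end-to-end sketch might look like the following. It assumes the inverse GSDF of equation (6) with the constants A through I given with that equation, takes B = MLmin and C = MLmax − MLmin in equation (4), and, because equations (7) through (10) are not reproduced in this text, it assumes a simple linear spread of pixel values across the JND index span [jmin, jmax]; the patent's exact expressions may differ.

```python
import numpy as np

# Inverse GSDF (equation (6)): JND index j as an 8th-order polynomial in Log10(L),
# using the constants A-I given with that equation.
INVERSE_GSDF_COEFFS = [71.498068, 94.593053, 41.912053, 9.8247004, 0.28175407,
                       -1.1878455, -0.18014349, 0.14710899, -0.017046845]

def gsdf_jnd_index(luminance):
    """j(L): JND index corresponding to a luminance value in cd/m^2."""
    logl = np.log10(np.asarray(luminance, dtype=float))
    return sum(coeff * logl ** power for power, coeff in enumerate(INVERSE_GSDF_COEFFS))

def gamma_to_gsdf_lut(ml_min=0.4, ml_max=178.0, gamma=2.2, n_bits=8):
    """LUT mapping gamma-calibrated pixel values to GSDF-calibrated values (unsigned image).

    Takes B = ml_min and C = ml_max - ml_min in equation (4), and assumes pixel values
    are spread linearly over the JND span [jmin, jmax] (an assumption standing in for
    equations (7)-(10), which are not reproduced in this text).
    """
    max_value = 2 ** n_bits - 1
    x_norm = np.arange(max_value + 1) / max_value            # normalized pixel value in [0, 1]
    ml = (ml_max - ml_min) * x_norm ** gamma + ml_min        # equation (4): G(x)

    j_min, j_max = gsdf_jnd_index(ml_min), gsdf_jnd_index(ml_max)
    j_of_g = gsdf_jnd_index(ml)                              # j(G(x)) via equation (6)

    lut = (j_of_g - j_min) / (j_max - j_min) * max_value     # back to a pixel value
    return np.clip(np.rint(lut), 0, max_value).astype(np.uint16)

example_lut = gamma_to_gsdf_lut()   # the FIG. 4 parameters: 0.4-178 cd/m^2, gamma 2.2, 8 bits
```

- Applying the resulting table to an unsigned image is then a single indexing operation, e.g. calibrated = lut[image] for a NumPy pixel array; with the FIG. 4 parameters (MLmin = 0.4 cd/m², MLmax = 178 cd/m², γ = 2.2, n = 8) the curve of LUT values should resemble that graph.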
- the computing apparatus 16 being configured to transform pixel values calibrated according to gamma correction to corresponding pixel values calibrated according to DICOM GSDF.
- the LUT may be calculated by looking to the display characteristic curves of the modality 12 and viewing station 14 , each of which may be determined or otherwise measured by means of a quantitative procedure.
- the display characteristic curve for a monitor may define the relationship between luminance and pixel values (equations (1) and (2) above).
- One may, for example, use TG18-LN test patterns for this purpose. These test patterns are provided by the American Association of Physicists in Medicine (AAPM), task group (TG) 18, and may be imported to the modality and sent to the viewing station to mimic an image workflow.
- a number of distinct luminance levels may be measured and the remaining luminance values may be interpolated, such as according to a cubic spline.
- the interpolated display characteristic curves of the modality and viewing station may then be used to determine the LUT, such as by taking equation (3) into consideration.
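- As a small, hedged illustration of that measurement-and-interpolation step, the pixel levels and luminance readings below are placeholders rather than values from the patent; only the use of a cubic spline to fill in the unmeasured levels reflects the description.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder photometer readings for 18 TG18-LN-style patches (pixel value, cd/m^2).
measured_levels = np.array([0, 15, 30, 45, 60, 75, 90, 105, 120, 135,
                            150, 165, 180, 195, 210, 225, 240, 255])
measured_luminance = np.array([0.4, 0.7, 1.3, 2.4, 4.0, 6.2, 9.1, 12.8, 17.5, 23.2,
                               30.1, 38.3, 48.0, 59.2, 72.1, 86.9, 103.6, 122.3])

# Cubic-spline interpolation gives the display characteristic curve at every pixel value.
characteristic = CubicSpline(measured_levels, measured_luminance)
modality_curve = characteristic(np.arange(256))   # luminance for pixel values 0..255
```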
- x i represents a pixel value calibrated according to the first calibration function
- xj represents a pixel value calibrated according to the second calibration function, each of which may be in the range [−2^(n−1), 2^(n−1)−1] for a signed image or [0, 2^n−1] for an unsigned image
- ML i represents the modality luminance for its pixel value x i
- SL j represents the viewing station luminance for its pixel value x j .
- the LUT may then be calculated according to the following pseudo-algorithm for a signed image:
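- The pseudo-algorithm itself appears in the full description further below; a runnable sketch of the same nearest-luminance matching, written here for an unsigned image (a signed image would simply offset the indices), might look like this:

```python
import numpy as np

def lut_from_measured_curves(modality_luminance, station_luminance):
    """For each modality pixel value i, pick the station pixel value j minimizing |ML_i - SL_j|."""
    ml = np.asarray(modality_luminance, dtype=float)   # ML_i indexed by modality pixel value
    sl = np.asarray(station_luminance, dtype=float)    # SL_j indexed by station pixel value
    differences = np.abs(ml[:, np.newaxis] - sl[np.newaxis, :])
    return np.argmin(differences, axis=1)              # lut[i] = j with the smallest |ML_i - SL_j|

# Usage with two interpolated characteristic curves (e.g. from the spline sketch above):
# lut = lut_from_measured_curves(modality_curve, station_curve)
```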
- the method may include receiving an image including a plurality of pixels each of which has a pixel value calibrated according to the first calibration function, as shown in block 42 .
- the image may be formatted in any of a number of different manners, such as in accordance with the DICOM standard.
- the method may include applying the LUT to determine for each pixel value of each pixel of the image, a corresponding pixel value (LUT value) calibrated according to the second calibration function, as shown in block 44 .
- the thus transformed pixel values may then be further processed, as appropriate, and output to the monitor of the viewing station 14 for display, as shown in block 46 .
- the LUT may be applied in any of a number of different manners, and at a number of different locations in an image workflow from a modality 12 to viewing station 14 .
- the LUT may be applied as a presentation LUT or value-of-interest LUT in accordance with appropriate DICOM standards.
- the LUT may be applied by the computing apparatus 16 which may be implemented as part of the imaging modality 12 or viewing station 14 , or as a separate device between the modality and viewing station.
- the computing apparatus (separate or part of the modality) may be configured to add the LUT as a presentation LUT.
- the LUT values may be burned or otherwise stored with the respective pixels of the image, or the LUT may be passed through with the image.
- image viewer software on the viewing station may be configurable to apply the LUT as a presentation LUT.
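- As one hedged sketch of the "burned in" option above, using pydicom (file names are placeholders, and handling of signed images, bit depth and any presentation-LUT metadata is omitted):

```python
import numpy as np
import pydicom

def burn_lut_into_image(path_in, path_out, lut):
    """Apply a precomputed LUT to an unsigned grayscale DICOM image and save a copy."""
    ds = pydicom.dcmread(path_in)
    pixels = ds.pixel_array                              # values calibrated for the modality monitor
    transformed = np.asarray(lut)[pixels]                # per-pixel lookup
    ds.PixelData = transformed.astype(pixels.dtype).tobytes()
    ds.save_as(path_out)

# burn_lut_into_image("from_modality.dcm", "for_viewing_station.dcm", example_lut)
```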
- the computing apparatus 16 may be configured to apply different LUTs for different types of modalities (e.g., modalities 12 a , 12 b , 12 c ).
- the computing apparatus may be configured to determine the type of modality that captured an image, and load and apply an appropriate LUT for the respective type of modality.
- the computing apparatus may be configured to determine the type of modality in a number of different manners.
- each imaging modality may have a respective network address (e.g., IP address).
- the computing apparatus may store or otherwise have access to a table that associates the network addresses of the modalities with their modality types.
- when the computing apparatus receives an image across the network, the image or a record referring to the image may identify the network address of its source modality. The computing apparatus may then consult the table based upon the source network address to identify the type of the respective modality.
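- A minimal sketch of that address-based routing, with made-up addresses and type names, might be:

```python
# Hypothetical table of source network addresses to modality types; the LUT registry
# would hold precomputed tables such as those built in the earlier sketches.
MODALITY_TYPE_BY_ADDRESS = {
    "10.0.0.21": "ultrasound-vendor-A",
    "10.0.0.22": "ultrasound-vendor-B",
    "10.0.0.35": "ct-vendor-A",
}
LUT_BY_MODALITY_TYPE = {}   # e.g. {"ultrasound-vendor-A": lut_us_a, ...}, filled at start-up

def select_lut(source_address):
    """Return the LUT registered for the sender's modality type, or None to pass the image through."""
    modality_type = MODALITY_TYPE_BY_ADDRESS.get(source_address)
    return LUT_BY_MODALITY_TYPE.get(modality_type)
```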
- the image may be formatted to include a header with one or more tags including respective acquisition parameters that refer to the name of the modality 12 that acquired the image (source modality), its software version or the like.
- the computing apparatus 16 may store or otherwise have access to a table that associates acquisition parameters (tags) with modality types, or may be setup to operate according to logic that specifies application of the LUT in instances in which an image has parameters (tags) with particular values.
- the computing apparatus may be configured to analyze the image's header and its parameters (tags), and apply the LUT in accordance with the table or logic.
- a DICOM image may include a tag (0008, 0070) that identifies the "Manufacturer ID," tag (0008, 1090) that identifies the "Manufacturer's Model Name ID," tag (0008, 0060) that identifies the "Modality ID" and tag (0008, 1010) that includes a name identifying the machine that produces images.
- the computing apparatus 16 may be configured to apply the LUT in instances in which the computing apparatus receives an image from US (ultrasound) modality X with the name “koko” and software version “12.1.1.”
- the computing apparatus may be configured to analyze the image's header and its parameters. In an instance in which the “Modality ID” is US and “Manufacturer's Model Name ID” is “koko,” the computing apparatus may apply the LUT; otherwise, the computing apparatus may forego application of the LUT.
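- Continuing the "koko" example, a hedged pydicom sketch of that header check might be (Modality (0008, 0060) and ManufacturerModelName (0008, 1090) are standard DICOM attribute keywords; the matching values simply mirror the example):

```python
import pydicom

def should_apply_lut(ds):
    """Apply the LUT only for ultrasound images whose model name matches the example."""
    modality = getattr(ds, "Modality", None)                  # tag (0008, 0060)
    model_name = getattr(ds, "ManufacturerModelName", None)   # tag (0008, 1090)
    return modality == "US" and model_name == "koko"

# ds = pydicom.dcmread("incoming.dcm")
# if should_apply_lut(ds):
#     ...apply the LUT before forwarding the image to the viewing station...
```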
- FIG. 5 illustrates the derivatives of equations (4) and (5) for a luminance range of [0.36, 177] for a pixel range [0, 255]. Note that the vertical axis is the degree of luminance change (image contrast d(L)/d(x)), and it is in a logarithmic scale.
- the display characteristics of all four monitors in this example are shown in FIG. 6.
- the PACS monitor produces more light.
- pixel values in the lower range largely represent noise contributions in ultrasound images, and physicians often like noise to be displayed closer to pure black (minimum luminance).
- the derivatives of the display characteristics of the ultrasound devices and PACS monitor show a discrepancy in image contrast very similar to the theoretical discrepancy shown in FIG. 5.
- all or a portion of the modality 12 , viewing station 14 and/or computing apparatus 16 of exemplary embodiments of the present invention generally operate under control of a computer program.
- the computer program for performing the methods of exemplary embodiments of the present invention may include one or more computer-readable program code portions, such as a series of computer instructions, embodied or otherwise stored in a computer-readable storage medium, such as the non-volatile storage medium.
- FIG. 3 is a flowchart reflecting methods, systems and computer programs according to exemplary embodiments of the present invention. It will be understood that each block or step of the flowchart, and combinations of blocks in the flowchart, may be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine, such that the instructions which execute on the computer or other programmable apparatus (e.g., hardware) create means for implementing the functions specified in the block(s) or step(s) of the flowchart.
- These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the block(s) or step(s) of the flowchart.
- the computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the block(s) or step(s) of the flowchart.
- blocks or steps of the flowchart support combinations of means for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that one or more blocks or steps of the flowchart, and combinations of blocks or steps in the flowchart, may be implemented by special purpose hardware-based computer systems which perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
Description
- The present invention generally relates to medical imaging, and more particularly, to compensating for image-quality discrepancies between an imaging modality and a viewing station.
- Medical imaging often includes creating images of the human body or parts of the human body for clinical purposes such as examination, diagnosis and/or treatment. These images may be acquired by a number of different imaging modalities including, for example, ultrasound (US), magnetic resonance (MR), positron emission tomography (PET), computed tomography (CT), mammograms (MG) digital radiology (DR), computed radiology (CR) or the like. In a number of example medical imaging workflows, an acquired image may be reviewed by a technician of the imaging modality, and then sent to a viewing station where the image may be reviewed by a medical professional such as a radiologist. This is the case, for example, in a picture archiving and communication system (PACS).
- Maintaining consistency in the quality of an acquired image through an imaging workflow is often desirable. Due to different monitor calibration functions between imaging modalities (senders) and viewing stations (receivers), however, an undesirable visualization discrepancy may occur. These calibration functions may be described as being performed by a modality or viewing station, but in more particular examples, may be performed by video drivers of the respective apparatuses, software associated with monitors of the respective apparatuses or the like. In one example, an imaging modality may apply a first calibration function such as the gamma correction function (e.g., γ=2.2) to an acquired image viewed by a monitor of the modality. The viewing station in this example, however, may apply a second, different calibration function to the acquired image viewed by a monitor of the viewing station—the second calibration function in one example being the DICOM GSDF. For more information on the DICOM GSDF, see National Electrical Manufacturers Association (NEMA), PS 3.14-2009, entitled: Digital Imaging and Communications in Medicine (DICOM)—Part 14: Grayscale Standard Display Function, the content of which is hereby incorporated by reference in its entirety.
- In the above example, an imaging modality may have a particular gamma value (the value that describes the relationship between the varying levels of luminance that a monitor can display). This gamma value may differ from one imaging modality to another imaging modality, which may compound the undesirability of differences in monitor calibration functions in various instances in which a viewing station may receive images from different modalities.
- In light of the foregoing background, exemplary embodiments of the present invention provide an apparatus, method and computer-readable storage medium for compensating for image-quality discrepancies between an imaging modality and a viewing station. According to one aspect of exemplary embodiments of the present invention, an apparatus is provided that includes a processor and a memory storing executable instructions that in response to execution by the processor cause the apparatus to at least perform a number of operations. In this regard, the apparatus is caused to receive a digital image including a plurality of pixels each of which has a pixel value of a plurality of pixel values, where the pixel value of each pixel has been calibrated according to a first calibration function for calibrating an image for display by a first monitor, such as the gamma function.
- The apparatus is caused to transform the pixel value of each of at least some of the pixels to a corresponding transformed pixel value calibrated according to a different, second calibration function for calibrating an image for display by a second monitor, such as the Digital Imaging and Communications in Medicine (DICOM) Grayscale Standard Display Function (GSDF). The apparatus is also caused to cause output of the digital image including the plurality of pixels each of at least some of which has a transformed pixel value, where the respective digital image is displayable by the second monitor. In one example, the apparatus is caused to transform the pixel value according to a lookup table that relates pixel values calibrated according to the first calibration function to corresponding pixel values calibrated according to the second calibration function.
- The first and second calibration functions may be respective functions for calculating luminance as a function of pixel value. In one example, the first calibration function and second calibration function may be described by the following functions for calculating luminance ML and SL, respectively, as a function of pixel value x:
ML = G(x)
SL = F(x)
- In this example, the apparatus may be caused to calculate a transformed pixel value LUT as a function of pixel value x in accordance with the following:
LUT = F⁻¹(G(x))
- in which F⁻¹ denotes the inverse function of F.
- In another example, the first calibration function may be described by the following function for calculating luminance MLi as a function of pixel value xi:
MLi = G(xi)
- In this example, the second calibration function is described by the following function for calculating luminance SLj as a function of pixel value xj:
SLj = F(xj)
- The apparatus, then, may be caused to transform the pixel value according to a lookup table that relates (xi, xj) where |MLi − SLj| has the minimum value.
- The apparatus may be caused to receive a digital image from an imaging modality of a plurality of different types of modalities each of which has a respective first calibration function for calibrating an image for display by a first monitor. In such instances, the memory may further store executable instructions that in response to execution by the processor cause the apparatus to further determine a type of modality from which the digital image is received. The apparatus may then be caused to transform the pixel value based on the determined type of modality.
- Having thus described the invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
- FIG. 1 is a schematic block diagram of a system configured to operate in accordance with exemplary embodiments of the present invention;
- FIG. 2 is a schematic block diagram of an apparatus that may be configured to operate as or otherwise perform one or more functions of one or more of the components of the system of FIG. 1, in accordance with embodiments of the present invention;
- FIG. 3 is a flowchart illustrating various operations in a method according to exemplary embodiments of the present invention;
- FIG. 4 is a graph that illustrates lookup table (LUT) values for an example image, according to one example embodiment of the present invention;
- FIG. 5 is a graph that illustrates a change in image contrast due to different calibration functions, according to example embodiments of the present invention; and
- FIG. 6 is a graph that illustrates display characteristics of a PACS monitor and three ultrasound devices, according to one example embodiment of the present invention.
- The present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Further, the apparatus and method of example embodiments of the present invention will be primarily described in conjunction with medical-imaging applications. It should be understood, however, that the apparatus and method can be utilized in conjunction with a variety of other applications, both in the medical industry and outside of the medical industry. Like numbers refer to like elements throughout.
- FIG. 1 illustrates a system 10 that may benefit from exemplary embodiments of the present invention ("exemplary" as used herein referring to "serving as an example, instance or illustration"). As shown, the system includes one or more imaging modalities 12 (three example modalities being shown as modalities 12a, 12b and 12c) for acquiring an image, such as an image of the human body or parts of the human body for clinical purposes such as examination, diagnosis and/or treatment. Examples of suitable modalities include, for example, ultrasound (US), magnetic resonance (MR), positron emission tomography (PET), computed tomography (CT), mammograms (MG), digital radiology (DR), computed radiology (CR) or the like.
- The system also includes a viewing station 14 configured to receive an image from one or more modalities 12, and present the image such as for review by a medical professional such as a radiologist. In one example embodiment, the viewing station may be a picture archiving and communication system (PACS) viewing station (or workstation).
- As explained in the background section, in various instances, a modality 12 and viewing station 14 may apply different monitor calibration functions to images presented by monitors of the respective apparatuses. For example, a modality may apply the gamma correction function, while the viewing station may apply the DICOM GSDF. This difference in monitor calibration functions may lead to an undesirable visual discrepancy between an image presented by the modality, and the same image presented by the viewing station. As explained in greater detail below, the system of example embodiments of the present invention therefore further includes a computing apparatus 16 configured to transform pixel values calibrated according to a first calibration function (e.g., gamma correction) to corresponding pixel values calibrated according to a second, different calibration function (DICOM GSDF). In this manner, the computing apparatus may compensate for visual discrepancies otherwise due to the different calibration functions.
- The imaging modality 12,
viewing station 14 and/orcomputing apparatus 16 may be configured to directly and/or indirectly communicate with one another in any of a number of different manners including, for example, any of a number of wireline or wireless communication or networking techniques. Examples of such techniques include, without limitation, Universal Serial Bus (USB), radio frequency (RF), Bluetooth (BT), infrared (IrDA), any of a number of different cellular (wireless) communication techniques such as any of a number of 2G, 2.5G or 3G communication techniques, local area network (LAN), wireless LAN (WLAN) techniques or the like. In accordance with various ones of these techniques, the imaging modality,viewing station 14 and/or computing apparatus can be coupled to and configured to communicate across one or more networks. The network(s) can comprise any of a number of different combinations of one or more different types of networks, including data and/or voice networks. For example, the network(s) can include one or more data networks, such as a LAN, a metropolitan area network (MAN), and/or a wide area network (WAN) (e.g., Internet), and include one or more voice networks, such as a public-switched telephone network (PSTN). Although not shown, the network(s) may include one or more apparatuses such as one or more routers, switches or the like for relaying data, information or the like between the imaging modality, viewing station and/or computing apparatus. - Reference is now made to
FIG. 2 , which illustrates a block diagram of an apparatus 18 that may be configured to operate as or otherwise perform one or more functions of an imaging modality 12,viewing station 14 and/orcomputing apparatus 16. Although shown inFIG. 1 as separate apparatuses, in some embodiments, one or more of the respective apparatuses may support more than one of a modality, viewing station and/or computing apparatus, logically separated but co-located within the entit(ies). For example, a single apparatus may support a logically separate, but co-located modality and computing apparatus, or in another example, a single apparatus may support a logically separate, but co-located viewing station and computing apparatus. - Generally, the apparatus of exemplary embodiments of the present invention may comprise, include or be embodied in one or more fixed electronic devices, such as one or more of a laptop computer, desktop computer, workstation computer, server computer or the like. Additionally or alternatively, the apparatus may comprise, include or be embodied in one or more portable electronic devices, such as one or more of a mobile telephone, portable digital assistant (PDA), pager or the like. The apparatus of exemplary embodiments of the present invention includes various means for performing one or more functions in accordance with exemplary embodiments of the present invention, including those more particularly shown and described herein. It should be understood, however, that one or more of the entities may include alternative means for performing one or more like functions, without departing from the spirit and scope of the present invention.
- As shown in
FIG. 2, the apparatus may include a processor 20 connected to a memory 22. The memory can comprise volatile and/or non-volatile memory, and typically stores content, data or the like. In this regard, the memory may store one or more software applications 24, modules, instructions or the like for the processor to perform steps associated with operation of the apparatus in accordance with embodiments of the present invention. The memory may also store content transmitted from, and/or received by, the apparatus. As described herein, the software application(s) may each comprise software operated by the apparatus. It should be understood, however, that any one or more of the software applications described herein may alternatively be implemented by firmware, hardware or any combination of software, firmware and/or hardware, without departing from the spirit and scope of the present invention.
- In addition to the memory 22, the processor 20 can also be connected to at least one interface or other means for displaying, transmitting and/or receiving data, content or the like, such as in accordance with USB, RF, BT, IrDA, WLAN, LAN, MAN, WAN (e.g., Internet), PSTN techniques or the like. In this regard, the interface(s) can include at least one communication interface 26 or other means for transmitting and/or receiving data, content or the like. In addition to the communication interface(s), the interface(s) can also include at least one user interface that can include one or more earphones and/or speakers, a monitor 28, and/or a user input interface 30. The user input interface, in turn, can comprise any of a number of devices allowing the apparatus to receive data from a user, such as a microphone, a keypad, a touch-sensitive surface (integral or separate from the monitor), a joystick, or other input device. As will be appreciated, the processor may be directly connected to other components of the apparatus, or may be connected via suitable hardware. In one example, the processor may be connected to the monitor via a display adapter 32 configured to permit the processor to send graphical information to the monitor.
- As indicated above, the system of example embodiments of the present invention includes a
computing apparatus 16 configured to transform pixel values calibrated according to a first calibration function (e.g., gamma correction) to corresponding pixel values calibrated according to a second, different calibration function (DICOM GSDF). In this manner, the computing apparatus may compensate for visual discrepancies otherwise due to the different calibration functions. The computing apparatus may be configured to apply the transformation in any of a number of different manners. As explained below, the computing apparatus may be configured to transform pixel values using a lookup table (LUT). It should be understood, however, that the computing apparatus may be equally configured to transform pixel values using an algorithm such as that from which an appropriate LUT may be calculated. - Reference is now made to
FIG. 3 , which illustrates various operations in a method according to example embodiments of the present invention. As shown inblock 40, the method may include calculating or otherwise retrieving a LUT for transformation of pixel values calibrated according to a first calibration function (e.g., gamma correction) to corresponding pixel values calibrated according to a second, different calibration function (DICOM GSDF)—these corresponding pixel values at times being referred to as LUT values. More particularly, for example, consider pixel values calibrated according to a first calibration function. For each such pixel value, a corresponding LUT value calibrated according to a second calibration function may be determined by first determining the luminance for the pixel value calibrated according to the first calibration function, and then determining the pixel value calibrated according to the second calibration function that yields the determined luminance. The LUT may be calculated in a number of different manners, which may depend on whether the first and second calibration functions are known or unknown. Examples of calculating a LUT in each instance are presented below. - A. Known Calibration Functions
- In an instance in which the first and second calibration functions are known, the LUT may be calculated based on the respective functions. In this instance, assume for example that the first calibration function (the function of the imaging modality 12) may be described by the following representation of modality luminance (ML), and that the second calibration function (the function of the viewing station 14) may be described by the following representation of the station luminance (SL):
-
ML=G(x) (1) -
SL=F(x) (2) - In the preceding, x represents the pixel value that belongs to the domain [−2n-1, 2n-1−1] for a signed image or [0, 2n−1] for an unsigned image (n representing the number of bits of the pixel value).
- Given the above equations (1) and (2), the LUT of one example embodiment may be calculated in accordance with the following:
-
LUT=F −1(G(x)) (3) - where F−1 denotes the inverse function of F. A solution for the above expression of the LUT exists in instances in which both functions F(x) and G(x) are monotone. This is the case, for example, for both gamma and DICOM GSDF functions. In addition, the SL range may be equal or larger than the ML range to thereby produce a unique solution, which may be the case with typical PACS monitors.
- In a more particular example in which the first and second calibration functions are known, consider an instance in which the first calibration function is a gamma correction, and the second calibration function is the DICOM GSDF. In this example, the gamma function may be described as follows:
-
ML=G(x)=C×x γ +B (4) - In equation (4), C and B represent the contrast and minimum luminance (brightness) of the monitor of the modality 12, which may be set by respective monitor controls. The variable x represents a normalized pixel value between 0 and 1, which may take into account minimum and the maximum pixel values, and γ (gamma) represents the gamma value of the modality monitor. For a signed DICOM MONOCHROME2 image with pixel of range [−2n-1, 2n−1], the corresponding ML range may be (MLmin, MLmax), where MLmin=C×(−2n-1)γ+B and MLmax=C×(2n-1−1)γ+B.
- Also in this more particular example in which the second calibration function is the DICOM GSDF, the luminance of the monitor of the
viewing station 14 may be derived from the following DICOM GSDF (a monotone function): -
- In the preceding, Ln refers to the natural logarithm, and j refers to an index (1 to 1023) of luminance levels Lj of the just-noticeable differences (JND), where the JND may be considered the luminance difference of a given target under given viewing conditions that the average human observer can just perceive. In this regard, one step in the JND index j may result in a luminance difference that is a JND. Also in the preceding, the constants a-h, k and m may be set as follows: a=−1.3011877, b=−2.5840191×10−2, c=8.0242636×10−2, d=−1.0320229×10−1, e=1.3646699×10−1, f=2.8745620×10−2, g=−2.5468404×10−2, h=−3.1978977×10−3, k=1.2992634×10−4 and m=1.3635334×10−3.
- The inverse function of equation (5) is as follows:
-
j(L)=A+B·Log10(L)+C·(Log10(L))2 +D·(Log10(L))3 +E·(Log10(L))4 +F·(Log10(L))5 +G·(Log10(L))6 +H·(Log10(L))7 +I·(Log10(L))8 (6) - In equation (6), Log10 represents a logarithm to the
base 10, and the constants A-I may be set as follows: A=71.498068, B=94.593053, C=41.912053, D=9.8247004, E=0.28175407, F=−1.1878455, G=−0.18014349, H=0.14710899 and I=—0.017046845. - Equation (6) permits computing discrete JNDs for the modality luminance range (MLmin, MLmax) as jmin=j(MLmin) and jmax=j(MLmax). In this regard, for a signed image, the span of j values may range from jmin for pixel value x=−2n-1, jmax for pixel value x=2n-1−1, such as according to the following:
-
- For an unsigned image, the span of j values may range from jmin for pixel value x=0, to jmax for pixel value x=2−1−1, such as according to the following:
-
- The corresponding luminance values of each j(x), then, can be calculated by equation (5) as L(j(x)).
- In order to calculate the LUT, for each pixel value x calibrated according to the first calibration function, the corresponding LUT value may be found. This may be achieved by being given or otherwise determining the minimum and maximum luminance values (MLmin and MLmax) and the gamma value (γ) of the monitor of the modality 12, which can be used to determine the parameters C and B of equation (4). Alternatively, in an instance in which the luminance range of the
viewing station 14 monitor (SLmin and SLmax) is known, and the gamma of the modality monitor is known, these values may be substituted in equation (4) to determine the parameters C and B. In another alternative, in an instance in which the parameters C and B and the gamma value of the modality monitor are given, these parameters may be used to determine the minimum and maximum luminance values (MLmin and MLmax). In any instance, by substitution of MLmin and MLmax (or SLmin and SLmax) in equation (6), one may determine the values of jmin and jmax. - For each pixel value x, equation (4) may be used to determine G(x), which may be substituted into equation (6) to determine j(G(x)). The value j(G(x)) may be substituted into equation (7) or equation (8) (depending on the signed/unsigned nature of the image) to determine the value x as the corresponding LUT value. Written notationally, for a signed image, the LUT value may be determined from value j(G(x)) in accordance with the following:
-
- Or for an unsigned image, the LUT value may be determined from value j(G(x)) in accordance with the following:
-
- As an example, consider the
computing apparatus 16 being configured to transform pixel values calibrated according to gamma correction to corresponding pixel values calibrated according to DICOM GSDF. Further consider that the modality monitor (gamma calibrated) has the following parameter values: MLmin=B=0.4 cd/m2, MLmax=178 cd/m2, and γ=2.2. In this example, the LUT values for an 8-bit (n=8) unsigned image may be represented as in the graph ofFIG. 4 . - B. Unknown Calibration Functions
- In an instance in which either or both of the first or second calibration functions are unknown, the LUT may be calculated by looking to the display characteristic curves of the modality 12 and
viewing station 14, each of which may be determined or otherwise measured by means of a quantitative procedure. The display characteristic curve for a monitor may define the relationship between luminance and pixel values (equation (1) and (2) above). One may, for example, use TG18-LN test patterns for this purpose. These test patterns are provided by the American Association of Physicists in Medicine (AAPM), task group (TG) 18, and may be imported to the modality and sent to the viewing station to mimic an image workflow. By using test patterns such as the TG18-LN test patterns, a number of distinct luminance levels may be measured and the remaining luminance values may be interpolated, such as according to a cubic spline. The interpolated display characteristic curves of the modality and viewing station may then be used to determine the LUT, such as by taking equation (3) into consideration. - More particularly, for example, assume that the modality and viewing station transfer functions are measured and described by the following tabulated functions:
-
ML i =G(x i) (11) -
SL j =F(x j) (12) - In equations (11) and (12), xi represents a pixel value calibrated according to the first calibration function, and xj represents a pixel value calibrated according to the second calibration function, each of which may be in the range [−2n-1, 2n-1−1] for a signed image or [0, 2n−1] for an unsigned image. MLi represents the modality luminance for its pixel value xi, and SLj represents the viewing station luminance for its pixel value xj. The LUT may then be calculated according to the following pseudo-algorithm for a signed image:
-
For each integer i from range [−2^(n−1), 2^(n−1)−1] do
    For each integer j from range [−2^(n−1), 2^(n−1)−1] do
        Find a single SLj where |MLi − SLj| has the minimum value
        Save (xi, xj)
A similar pseudo-algorithm may be implemented for an unsigned image by appropriately adjusting the range of i and j values. After execution of above algorithm, the LUT may be uniquely defined by the set of (xi, xj) values. - Returning to
FIG. 3, after calculating or otherwise retrieving the LUT, the method may include receiving an image including a plurality of pixels, each of which has a pixel value calibrated according to the first calibration function, as shown in block 42. The image may be formatted in any of a number of different manners, such as in accordance with the DICOM standard. The method may include applying the LUT to determine, for each pixel value of each pixel of the image, a corresponding pixel value (LUT value) calibrated according to the second calibration function, as shown in block 44. The thus-transformed pixel values may then be further processed, as appropriate, and output to the monitor of the viewing station 14 for display, as shown in block 46. - The LUT may be applied in any of a number of different manners, and at a number of different locations in an image workflow from a modality 12 to
viewing station 14. In one example in which the second calibration function is the DICOM GSDF, the LUT may be applied as a presentation LUT or value-of-interest LUT in accordance with appropriate DICOM standards. Also, the LUT may be applied by the computing apparatus 16, which may be implemented as part of the imaging modality 12 or viewing station 14, or as a separate device between the modality and viewing station. The computing apparatus (separate or part of the modality) may be configured to add the LUT as a presentation LUT. The LUT values may be burned or otherwise stored with the respective pixels of the image, or the LUT may be passed through with the image. In another example in which the computing apparatus is implemented at the viewing station, image viewer software on the viewing station may be configurable to apply the LUT as a presentation LUT. - In various instances, particularly when the
computing apparatus 16 is implemented separately from the imaging modality 12, the computing apparatus may be configured to apply different LUTs for different types of modalities (e.g., different ones of the modalities 12). - In one example in which the imaging modalities 12 and
computing apparatus 16 are coupled to and configured to communicate with one another across a network, each imaging modality may have a respective network address (e.g., IP address). In this example, the computing apparatus may store or otherwise have access to a table that associates the network addresses of the modalities with their modality types. When the computing apparatus receives an image across the network, the image or a record referring to the image may identify the network address of its source modality. The computing apparatus may then consult the table based upon the source network address to identify the type of the respective modality. - In another example, the image may be formatted to include a header with one or more tags including respective acquisition parameters that refer to the name of the modality 12 that acquired the image (source modality), its software version or the like. In this other example, the
computing apparatus 16 may store or otherwise have access to a table that associates acquisition parameters (tags) with modality types, or may be set up to operate according to logic that specifies application of the LUT in instances in which an image has parameters (tags) with particular values. When the computing apparatus receives an image, the computing apparatus may be configured to analyze the image's header and its parameters (tags), and apply the LUT in accordance with the table or logic. - In a more particular example, a DICOM image may include a tag (0008, 0070) that identifies the "Manufacturer ID," tag (0008, 1090) that identifies the "Manufacturer's Model Name ID," tag (0008, 0060) that identifies the "Modality ID" and tag (0008, 1010) that includes a name identifying the machine that produced the image. In this example, the
computing apparatus 16 may be configured to apply the LUT in instances in which the computing apparatus receives an image from US (ultrasound) modality X with the name “koko” and software version “12.1.1.” When the computing apparatus receives a DICOM image for display, the computing apparatus may be configured to analyze the image's header and its parameters. In an instance in which the “Modality ID” is US and “Manufacturer's Model Name ID” is “koko,” the computing apparatus may apply the LUT; otherwise, the computing apparatus may forego application of the LUT. - To further illustrate example embodiments of the present invention, consider a hypothetical image workflow investigation in which the same image was displayed on an image-modality monitor with gamma calibration, and on a PACS viewing station with DICOM GSDF calibration. This investigation indicated that there would be no visual consistency in image display on monitors that were calibrated to gamma and DICOM GSDF calibration functions. As an example,
FIG. 5 illustrates the derivatives of equations (4) and (5) over a luminance range of [0.36, 177] cd/m² and a pixel range of [0, 255]. Note that the vertical axis is the degree of luminance change (image contrast d(L)/d(x)), and it is in a logarithmic scale. The derivative of equation (5) (DICOM GSDF) is close to a straight line, indicating perceptual linearity (the thick line in the figure), while the derivative of equation (4) (gamma calibration) is not a line but a logarithmic function on the logarithmic scale. - Even further, consider tests of a real-world workflow in which three ultrasound devices using gamma calibration, denoted USX, USY and USZ, sent images to a PACS viewing station connected to a monitor calibrated to DICOM GSDF. In these tests, patterns similar to TG18-LN were used, and the display characteristic curves of the ultrasound devices and the PACS monitor were measured (and completed by cubic spline interpolation). The gamma values (γ) of the ultrasound device monitors were estimated by minimizing the mean square error (MSE) between equation (4) and the measured display characteristic values. The PACS monitor was a color LG 1200ME monitor with a resolution of 1280×1024 and a color temperature set to 6500 K. Its display characteristic curve was evaluated against the ideal DICOM GSDF function.
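- By way of illustration only (and not as part of the original disclosure), the gamma estimation described above may be sketched in Python; the exact form of equation (4) is not reproduced in this text, so the sketch assumes the common three-parameter model L(x) = B + C·(x/255)^γ for an 8-bit display, and all names are illustrative:

import numpy as np
from scipy.optimize import curve_fit

def gamma_model(x, b, c, gamma):
    # Assumed gamma calibration curve for an 8-bit display:
    # L(x) = B + C * (x / 255)**gamma, in cd/m^2.
    return b + c * (x / 255.0) ** gamma

def estimate_gamma(pixel_values, measured_luminance):
    """Fit B, C and gamma to measured display-characteristic samples by
    least squares and report the resulting mean square error (MSE)."""
    p0 = (measured_luminance.min(),                             # black level
          measured_luminance.max() - measured_luminance.min(),  # luminance span
          2.2)                                                  # typical starting gamma
    (b, c, gamma), _ = curve_fit(gamma_model, pixel_values, measured_luminance, p0=p0)
    residuals = gamma_model(pixel_values, b, c, gamma) - measured_luminance
    return gamma, float(np.mean(residuals ** 2))

# Usage with a synthetic curve shaped like device USY from the table below:
px = np.linspace(0, 255, 18)
lum = 0.16 + (94.71 - 0.16) * (px / 255.0) ** 2.23
print(estimate_gamma(px, lum))  # recovers a gamma of about 2.23 with near-zero MSE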
- The above real-world workflow investigation provided the following results for the measured minimum and maximum luminance values, gamma values (γ) and MSEs:
Device | Minimum L (cd/m²) | Maximum L (cd/m²) | Estimated γ | MSE
---|---|---|---|---
PACS | 0.36 | 177.52 | NA | 0.268
USX | 0.18 | 153.08 | 2.27 | 0.261
USY | 0.16 | 94.71 | 2.23 | 0.036
USZ | 0.26 | 95.00 | 2.42 | 0.189
The deviation of the PACS monitor from the DICOM GSDF ideal curve was on average 5.5%, with minimum and maximum deviations of 0.5% and 12.3%, respectively. These measurements showed that the monitors were indeed calibrated according to the respective functions (gamma and DICOM GSDF), within acceptable tolerance of the theoretical models. - The display characteristics of all four monitors in this example are shown in
FIG. 6. As FIG. 6 illustrates, for smaller pixel values (<32), the PACS monitor produces more light. Pixel values in this smaller range are usually a noise contribution in ultrasound images, and physicians often prefer noise to be displayed closer to pure black (minimum luminance). In addition, the derivatives of the display characteristics of the ultrasound devices and the PACS monitor show a discrepancy in image contrast very similar to the theoretical discrepancy shown in FIG. 5. - According to one aspect of the present invention, all or a portion of the modality 12,
viewing station 14 and/or computing apparatus 16 of exemplary embodiments of the present invention, generally operate under control of a computer program. The computer program for performing the methods of exemplary embodiments of the present invention may include one or more computer-readable program code portions, such as a series of computer instructions, embodied or otherwise stored in a computer-readable storage medium, such as the non-volatile storage medium. -
FIG. 3 is a flowchart reflecting methods, systems and computer programs according to exemplary embodiments of the present invention. It will be understood that each block or step of the flowchart, and combinations of blocks in the flowchart, may be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine, such that the instructions which execute on the computer or other programmable apparatus (e.g., hardware) create means for implementing the functions specified in the block(s) or step(s) of the flowchart. These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the block(s) or step(s) of the flowchart. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the block(s) or step(s) of the flowchart. - Accordingly, blocks or steps of the flowchart support combinations of means for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that one or more blocks or steps of the flowchart, and combinations of blocks or steps in the flowchart, may be implemented by special purpose hardware-based computer systems which perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
- Many modifications and other embodiments of the invention will come to mind to one skilled in the art to which this invention pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. It should therefore be understood that the invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
Claims (21)
ML = G(x)
SL = F(x)
LUT = F⁻¹(G(x))
MLi = G(xi)
SLj = F(xj)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/044,122 US8896619B2 (en) | 2011-03-09 | 2011-03-09 | Apparatus, method and computer-readable storage medium for compensating for image-quality discrepancies |
Publications (2)
Publication Number | Publication Date |
---|---|
US20120229490A1 true US20120229490A1 (en) | 2012-09-13 |
US8896619B2 US8896619B2 (en) | 2014-11-25 |
Family ID: 46795128
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/044,122 Active 2033-08-25 US8896619B2 (en) | 2011-03-09 | 2011-03-09 | Apparatus, method and computer-readable storage medium for compensating for image-quality discrepancies |
Country Status (1)
Country | Link |
---|---|
US (1) | US8896619B2 (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014108731A2 (en) * | 2012-12-21 | 2014-07-17 | Calgary Scientific Inc. | Dynamic generation of test images for ambient light testing |
CN104143322A (en) * | 2013-05-10 | 2014-11-12 | 乐金显示有限公司 | Display apparatus and display apparatus control method |
US20150278442A1 (en) * | 2014-03-27 | 2015-10-01 | Mckesson Financial Holdings | Apparatus, method and computer-readable storage medium for transforming digital images |
WO2016106999A1 (en) * | 2014-12-30 | 2016-07-07 | 南京巨鲨显示科技有限公司 | Method for automatically identifying and calibrating medical color and gray-scale images |
US9411549B2 (en) | 2012-12-21 | 2016-08-09 | Calgary Scientific Inc. | Dynamic generation of test images for ambient light testing |
US9514274B2 (en) * | 2014-11-29 | 2016-12-06 | Infinitt Healthcare Co., Ltd. | Layered medical image forming, receiving, and transmitting methods |
US9520075B2 (en) * | 2013-03-25 | 2016-12-13 | Lg Display Co., Ltd. | Image processing method for display apparatus and image processing apparatus |
CN107146565A (en) * | 2017-04-26 | 2017-09-08 | 广州视源电子科技股份有限公司 | Method and device for calibrating Gamma curve of display based on DICOM curve |
JP7475190B2 (en) | 2020-04-27 | 2024-04-26 | シャープセミコンダクターイノベーション株式会社 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND CONTROL PROGRAM |
WO2024138793A1 (en) * | 2022-12-28 | 2024-07-04 | 深圳市巨烽显示科技有限公司 | Image correction method and apparatus based on fpga, and device and medium |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6039316B2 (en) * | 2011-12-06 | 2016-12-07 | キヤノン株式会社 | Image output device, control method thereof, image display device, control method thereof, and storage medium |
CN105069453B (en) * | 2015-08-12 | 2019-03-05 | 青岛海信电器股份有限公司 | A kind of method for correcting image and device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040196250A1 (en) * | 2003-04-07 | 2004-10-07 | Rajiv Mehrotra | System and method for automatic calibration of a display device |
US20050174309A1 (en) * | 2003-12-23 | 2005-08-11 | Luc Bouwens | Colour calibration of emissive display devices |
US20070055143A1 (en) * | 2004-11-26 | 2007-03-08 | Danny Deroo | Test or calibration of displayed greyscales |
US20110182501A1 (en) * | 2008-09-05 | 2011-07-28 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Method for recognizing shapes and system implementing said method |
US20110257519A1 (en) * | 2010-04-16 | 2011-10-20 | Oslo Universitetssykehus Hf | Estimating and correcting for contrast agent extravasation in tissue perfusion imaging |
US20130187958A1 (en) * | 2010-06-14 | 2013-07-25 | Barco N.V. | Luminance boost method and system |
US20130287313A1 (en) * | 2010-12-21 | 2013-10-31 | Cédric Fabrice Marchessoux | Method and system for improving the visibility of features of an image |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5818624A (en) | 1995-12-27 | 1998-10-06 | Patterson; John | Mask for viewing reduced-size radiographic film, particularly mammography film |
JP2003334183A (en) | 2002-03-11 | 2003-11-25 | Fuji Photo Film Co Ltd | Abnormal shadow-detecting device |
US7469160B2 (en) | 2003-04-18 | 2008-12-23 | Banks Perry S | Methods and apparatus for evaluating image focus |
US7459696B2 (en) | 2003-04-18 | 2008-12-02 | Schomacker Kevin T | Methods and apparatus for calibrating spectral data |
US7136518B2 (en) | 2003-04-18 | 2006-11-14 | Medispectra, Inc. | Methods and apparatus for displaying diagnostic data |
CN101542525B (en) | 2006-08-02 | 2012-12-05 | 皇家飞利浦电子股份有限公司 | 3D segmentation by voxel classification based on intensity histogram thresholding initialized by K-means clustering |
DE102009042129A1 (en) | 2008-12-22 | 2010-07-22 | Siemens Aktiengesellschaft | Method for the differentiation of gray and white brain substance and CT system for carrying out the method |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9411549B2 (en) | 2012-12-21 | 2016-08-09 | Calgary Scientific Inc. | Dynamic generation of test images for ambient light testing |
WO2014108731A3 (en) * | 2012-12-21 | 2014-11-13 | Calgary Scientific Inc. | Dynamic generation of test images for ambient light testing |
WO2014108731A2 (en) * | 2012-12-21 | 2014-07-17 | Calgary Scientific Inc. | Dynamic generation of test images for ambient light testing |
US9520075B2 (en) * | 2013-03-25 | 2016-12-13 | Lg Display Co., Ltd. | Image processing method for display apparatus and image processing apparatus |
CN104143322A (en) * | 2013-05-10 | 2014-11-12 | 乐金显示有限公司 | Display apparatus and display apparatus control method |
US9401125B2 (en) | 2013-05-10 | 2016-07-26 | Lg Display Co., Ltd. | Display apparatus and display apparatus control method |
US20150278442A1 (en) * | 2014-03-27 | 2015-10-01 | Mckesson Financial Holdings | Apparatus, method and computer-readable storage medium for transforming digital images |
US9626476B2 (en) * | 2014-03-27 | 2017-04-18 | Change Healthcare Llc | Apparatus, method and computer-readable storage medium for transforming digital images |
US9514274B2 (en) * | 2014-11-29 | 2016-12-06 | Infinitt Healthcare Co., Ltd. | Layered medical image forming, receiving, and transmitting methods |
WO2016106999A1 (en) * | 2014-12-30 | 2016-07-07 | 南京巨鲨显示科技有限公司 | Method for automatically identifying and calibrating medical color and gray-scale images |
US10109068B2 (en) | 2014-12-30 | 2018-10-23 | Nanjiang Jusha Display Technology Co., Ltd. | Method of automatic identification and calibration of color and grayscale medical images |
CN107146565A (en) * | 2017-04-26 | 2017-09-08 | 广州视源电子科技股份有限公司 | Method and device for calibrating Gamma curve of display based on DICOM curve |
JP7475190B2 (en) | 2020-04-27 | 2024-04-26 | シャープセミコンダクターイノベーション株式会社 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND CONTROL PROGRAM |
WO2024138793A1 (en) * | 2022-12-28 | 2024-07-04 | 深圳市巨烽显示科技有限公司 | Image correction method and apparatus based on fpga, and device and medium |
Also Published As
Publication number | Publication date |
---|---|
US8896619B2 (en) | 2014-11-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8896619B2 (en) | Apparatus, method and computer-readable storage medium for compensating for image-quality discrepancies | |
US9626476B2 (en) | Apparatus, method and computer-readable storage medium for transforming digital images | |
Norweck et al. | ACR–AAPM–SIIM technical standard for electronic practice of medical imaging | |
US9280943B2 (en) | Devices and methods for reducing artefacts in display devices by the use of overdrive | |
US8890906B2 (en) | Method and system for remotely calibrating display of image data | |
EP3262630B1 (en) | Steady color presentation manager | |
US20080123918A1 (en) | Image processing apparatus | |
US10235968B2 (en) | Medical image display apparatus, medical image adjustment method and recording medium | |
Bevins et al. | Practical application of AAPM Report 270 in display quality assurance: a report of Task Group 270 | |
US8867863B2 (en) | Presentation and manipulation of high depth images in low depth image display systems | |
JP2004159986A (en) | Liquid crystal display device | |
WO2009104584A1 (en) | Quality management system and quality management program of image display system | |
Salazar et al. | DICOM gray-scale standard display function: clinical diagnostic accuracy of chest radiography in medical-grade gray-scale and consumer-grade color displays | |
JP5346718B2 (en) | Monitor management system and brightness information management device | |
US9858892B2 (en) | Method and computing device for identifying a pixel visibility loss condition | |
JP6039316B2 (en) | Image output device, control method thereof, image display device, control method thereof, and storage medium | |
Seto et al. | Image quality assurance of soft copy display systems | |
US20120242682A1 (en) | Apparatus and method for displaying image | |
Thompson et al. | Practical assessment of the display performance of radiology workstations | |
WO2014162555A1 (en) | Display device, greyscale correction device, greyscale correction method, and greyscale correction program | |
JP2006061601A (en) | Medical image display device, medical image display system and medical image display program | |
JP2003337935A (en) | Image display system | |
McIlgorm et al. | Quality of'commercial-off-the-shelf'(COTS) monitors displaying dental radiographs | |
Jones | Utilization of DICOM GSDF to modify lookup tables for images acquired on film digitizers | |
JP2022129407A (en) | Image display system, server device, and image display method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MCKESSON FINANCIAL HOLDINGS, BERMUDA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:REZAEE, MAHMOUD RAMZE;REEL/FRAME:025928/0574 Effective date: 20110307 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
AS | Assignment |
Owner name: MCKESSON FINANCIAL HOLDINGS UNLIMITED COMPANY, BERMUDA Free format text: CHANGE OF NAME;ASSIGNOR:MCKESSON FINANCIAL HOLDINGS;REEL/FRAME:041329/0879 Effective date: 20161130 Owner name: MCKESSON FINANCIAL HOLDINGS UNLIMITED COMPANY, BER Free format text: CHANGE OF NAME;ASSIGNOR:MCKESSON FINANCIAL HOLDINGS;REEL/FRAME:041329/0879 Effective date: 20161130 |
|
AS | Assignment |
Owner name: MCKESSON CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MCKESSON FINANCIAL HOLDINGS UNLIMITED COMPANY;REEL/FRAME:041355/0408 Effective date: 20161219 |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: SECURITY AGREEMENT;ASSIGNORS:CHANGE HEALTHCARE HOLDINGS, LLC;CHANGE HEALTHCARE, INC.;CHANGE HEALTHCARE HOLDINGS, INC.;AND OTHERS;REEL/FRAME:041858/0482 Effective date: 20170301 Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH Free format text: SECURITY AGREEMENT;ASSIGNORS:CHANGE HEALTHCARE HOLDINGS, LLC;CHANGE HEALTHCARE, INC.;CHANGE HEALTHCARE HOLDINGS, INC.;AND OTHERS;REEL/FRAME:041858/0482 Effective date: 20170301 |
|
AS | Assignment |
Owner name: PF2 IP LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MCKESSON CORPORATION;REEL/FRAME:041938/0501 Effective date: 20170301 |
|
AS | Assignment |
Owner name: CHANGE HEALTHCARE LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PF2 IP LLC;REEL/FRAME:041966/0356 Effective date: 20170301 |
|
AS | Assignment |
Owner name: CHANGE HEALTHCARE LLC, GEORGIA Free format text: CHANGE OF ADDRESS;ASSIGNOR:CHANGE HEALTHCARE LLC;REEL/FRAME:042082/0061 Effective date: 20170323 |
|
AS | Assignment |
Owner name: CHANGE HEALTHCARE HOLDINGS, LLC, TENNESSEE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHANGE HEALTHCARE LLC;REEL/FRAME:046449/0899 Effective date: 20180414 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551) Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
AS | Assignment |
Owner name: CHANGE HEALTHCARE HOLDINGS, LLC, MINNESOTA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:061620/0054 Effective date: 20221003 Owner name: CHANGE HEALTHCARE TECHNOLOGIES, LLC (FORMERLY KNOWN AS MCKESSON TECHNOLOGIES LLC), MINNESOTA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:061620/0054 Effective date: 20221003 Owner name: CHANGE HEALTHCARE HOLDINGS, INC., MINNESOTA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:061620/0054 Effective date: 20221003 Owner name: CHANGE HEALTHCARE OPERATIONS, LLC, MINNESOTA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:061620/0054 Effective date: 20221003 Owner name: CHANGE HEALTHCARE PERFORMANCE, INC. (FORMERLY KNOWN AS CHANGE HEALTHCARE, INC.), MINNESOTA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:061620/0054 Effective date: 20221003 Owner name: CHANGE HEALTHCARE SOLUTIONS, LLC, MINNESOTA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:061620/0054 Effective date: 20221003 Owner name: CHANGE HEALTHCARE RESOURCES, LLC (FORMERLY KNOWN AS ALTEGRA HEALTH OPERATING COMPANY LLC), MINNESOTA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:061620/0054 Effective date: 20221003 |