CN117830183B - Tone mapping method and related device for CT image

Publication number: CN117830183B (granted); earlier published as CN117830183A
Application number: CN202410245489.2A
Country: China (CN)
Original language: Chinese (zh)
Inventors: 王鹏, 张芳俊, 桂楷钦
Applicant and assignee: Lima Precision Measurement Technology Suzhou Co., Ltd.
Legal status: Active (granted)
Classification: Image Processing

Abstract

The application provides a tone mapping method and a related device for CT images. The tone mapping method comprises the following steps: performing convolution pyramid mapping on a high dynamic range image to obtain a corresponding low dynamic range image; judging whether the gray level of the low dynamic range image is smaller than a first gray threshold; if it is smaller than the first gray threshold, outputting the low dynamic range image; if it is not smaller than the first gray threshold, further judging whether the gray level of the low dynamic range image is smaller than a second gray threshold; if the gray level of the low dynamic range image is smaller than the second gray threshold, performing gradient pyramid mapping on the low dynamic range image and outputting the corresponding mapping result; and if the gray level of the low dynamic range image is not smaller than the second gray threshold, performing human eye model mapping on the low dynamic range image and outputting the corresponding mapping result. In this way, whether secondary tone mapping is needed can be judged adaptively, and a suitable secondary tone mapping mode can be selected, so that the mapping effect is ensured.

Description

Tone mapping method and related device for CT image
Technical Field
The present application relates to the field of image processing, and in particular, to a tone mapping method and related apparatus for CT images.
Background
In the related art, the dynamic range of a high dynamic range (High Dynamic Range, HDR) image acquired using X-rays can reach 16 bits or 24 bits. However, most displays still support only 8-bit display; a small number of special displays with increased brightness can support 10-bit or 12-bit display, and such special displays are not suitable for long-term observation by the human eye.
In order to solve the problem that the dynamic range of the display is smaller than that of the high dynamic range image, a "window level/window width" approach is generally adopted in the related art to reduce the dynamic range of the high dynamic range image; in practice, however, the observation result is easily limited by the experience of the operator.
In order to display the internal structure of the sample more reliably and present more detailed information about the sample, tone mapping needs to be performed on the high dynamic range image acquired by X-rays; that is, the high dynamic range image needs to be processed into a dynamic range that a low dynamic range device can display.
Disclosure of Invention
The embodiments of the present application at least provide a tone mapping method for CT images, which can adaptively judge, based on the convolution pyramid mapping result, whether secondary tone mapping is needed, and, when secondary tone mapping is needed, adaptively judge whether gradient pyramid mapping or human eye model mapping should be adopted, so that the mapping effect can be ensured.
In a first aspect, the present application provides a tone mapping method for a CT image, comprising:
performing convolution pyramid mapping on the high dynamic range image to obtain a corresponding low dynamic range image; the brightness of each pixel in the low dynamic range image belongs to the displayable dynamic range;
judging whether the gray level of the low dynamic range image is smaller than a first gray threshold; the first gray threshold is used for judging whether the contrast of the low dynamic range image meets the output requirement;
outputting the low dynamic range image if the gray level of the low dynamic range image is smaller than the first gray threshold;
if the gray level of the low dynamic range image is not smaller than the first gray threshold, judging whether the gray level of the low dynamic range image is smaller than a second gray threshold; the second gray threshold is used for determining a secondary tone mapping mode of the low dynamic range image;
if the gray level of the low dynamic range image is smaller than the second gray threshold, performing gradient pyramid mapping on the low dynamic range image and outputting a corresponding mapping result;
and if the gray level of the low dynamic range image is not smaller than the second gray threshold, performing human eye model mapping on the low dynamic range image and outputting a corresponding mapping result.
In a second aspect, the present application also provides a tone mapping apparatus for CT images, comprising:
the first mapping unit is used for carrying out convolution pyramid mapping on the high dynamic range image to obtain a corresponding low dynamic range image; the brightness of each pixel in the low dynamic range image belongs to the displayable dynamic range;
A first judging unit for judging whether the gray level of the low dynamic range image is smaller than a first gray level threshold value; the first gray threshold is used for judging whether the contrast ratio of the low dynamic range image meets the output requirement;
an output unit configured to output the low dynamic range image when the gray level of the low dynamic range image is smaller than the first gray threshold;
a second judging unit configured to judge whether the gray level of the low dynamic range image is smaller than a second gray threshold when the gray level of the low dynamic range image is not smaller than the first gray threshold; the second gray threshold is used for determining a secondary tone mapping mode of the low dynamic range image;
The second mapping unit is used for carrying out gradient pyramid mapping on the low dynamic range image when the gray level of the low dynamic range image is smaller than a second gray level threshold value, and outputting a corresponding mapping result;
and the third mapping unit is used for performing human eye model mapping on the low dynamic range image and outputting a corresponding mapping result when the gray level of the low dynamic range image is not smaller than the second gray level threshold value.
In a third aspect, the present application also provides an electronic device, comprising a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the tone mapping method for CT images provided by the present application.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the tone mapping method for CT images provided by the present application.
In a fifth aspect, the present application also provides a computer program product comprising a computer program which, when run by a processor, performs the tone mapping method for CT images provided by the present application.
In summary, the present application provides a tone mapping method and a related device for CT images. The method comprises: first, performing convolution pyramid mapping on the high dynamic range image to obtain a corresponding low dynamic range image, wherein the brightness of each pixel in the low dynamic range image obtained by convolution pyramid mapping belongs to the displayable dynamic range; judging whether the gray level of the low dynamic range image is smaller than a first gray threshold, where the first gray threshold is used for judging whether the contrast of the low dynamic range image meets the output requirement; if the gray level is smaller than the first gray threshold, the low dynamic range image meets the output requirement and can be output; if it is not smaller than the first gray threshold, the low dynamic range image does not meet the output requirement, and it is further judged whether the gray level of the low dynamic range image is smaller than a second gray threshold, where the second gray threshold is used for determining the secondary tone mapping mode of the low dynamic range image; if the gray level of the low dynamic range image is smaller than the second gray threshold, gradient pyramid mapping, which is suitable for the low dynamic range image, can be adopted and the corresponding mapping result output; if the gray level of the low dynamic range image is not smaller than the second gray threshold, human eye model mapping, which is suitable for images with a higher gray level, can be adopted and the corresponding mapping result output. In this way, whether secondary tone mapping is needed can be judged adaptively based on the convolution pyramid mapping result, and, when secondary tone mapping is needed, whether gradient pyramid mapping or human eye model mapping should be adopted can also be judged adaptively, so that the mapping effect can be ensured.
Other advantages of the present application will be explained in more detail with reference to the following description and accompanying drawings.
It should be understood that the foregoing description is only an overview of the technical solutions of the present application, so that the technical means of the present application may be generally understood and implemented in accordance with the content of the specification. The following specific embodiments of the present application are described in detail to make the above and other objects, features and advantages of the present application more comprehensible.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is appreciated that the drawings illustrate only certain embodiments of the application and are therefore not to be considered limiting of its scope, for the application may admit to other equally relevant drawings without inventive effort by those of ordinary skill in the art. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of a tone mapping method for CT images according to an embodiment of the present application;
FIG. 2 is a specific flowchart of a tone mapping method for CT images according to an embodiment of the present application;
FIG. 3 is a graph showing tone mapping results according to an embodiment of the present application;
FIG. 4 is a graph showing another tone mapping result according to an embodiment of the present application;
fig. 5 is a schematic diagram of an apparatus for tone mapping of CT images according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the application are shown in the drawings, it should be understood that the application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
In describing embodiments of the present application, it will be understood that terms, such as "comprises" or "comprising," and the like, are intended to indicate the presence of the disclosed features, numbers, steps, acts, components, portions, or combinations thereof in the present specification, and do not preclude the presence or addition of one or more other features, numbers, steps, acts, components, portions, or combinations thereof.
Unless otherwise indicated, "/" means or, e.g., A/B may represent A or B; "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone.
The terms "first," "second," and the like are used merely for convenience of description to distinguish between the same or similar technical features, and are not to be construed as indicating or implying a relative importance or quantity of such technical features. Thus, a feature defined by "first," "second," etc. may explicitly or implicitly include one or more such feature. In describing embodiments of the application, the term "plurality" means two or more unless otherwise indicated.
In the related art, the dynamic range of a high dynamic range (High Dynamic Range, HDR) image acquired using X-rays can reach 16 bits or 24 bits. However, most displays still support only 8-bit display; a small number of special displays with increased brightness can support 10-bit or 12-bit display, and such special displays are not suitable for long-term observation by the human eye.
In order to solve the problem that the dynamic range of the display is smaller than that of the high dynamic range image, a "window level/window width" approach is generally adopted in the related art to reduce the dynamic range of the high dynamic range image; in practice, however, the observation result is easily limited by the experience of the operator.
In order to display the internal structure of the sample more reliably and present more detailed information about the sample, tone mapping needs to be performed on the high dynamic range image acquired by X-rays; that is, the high dynamic range image needs to be processed into a dynamic range that a low dynamic range device can display.
Tone mapping can be classified into local tone mapping and global tone mapping: global tone mapping only needs to consider the information of each pixel in the image, whereas local tone mapping also needs to consider the information around each pixel.
Because global tone mapping does not consider the information around each pixel, its operation speed is high but its mapping effect is relatively poor; local tone mapping does consider the information around each pixel, so its mapping effect is better.
Local tone mapping mainly includes logarithmic mapping, Reinhard tone mapping, convolution pyramid mapping, gradient pyramid mapping, human eye model mapping and the like, each with its own drawbacks. Specifically, logarithmic mapping handles images with a medium dynamic range well, but when it tries to compress a high dynamic range image, the retention of the global contrast of the high dynamic range image is low, the resulting mapping is very dark or very bright, and the mapping effect is poor; Reinhard tone mapping tends to brighten the image and maps the dark parts of the image poorly; convolution pyramid mapping inevitably produces halo artifacts when processing non-monotonically mapped images, and removing these artifacts causes the loss of other detail features of the image; the gradient pyramid is better suited to scenes in which most of the contrast to be compressed lies at low frequencies, and although it considers the contrast of larger pixel neighborhoods and the global contrast of multiple Gaussian pyramid layers, the mapped image is still blurred; the human eye model can map only a limited high dynamic range, is time-consuming, and tends to overexpose the brightness at the center of the mapped image.
In view of this, the present application provides a tone mapping method for CT images, which can adaptively determine, based on the convolution pyramid mapping result, whether secondary tone mapping is required, and, when secondary tone mapping is required, adaptively determine whether gradient pyramid mapping or human eye model mapping is adopted, so as to ensure the mapping effect.
The tone mapping method for the CT image provided by the embodiment of the present application may be implemented by a computer device, where the computer device may be a terminal device or a server, where the server may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides a cloud computing service. Terminal devices include, but are not limited to, cell phones, computers, intelligent voice interaction devices, intelligent home appliances, vehicle terminals, aircraft, and the like. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
The following describes a tone mapping method for CT images according to an embodiment of the present application. As shown in fig. 1, fig. 1 is a flowchart of a tone mapping method for CT images according to an embodiment of the present application, where the aforementioned computer device may be a server, and the method includes:
S101, performing convolution pyramid mapping on the high dynamic range image to obtain a corresponding low dynamic range image.
The dynamic range refers to the adaptability of the camera to the light reflected in the shooting scene, and specifically refers to the variation range of brightness.
A high dynamic range image refers to a computed tomography (Computed Tomography, CT) image, acquired by X-rays or the like, whose pixels can reach 16 bits or 24 bits.
Since prior art displays typically only support 8-bit displays, in order to address the problem of the dynamic range of the display being smaller than the dynamic range of the high dynamic range image, tone mapping of the high dynamic range image, i.e., processing the high dynamic range image into a dynamic range that can be displayed by a low dynamic range device, is required.
Because convolution pyramid mapping can perform feature extraction, it can obtain the local information of each pixel in the image while obtaining the global information of the image through the pyramid, and its mapping effect is good; the server therefore performs convolution pyramid mapping on the high dynamic range image to obtain the corresponding low dynamic range image.
It should be noted that the brightness of each pixel in the low dynamic range image belongs to the displayable dynamic range. The displayable dynamic range may be 8 bits, i.e., each pixel in the low dynamic range image may be 8 bits; in other words, after convolution pyramid mapping, the 16-bit high dynamic range image is processed into an 8-bit low dynamic range image.
In one possible implementation manner, performing convolution pyramid mapping on the high dynamic range image in S101 to obtain a corresponding low dynamic range image, including:
Converting the high dynamic range image to a luminance space;
Determining a local adaptive luminance for each pixel in the high dynamic range image; the local adaptive luminance is used to identify an average luminance of a neighborhood of the corresponding pixel;
Mapping the local self-adaptive brightness of each pixel in the high dynamic range image to obtain a corresponding preliminary low dynamic range image; the local self-adaptive brightness of each pixel in the preliminary low dynamic range image belongs to the displayable dynamic range;
Adding the brightness information of the high dynamic range image to the preliminary low dynamic range image to obtain a corresponding low dynamic range image;
The low dynamic range image is converted to RGB space.
Specifically, since the high dynamic range image is typically a three-primary-color (RGB) image, in order to process it, the server may convert the high dynamic range image into a luminance space, for example by describing the high dynamic range image with the hue-saturation-value (Hue Saturation Value, HSV) model, thereby converting the high dynamic range image into the luminance space.
In order to obtain local information for each pixel in the high dynamic range image, the server may determine the locally adaptive luminance of each pixel in the high dynamic range image, where the locally adaptive luminance is used to identify the average luminance of the neighborhood of the corresponding pixel; that is, the locally adaptive luminance of each pixel may include the luminance information of a plurality of pixels in the neighborhood of that pixel.
After determining the local adaptive luminance of each pixel in the high dynamic range image, the server may map the local adaptive luminance of each pixel in the high dynamic range image, and map the local adaptive luminance of each pixel into the displayable dynamic range, thereby obtaining a corresponding preliminary low dynamic range image.
After the preliminary low dynamic range image is obtained, in order to ensure details of the image, the server can add the brightness information of the high dynamic range image to the preliminary low dynamic range image, so as to supplement details of the preliminary low dynamic range image and obtain the corresponding low dynamic range image.
After obtaining the corresponding low dynamic range image, the server may convert the low dynamic range image into RGB space again.
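For illustration, the end-to-end shape of this step can be sketched as follows in Python. The helper functions local_adaptive_luminance, map_to_displayable and restore_detail are the hedged sketches given after the corresponding sub-steps below, and the ratio-based color restoration is a common choice rather than something specified by the patent.

```python
import numpy as np

def convolution_pyramid_map_rgb(hdr_rgb):
    """Sketch of step S101 for an RGB high dynamic range image: work in a
    luminance (HSV value) space, compress it, then restore the color.
    local_adaptive_luminance, map_to_displayable and restore_detail are the
    hedged helper sketches given below."""
    hdr_rgb = hdr_rgb.astype(np.float64)
    V = hdr_rgb.max(axis=2)                      # HSV value channel as luminance
    A_H = local_adaptive_luminance(V)            # local adaptation of each pixel
    A_L = map_to_displayable(A_H)                # squeeze into the displayable range
    I = restore_detail(A_L, A_H, V)              # add the HDR detail back
    scale = I / np.maximum(V, 1e-6)              # per-pixel luminance ratio
    ldr_rgb = np.clip(hdr_rgb * scale[..., None], 0, 255)
    return ldr_rgb.astype(np.uint8)              # 8-bit low dynamic range image
```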
In one possible implementation, determining the locally adaptive luminance of each pixel in the high dynamic range image includes:
low-pass filtering is carried out on the high dynamic range image with a low-pass filter having a frequency factor of s; low-pass filtering is carried out on the high dynamic range image with a low-pass filter having a frequency factor of 2s;
the locally adaptive luminance of each pixel in the high dynamic range image is then determined from the two filtering results; the formula (reproduced only as an image in the original publication) takes, at each pixel, the filtering result of the low-pass filter with frequency factor s and the filtering result of the low-pass filter with frequency factor 2s on the high dynamic range image, and yields the locally adaptive luminance of the high dynamic range image at that pixel.
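A minimal sketch of the two filtering passes, assuming a Gaussian blur as the low-pass filter and interpreting the frequency factor as its standard deviation; since the patent's combination formula is not reproduced in the text above, a simple blend of the two filter outputs is used as a stand-in:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_adaptive_luminance(H, s=2.0):
    """Two low-pass filterings of the HDR luminance H, one with frequency
    factor s and one with 2s (approximated here by Gaussian blurs with
    standard deviations s and 2*s)."""
    B_s = gaussian_filter(H.astype(np.float64), sigma=s)       # low-pass, factor s
    B_2s = gaussian_filter(H.astype(np.float64), sigma=2 * s)  # low-pass, factor 2s
    # The patent combines B_s and B_2s per pixel; its exact formula is not
    # reproduced above, so a simple average is used here as a stand-in.
    return 0.5 * (B_s + B_2s)
```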
In one possible implementation, mapping the local adaptive luminance of each pixel in the high dynamic range image to obtain a corresponding preliminary low dynamic range image includes:
the locally adaptive luminance of each pixel in the high dynamic range image is mapped by a compression formula whose inputs are the locally adaptive luminance of the high dynamic range image at the pixel, the minimum locally adaptive luminance of the high dynamic range image, the maximum locally adaptive luminance of the high dynamic range image, and the maximum displayable luminance of the displayable luminance range, and whose output is the locally adaptive luminance of the preliminary low dynamic range image at the pixel; an auxiliary term in the formula is likewise determined by its own formula from the same quantities.
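A hedged sketch of this compression step: the named quantities (the per-pixel adaptive luminance, its minimum and maximum, and the maximum displayable luminance) are used, but the logarithmic form below is only a plausible stand-in for the patent's unreproduced formula.

```python
import numpy as np

def map_to_displayable(A_H, D_max=255.0, eps=1e-6):
    """Compress the locally adaptive luminance A_H into [0, D_max] using its
    minimum and maximum; the logarithmic form is an assumed stand-in for the
    patent's compression formula."""
    A_min, A_max = A_H.min(), A_H.max()
    compressed = np.log1p(A_H - A_min) / np.log1p(A_max - A_min + eps)
    return D_max * compressed   # locally adaptive luminance of the preliminary LDR image
```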
In one possible implementation, adding luminance information of the high dynamic range image to the preliminary low dynamic range image, resulting in a corresponding low dynamic range image, includes:
Determining the brightness of each pixel in the high dynamic range image;
the luminance and the locally adaptive luminance of each pixel in the high dynamic range image are added to the preliminary low dynamic range image by a formula whose inputs are the locally adaptive luminance of the preliminary low dynamic range image at the pixel, the locally adaptive luminance of the high dynamic range image at the pixel, and the luminance of the high dynamic range image at the pixel, and whose output I is the luminance of the low dynamic range image.
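The following sketch shows one common way such a detail-restoration step can be realized from exactly these three quantities, using the ratio of the high dynamic range luminance to its locally adaptive luminance as the detail layer; this specific form is an assumption, not the patent's formula.

```python
import numpy as np

def restore_detail(A_L, A_H, H, eps=1e-6):
    """Reintroduce the high dynamic range detail on top of the compressed
    base A_L; the ratio H / A_H is used as the detail layer (an assumption,
    not the patent's formula)."""
    detail = H / (A_H + eps)     # local detail of the high dynamic range image
    return A_L * detail          # luminance I of the low dynamic range image
```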
S102, judging whether the gray scale of the low dynamic range image is smaller than a first gray scale threshold value.
After the high dynamic range image has been convolution-pyramid mapped into a low dynamic range image, the contrast of the resulting low dynamic range image does not necessarily meet the output requirement, because convolution pyramid mapping inevitably produces halo artifacts when processing non-monotonically mapped images, and removing those artifacts causes the loss of other detail features of the image.
In the present application, the server judges whether the contrast of the low dynamic range image meets the output requirement by judging whether the gray level of the low dynamic range image is smaller than the first gray threshold. For a CT image, the higher the gray level, the lower the contrast of the image, and the lower the gray level, the higher the contrast. The first gray threshold is therefore used to judge whether the contrast of the low dynamic range image meets the output requirement: when the gray level of the low dynamic range image is smaller than the first gray threshold, the contrast of the low dynamic range image is relatively high and the output requirement is met; when the gray level of the low dynamic range image is not smaller than the first gray threshold, the contrast of the low dynamic range image is relatively low, its image quality is relatively poor, and the output requirement is not met.
In practical applications, the server may analyze the gray level of the low dynamic range image from its gray distribution histogram. The corresponding first gray threshold may be set to 0.05 for the cumulative value of the first 20% of the gray distribution histogram; that is, the server may determine whether the cumulative value of the first 20% of the gray distribution histogram of the low dynamic range image is less than 0.05 in order to determine whether the contrast of the low dynamic range image meets the output requirement.
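As a concrete illustration, this histogram statistic, i.e. the cumulative value of the first 20% of the gray distribution histogram of an 8-bit low dynamic range image, can be computed as follows (bin count and index handling are illustrative):

```python
import numpy as np

def low_gray_fraction(ldr, frac=0.20):
    """Cumulative share of pixels that fall in the lowest `frac` of the gray
    range of an 8-bit low dynamic range image (the 'first 20% accumulated
    value' of its gray distribution histogram)."""
    hist, _ = np.histogram(ldr, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / hist.sum()
    return cdf[int(256 * frac) - 1]          # cumulative value at gray level 51

# The image meets the output requirement when this value is below 0.05:
# output_directly = low_gray_fraction(ldr_image) < 0.05
```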
S103, outputting the low dynamic range image if the low dynamic range image is smaller than the first gray threshold.
When the gray level of the low dynamic range image is smaller than the first gray level threshold, the contrast ratio of the low dynamic range image is higher, and the output requirement is met.
In practical application, when the first 20% accumulated value of the gray distribution histogram of the low dynamic range image is smaller than 0.05, the server may directly output the low dynamic range image.
In addition, in practical applications of the present application, in order to ensure the image quality of the output low dynamic range image, the server may further perform gamma mapping and median filtering on the low dynamic range image.
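A small sketch of this post-processing step; the gamma value and the median filter kernel size are illustrative choices, not values taken from the patent:

```python
import numpy as np
from scipy.ndimage import median_filter

def postprocess(ldr, gamma=2.2, size=3):
    """Gamma mapping followed by median filtering before output."""
    x = np.clip(ldr, 0, 255) / 255.0
    x = np.power(x, 1.0 / gamma)             # gamma mapping
    out = (x * 255.0).astype(np.uint8)
    return median_filter(out, size=size)     # median filtering
```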
And S104, if the low dynamic range image is not smaller than the first gray threshold, judging whether the gray level of the low dynamic range image is smaller than the second gray threshold.
When the gray level of the low dynamic range image is not smaller than the first gray level threshold, the contrast of the low dynamic range image is lower, the image quality of the low dynamic range image is poorer, and the output requirement is not met.
In this case, the server may perform secondary tone mapping on the low dynamic range image. The secondary tone mapping methods include gradient pyramid mapping and human eye model mapping; since gradient pyramid mapping is applicable to images with a lower dynamic range and human eye model mapping to images with a higher gray level, the server determines the subsequent secondary tone mapping method by judging whether the gray level of the low dynamic range image is smaller than the second gray threshold, which is used to determine the secondary tone mapping mode of the low dynamic range image.
In practical applications, the server may analyze the gray level of the low dynamic range image from its gray distribution histogram, and the corresponding second gray threshold may be set to 0.2 for the cumulative value of the first 20% of the gray distribution histogram; that is, the server may determine the secondary tone mapping mode of the low dynamic range image by judging whether the cumulative value of the first 20% of the gray distribution histogram of the low dynamic range image is less than 0.2.
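Reusing low_gray_fraction from the sketch above, the selection of the secondary tone mapping mode then reduces to a single comparison against 0.2:

```python
def choose_secondary_mapping(ldr):
    """Select the secondary tone mapping mode from the same histogram
    statistic: gradient pyramid mapping below the 0.2 threshold, human eye
    model mapping otherwise."""
    return "gradient_pyramid" if low_gray_fraction(ldr) < 0.2 else "eye_model"
```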
S105, if the gray level of the low dynamic range image is smaller than the second gray level threshold value, gradient pyramid mapping is carried out on the low dynamic range image, and a corresponding mapping result is output.
When the gray level of the low dynamic range image is smaller than the second gray threshold, the gray level of the low dynamic range image is not high, the feature information in the convolution-pyramid-mapped low dynamic range image is relatively concentrated, and the local information of each pixel is relatively prominent. Since gradient pyramid mapping for a low dynamic range can jointly consider the local information of the neighborhood contrast and the global information of the Gaussian pyramid, the server can perform gradient pyramid mapping on the low dynamic range image, thereby further enhancing the contrast details of the low dynamic range image, and output the corresponding mapping result.
In one possible implementation, gradient pyramid mapping of low dynamic range images includes:
determining a target image corresponding to the low dynamic range image; the brightness of each pixel in the target image belongs to the displayable dynamic range, and the contrast meets the output requirement;
and taking the target image as a mapping target, and performing gradient pyramid mapping on the low dynamic range image.
It should be noted that, through gradient pyramid mapping, the server can ensure that the contrast of the low dynamic range image meets the output requirement, and thus ensure the image quality of the output low dynamic range image, on the premise that the brightness of each pixel in the low dynamic range image remains within the displayable dynamic range.
In one possible implementation, gradient pyramid mapping is performed on the low dynamic range image with the target image as a mapping target, including:
taking the target image as the mapping target, gradient pyramid mapping is performed on the low dynamic range image by minimizing, over the pixel values $x_i^{1}$ of the first pyramid layer, the weighted contrast-matching objective

$$\min \sum_{k=1}^{K} \sum_{i=1}^{N} \sum_{j \in \Phi_i} w_{ij}^{k} \left( c_{ij}^{k} - \hat{c}_{ij}^{k} \right)^{2},$$

where $x_i^{1}$ denotes the $i$-th pixel in the first layer of the pyramid, $K$ is the total number of pyramid layers and $k$ indexes the $k$-th layer, $N$ is the total number of pixels and $i$ indexes the $i$-th pixel, $\Phi_i$ denotes the neighborhood of the $i$-th pixel and $j$ the $j$-th pixel in that neighborhood, $w_{ij}^{k}$ is the weight constant factor of the low dynamic range image for the $i$-th and $j$-th pixels of the $k$-th pyramid layer, $c_{ij}^{k}$ is the contrast value of the low dynamic range image for the $i$-th and $j$-th pixels of the $k$-th pyramid layer, and $\hat{c}_{ij}^{k}$ is the contrast value of the target image for the $i$-th and $j$-th pixels of the $k$-th pyramid layer.
The contrast value $c_{ij}^{k}$ is determined from $L_i^{k}$ and $L_j^{k}$, the luminance values of the low dynamic range image at the $i$-th and $j$-th pixels of the $k$-th pyramid layer.
The target contrast $\hat{c}_{ij}^{k}$ is determined from the contrast difference of the target image, which in turn is determined from the image contrast of the target image; the image contrast of the target image is determined from the maximum luminance and the minimum luminance of the target image.
Specifically, the minimization is carried out by setting the derivative of this objective to zero, i.e., the contrast of the low dynamic range image is made as close as possible to the contrast of the target image, thereby ensuring the image quality of the low dynamic range image.
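To make the objective concrete, the sketch below evaluates a simplified version of this contrast-matching cost on Gaussian-style pyramids, using finite differences between neighbouring pixels as the contrast values and unit weights; the pyramid construction, the contrast definition and the weights are assumptions, and the actual mapping would be obtained by driving this cost toward zero (e.g. by gradient descent) rather than merely evaluating it:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(img, levels=4):
    """Gaussian-style pyramid: blur, then downsample by a factor of 2."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        pyr.append(zoom(gaussian_filter(pyr[-1], sigma=1.0), 0.5, order=1))
    return pyr

def contrast_objective(ldr, target, levels=4):
    """Sum over pyramid layers and neighbouring pixel pairs of the squared
    difference between the contrasts of the low dynamic range image and of
    the target image (finite differences as contrasts, unit weights)."""
    total = 0.0
    for Lk, Tk in zip(gaussian_pyramid(ldr, levels), gaussian_pyramid(target, levels)):
        for axis in (0, 1):                   # horizontal and vertical neighbours
            c = np.diff(Lk, axis=axis)        # contrast values of the LDR image
            c_hat = np.diff(Tk, axis=axis)    # contrast values of the target image
            total += np.sum((c - c_hat) ** 2)
    return total
```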
And S106, if the gray level of the low dynamic range image is not smaller than the second gray level threshold, performing human eye model mapping on the low dynamic range image, and outputting a corresponding mapping result.
When the gray level of the low dynamic range image is not smaller than the second gray threshold, the gray level of the low dynamic range image is relatively high. Since human eye model mapping is applicable to images with a larger gray level, the server can perform human eye model mapping on the low dynamic range image, thereby further enhancing the contrast details of the low dynamic range image, and output the corresponding mapping result.
In one possible implementation, the performing the human eye model mapping on the low dynamic range image in S106 includes:
each pixel in the low dynamic range image is mapped to obtain a corresponding preliminary human eye model mapping result. The mapping formula expresses the luminance value of the preliminary human eye model mapping result at a pixel in terms of the luminance value of the low dynamic range image at that pixel, a center value representing the fourth-order range within the viewing cone, the inflection point value at which the mapping switches between its two equations, two constants n and m, a term that ensures continuity of the mapping at the inflection point, and a luminance perception amplitude coefficient k. The center value, the inflection point value, the continuity term and the coefficient k are in turn determined from the pixel mean of the low dynamic range image, the pixel median of the low dynamic range image, the intermediate value of the overall brightness of the low dynamic range image, and a target luminance factor;
It should be noted that the above formula performs a preliminary mapping of the low dynamic range image to obtain the corresponding preliminary human eye model mapping result. By adjusting the target luminance factor, the brightness of each pixel in the preliminary human eye model mapping result can be made to fall within the displayable dynamic range; for example, the larger the target luminance factor, the larger the inflection point value between the two equations, and the more pixels of the low dynamic range image are classified into the low-brightness mapping range.
After obtaining the preliminary human eye model mapping result, the server may adjust the luminance information of the preliminary human eye model mapping result based on the luminance information of the low dynamic range image to obtain the corresponding human eye mapping result. The adjustment formula involves the human eye mapping result, a weighted average contrast value, the average dispersion value relative to a given median value, the average dispersion value relative to the low dynamic range image, a specified constant, the Gaussian kernel radius of the local contrast measure on the preliminary human eye model mapping result, the luminance values of the preliminary human eye model mapping result at the pixels being compared, the mean of the low dynamic range image, and the luminance value of the low dynamic range image at the pixel.
It should be noted that, through the above adjustment, the preliminary human eye model mapping result is compared with the original brightness information of the low dynamic range image and its brightness information is adjusted accordingly, so that local contrast enhancement is applied to the preliminary human eye model mapping result using the original low dynamic range image, yielding the corresponding human eye mapping result.
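As a very rough illustration of the eye-model idea, the sketch below applies a Naka-Rushton-style response whose semi-saturation value is tied to the image mean, median and a target luminance factor; it is a generic stand-in and does not reproduce the patent's two-branch formula or its contrast re-adjustment step:

```python
import numpy as np

def eye_model_map(ldr, n=0.9, target_factor=1.0):
    """Naka-Rushton style response T = I^n / (I^n + sigma^n); sigma is tied
    to the image mean/median and a target luminance factor. A generic
    stand-in, not the patent's two-branch formula."""
    I = ldr.astype(np.float64)
    sigma = target_factor * 0.5 * (I.mean() + np.median(I)) + 1e-6
    T = I ** n / (I ** n + sigma ** n + 1e-12)
    return (255.0 * T).astype(np.uint8)
```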
In summary, the present application provides a tone mapping method for CT images, which includes: first, performing convolution pyramid mapping on the high dynamic range image to obtain a corresponding low dynamic range image, wherein the brightness of each pixel in the low dynamic range image obtained by convolution pyramid mapping belongs to the displayable dynamic range; judging whether the gray level of the low dynamic range image is smaller than a first gray threshold, where the first gray threshold is used for judging whether the contrast of the low dynamic range image meets the output requirement; if the gray level is smaller than the first gray threshold, the low dynamic range image meets the output requirement and can be output; if it is not smaller than the first gray threshold, the low dynamic range image does not meet the output requirement, and it is further judged whether the gray level of the low dynamic range image is smaller than a second gray threshold, where the second gray threshold is used for determining the secondary tone mapping mode of the low dynamic range image; if the gray level of the low dynamic range image is smaller than the second gray threshold, gradient pyramid mapping, which is suitable for the low dynamic range image, can be adopted and the corresponding mapping result output; if the gray level of the low dynamic range image is not smaller than the second gray threshold, human eye model mapping, which is suitable for images with a higher gray level, can be adopted and the corresponding mapping result output. In this way, whether secondary tone mapping is needed can be judged adaptively based on the convolution pyramid mapping result, and, when secondary tone mapping is needed, whether gradient pyramid mapping or human eye model mapping should be adopted can also be judged adaptively, so that the mapping effect can be ensured.
It should be noted that the tone mapping method provided by the present application integrates the advantages of convolution pyramid mapping, gradient pyramid mapping and human eye model mapping; it therefore has wide applicability and practicability and can be applied to a wide variety of industrial samples.
For ease of understanding, the following describes a specific flow of a tone mapping method for CT images according to an embodiment of the present application. As shown in fig. 2, which is a specific flowchart of the method, the flow includes:
First, the high dynamic range image that needs tone mapping is read, and convolution pyramid mapping is then performed on it to obtain the corresponding low dynamic range image.
To determine whether the low dynamic range image meets the output requirement, the server may determine whether the cumulative value of the first 20% of the gray distribution histogram of the low dynamic range image is less than 0.05.
If it is less than 0.05, the low dynamic range image is determined to meet the output requirement, and after gamma mapping and median filtering the corresponding low dynamic range image is output.
If it is not less than 0.05, the server may further determine whether the cumulative value of the first 20% of the gray distribution histogram of the low dynamic range image is greater than 0.2.
If it is greater than 0.2, secondary tone mapping may be performed on the low dynamic range image by human eye model mapping, and after gamma mapping and median filtering the corresponding mapping result is output.
If it is not greater than 0.2, secondary tone mapping may be performed on the low dynamic range image by gradient pyramid mapping, and after gamma mapping and median filtering the corresponding mapping result is output.
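Putting the hedged helpers from the earlier sketches together, the flow of fig. 2 can be outlined as follows; the gradient pyramid branch is left as a placeholder because only its contrast objective was sketched above:

```python
def adaptive_tone_map(hdr_rgb):
    """Overall flow of fig. 2 using the hedged helpers sketched above."""
    ldr = convolution_pyramid_map_rgb(hdr_rgb)
    gray = ldr.max(axis=2)                         # luminance for the histogram test
    if low_gray_fraction(gray) < 0.05:             # contrast already acceptable
        return postprocess(gray)
    if choose_secondary_mapping(gray) == "eye_model":
        return postprocess(eye_model_map(gray))    # high gray level branch
    # Gradient pyramid branch: only the contrast objective is sketched above,
    # so the compressed luminance is passed through unchanged here.
    return postprocess(gray)
```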
The tone mapping method for CT images provided by the present application is evaluated below with a specific embodiment. As shown in fig. 3, fig. 3 is a tone mapping result comparison chart provided by an embodiment of the present application: fig. 3 (a) is a low dynamic range image obtained by convolution pyramid mapping, fig. 3 (b) is a low dynamic range image obtained by human eye model mapping, fig. 3 (c) is a low dynamic range image obtained by gradient pyramid mapping, and fig. 3 (d) is a low dynamic range image obtained by the tone mapping method of the present application. Fig. 3 shows that the low dynamic range image obtained by the tone mapping method of the present application has high image quality and retains more of the detail features of the image.
Likewise, as shown in fig. 4, fig. 4 is another tone mapping result comparison chart provided by an embodiment of the present application: fig. 4 (a) is another low dynamic range image obtained by convolution pyramid mapping, fig. 4 (b) is another low dynamic range image obtained by human eye model mapping, fig. 4 (c) is another low dynamic range image obtained by gradient pyramid mapping, and fig. 4 (d) is another low dynamic range image obtained by the tone mapping method of the present application. As can also be seen from fig. 4, the low dynamic range image obtained by the tone mapping method of the present application has higher image quality and higher contrast, and retains more of the detail features of the image.
In the description of the present specification, descriptions of terms "some possible embodiments," "some embodiments," "examples," "specific examples," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiments or examples is included in at least one embodiment or example of the present application, and that the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the various embodiments or examples described in this specification and the features of the various embodiments or examples may be combined and combined by those skilled in the art without contradiction.
With respect to the method flow diagrams of embodiments of the application, certain operations are described as distinct steps performed in a certain order. Such a flowchart is illustrative and not limiting. Some steps described herein may be grouped together and performed in a single operation, or may be divided into multiple sub-steps and may be performed in an order different than that shown herein. The various steps illustrated in the flowcharts may be implemented in any manner by any circuit structure and/or tangible mechanism (e.g., by software running on a computer device, hardware (e.g., processor or chip implemented logic functions), etc., and/or any combination thereof).
It will be appreciated by those skilled in the art that in the methods described in the above embodiments, the written order of steps does not imply a strict order of execution, and that the specific order of execution of the steps should be determined by its function and possible inherent logic.
Based on the foregoing fig. 1 to fig. 4, the tone mapping apparatus for CT images according to the present application will be described with reference to the embodiment of the apparatus, as shown in fig. 5, fig. 5 is a schematic apparatus diagram of a tone mapping apparatus for CT images according to the embodiment of the present application, where the tone mapping apparatus 500 for CT images includes:
A first mapping unit 501, configured to perform convolution pyramid mapping on the high dynamic range image to obtain a corresponding low dynamic range image; the brightness of each pixel in the low dynamic range image belongs to a displayable dynamic range;
a first judging unit 502, configured to judge whether the gray level of the low dynamic range image is smaller than a first gray level threshold; the first gray threshold is used for judging whether the contrast ratio of the low dynamic range image meets the output requirement;
an output unit 503 configured to output the low dynamic range image when the gray level of the low dynamic range image is smaller than the first gray threshold;
a second judging unit 504, configured to judge whether the gray level of the low dynamic range image is smaller than a second gray threshold when the gray level of the low dynamic range image is not smaller than the first gray threshold; the second gray threshold is used for determining a secondary tone mapping mode of the low dynamic range image;
A second mapping unit 505, configured to perform gradient pyramid mapping on the low dynamic range image and output a corresponding mapping result when the gray level of the low dynamic range image is less than the second gray level threshold;
and a third mapping unit 506, configured to perform human eye model mapping on the low dynamic range image and output a corresponding mapping result when the gray level of the low dynamic range image is not less than the second gray level threshold.
In one possible implementation, the first mapping unit 501 is configured to:
converting the high dynamic range image to a luminance space;
Determining a local adaptive luminance for each pixel in the high dynamic range image; the local adaptive brightness is used for identifying the average brightness of the neighborhood of the corresponding pixel;
mapping the local self-adaptive brightness of each pixel in the high dynamic range image to obtain a corresponding preliminary low dynamic range image; the local self-adaptive brightness of each pixel in the preliminary low dynamic range image belongs to a displayable dynamic range;
Adding the brightness information of the high dynamic range image to the preliminary low dynamic range image to obtain a corresponding low dynamic range image;
the low dynamic range image is converted to RGB space.
In one possible implementation, the first mapping unit 501 is configured to:
low-pass filtering the high dynamic range image with a low-pass filter having a frequency factor of s; low-pass filtering the high dynamic range image with a low-pass filter having a frequency factor of 2s;
the locally adaptive luminance of each pixel in the high dynamic range image is then determined from the two filtering results, i.e., from the filtering result of the low-pass filter with frequency factor s on the high dynamic range image at the pixel and the filtering result of the low-pass filter with frequency factor 2s on the high dynamic range image at the pixel, which yield the locally adaptive luminance of the high dynamic range image at that pixel.
In one possible implementation, the first mapping unit 501 is configured to:
mapping the locally adaptive luminance of each pixel in the high dynamic range image by a compression formula whose inputs are the locally adaptive luminance of the high dynamic range image at the pixel, the minimum locally adaptive luminance of the high dynamic range image, the maximum locally adaptive luminance of the high dynamic range image, and the maximum displayable luminance of the displayable luminance range, and whose output is the locally adaptive luminance of the preliminary low dynamic range image at the pixel; an auxiliary term in the formula is likewise determined by its own formula from the same quantities.
in one possible implementation, the first mapping unit 501 is configured to:
Determining the brightness of each pixel in the high dynamic range image;
adding the luminance and the locally adaptive luminance of each pixel in the high dynamic range image to the preliminary low dynamic range image by a formula whose inputs are the locally adaptive luminance of the preliminary low dynamic range image at the pixel, the locally adaptive luminance of the high dynamic range image at the pixel, and the luminance of the high dynamic range image at the pixel, and whose output I is the luminance of the low dynamic range image.
In one possible implementation, the second mapping unit 505 is configured to:
determining a target image corresponding to the low dynamic range image; the brightness of each pixel in the target image belongs to the displayable dynamic range, and the contrast meets the output requirement;
and taking the target image as a mapping target, and performing gradient pyramid mapping on the low dynamic range image.
In one possible implementation, the second mapping unit 505 is configured to:
taking the target image as the mapping target, and performing gradient pyramid mapping on the low dynamic range image by minimizing, over the pixel values $x_i^{1}$ of the first pyramid layer, the weighted contrast-matching objective

$$\min \sum_{k=1}^{K} \sum_{i=1}^{N} \sum_{j \in \Phi_i} w_{ij}^{k} \left( c_{ij}^{k} - \hat{c}_{ij}^{k} \right)^{2},$$

where $x_i^{1}$ denotes the $i$-th pixel in the first layer of the pyramid, $K$ is the total number of pyramid layers and $k$ indexes the $k$-th layer, $N$ is the total number of pixels and $i$ indexes the $i$-th pixel, $\Phi_i$ denotes the neighborhood of the $i$-th pixel and $j$ the $j$-th pixel in that neighborhood, $w_{ij}^{k}$ is the weight constant factor of the low dynamic range image for the $i$-th and $j$-th pixels of the $k$-th pyramid layer, $c_{ij}^{k}$ is the contrast value of the low dynamic range image for the $i$-th and $j$-th pixels of the $k$-th pyramid layer, and $\hat{c}_{ij}^{k}$ is the contrast value of the target image for the $i$-th and $j$-th pixels of the $k$-th pyramid layer. The contrast value $c_{ij}^{k}$ is determined from $L_i^{k}$ and $L_j^{k}$, the luminance values of the low dynamic range image at the $i$-th and $j$-th pixels of the $k$-th pyramid layer. The target contrast $\hat{c}_{ij}^{k}$ is determined from the contrast difference of the target image, which in turn is determined from the image contrast of the target image; the image contrast of the target image is determined from the maximum luminance and the minimum luminance of the target image.
In one possible implementation, the third mapping unit 506 is configured to:
map each pixel in the low dynamic range image to obtain a corresponding preliminary human eye model mapping result, where the mapping formula expresses the luminance value of the preliminary human eye model mapping result at a pixel in terms of the luminance value of the low dynamic range image at that pixel, a center value representing the fourth-order range within the viewing cone, the inflection point value at which the mapping switches between its two equations, two constants n and m, a term that ensures continuity of the mapping at the inflection point, and a luminance perception amplitude coefficient k; the center value, the inflection point value, the continuity term and the coefficient k are in turn determined from the pixel mean of the low dynamic range image, the pixel median of the low dynamic range image, the intermediate value of the overall brightness of the low dynamic range image, and a target luminance factor;
adjust the brightness information of the preliminary human eye model mapping result based on the brightness information of the low dynamic range image to obtain the corresponding human eye mapping result, where the adjustment formula involves the human eye mapping result, a weighted average contrast value, the average dispersion value relative to a given median value, the average dispersion value relative to the low dynamic range image, a specified constant, the Gaussian kernel radius of the local contrast measure on the preliminary human eye model mapping result, the luminance values of the preliminary human eye model mapping result at the pixels being compared, the mean of the low dynamic range image, and the luminance value of the low dynamic range image at the pixel.
It should be noted that, the device in the embodiment of the present application may implement each process of the embodiment of the foregoing method and achieve the same effects and functions, which are not described herein again.
The embodiment of the application also provides electronic equipment, which comprises: a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor, which when executed by the processor perform the following processes when the electronic device is in operation:
performing convolution pyramid mapping on the high dynamic range image to obtain a corresponding low dynamic range image; the brightness of each pixel in the low dynamic range image belongs to a displayable dynamic range;
judging whether the gray level of the low dynamic range image is smaller than a first gray level threshold value or not; the first gray threshold is used for judging whether the contrast ratio of the low dynamic range image meets the output requirement;
outputting the low dynamic range image if the gray level of the low dynamic range image is smaller than the first gray threshold;
if the gray level of the low dynamic range image is not smaller than the first gray threshold, judging whether the gray level of the low dynamic range image is smaller than a second gray threshold; the second gray threshold is used for determining a secondary tone mapping mode of the low dynamic range image;
if the gray level of the low dynamic range image is smaller than the second gray threshold, performing gradient pyramid mapping on the low dynamic range image and outputting a corresponding mapping result;
and if the gray level of the low dynamic range image is not smaller than the second gray threshold, performing human eye model mapping on the low dynamic range image and outputting a corresponding mapping result.
The embodiment of the present application further provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor performs the steps of the tone mapping method for CT images described in the above method embodiment. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present application further provide a computer program product, which includes a computer program, where the computer program product carries a program code, and instructions included in the program code may be used to perform the steps of the tone mapping method for CT images described in the foregoing method embodiments, and specifically, reference may be made to the foregoing method embodiments, which are not repeated herein.
The above computer program product may be implemented by hardware, software, or a combination of the two. In one alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
The various embodiments of the application are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the descriptions of the apparatus, device and computer-readable storage medium embodiments are simplified because they are substantially similar to the method embodiments; for relevant points, reference may be made to the description of the method embodiments.
The apparatus, the device and the computer-readable storage medium provided in the embodiments of the present application correspond one-to-one with the methods, and therefore have beneficial technical effects similar to those of the corresponding methods. Since the beneficial technical effects of the methods have been described in detail above, they are not repeated here.
It will be apparent to those skilled in the art that the embodiments of the present application may be implemented as methods, apparatus (devices or systems), or computer-readable storage media. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the application may take the form of a computer program product embodied on one or more computer-usable storage media containing computer-usable program code, including but not limited to magnetic disk storage, compact disc read-only memory (CD-ROM) and optical storage.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices or systems) and computer-readable storage media according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, such as a random access memory (RAM), and/or a non-volatile memory, such as a read-only memory (ROM) or a flash memory (Flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer-readable storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory, read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.

Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all illustrated operations must be performed, to achieve desirable results. In addition, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple sub-steps.
While the spirit and principles of the present application have been described above with reference to several embodiments, it should be understood that the application is not limited to the particular embodiments disclosed nor does the division of aspects mean that features in these aspects cannot be combined. The application is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
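The primary convolution pyramid mapping stage referred to above (converting the high dynamic range image to luminance space, deriving a locally adaptive luminance from two low-pass filtered versions with frequency factors s and 2s, compressing it into the displayable range, restoring per-pixel detail, and converting back to RGB) can likewise be outlined in a short, purely illustrative Python sketch. The Gaussian filters standing in for the low-pass filters, the Rec. 601 luminance weights, the averaging of the two filter outputs, the min-max rescaling and the luminance-ratio recombination are all placeholder assumptions of this sketch; they are not the formulas claimed below.

import numpy as np
from scipy.ndimage import gaussian_filter

def primary_mapping_sketch(hdr_rgb, s=2.0, display_max=255.0):
    # 1. Convert to luminance space (Rec. 601 weights assumed).
    lum = (0.299 * hdr_rgb[..., 0]
           + 0.587 * hdr_rgb[..., 1]
           + 0.114 * hdr_rgb[..., 2])

    # 2. Local adaptive luminance from two low-pass filtered versions
    #    with factors s and 2s (Gaussian filters as stand-ins).
    blur_s = gaussian_filter(lum, s)
    blur_2s = gaussian_filter(lum, 2 * s)
    v = 0.5 * (blur_s + blur_2s)  # placeholder combination

    # 3. Map the local adaptive luminance into the displayable range
    #    (simple min-max rescaling as a stand-in).
    v_mapped = (v - v.min()) / (v.max() - v.min() + 1e-6) * display_max

    # 4. Restore per-pixel detail using the luminance-to-adaptation ratio
    #    (again only a placeholder for the claimed recombination).
    ldr_lum = np.clip(v_mapped * lum / (v + 1e-6), 0.0, display_max)

    # 5. Convert back to RGB by scaling the original chromatic ratios.
    scale = ldr_lum / (lum + 1e-6)
    return np.clip(hdr_rgb * scale[..., None], 0.0, display_max)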

Claims (12)

1. A tone mapping method for a CT image, the method comprising:
performing convolution pyramid mapping on the high dynamic range image to obtain a corresponding low dynamic range image; the brightness of each pixel in the low dynamic range image belongs to a displayable dynamic range;
judging whether the gray level of the low dynamic range image is smaller than a first gray level threshold; the first gray level threshold is used for judging whether the contrast ratio of the low dynamic range image meets the output requirement;
outputting the low dynamic range image if the gray level of the low dynamic range image is smaller than the first gray level threshold;
if the gray level of the low dynamic range image is not smaller than the first gray level threshold, judging whether the gray level of the low dynamic range image is smaller than a second gray level threshold; the second gray level threshold is used for determining a secondary tone mapping mode of the low dynamic range image;
if the gray level of the low dynamic range image is smaller than the second gray level threshold, performing gradient pyramid mapping on the low dynamic range image, and outputting a corresponding mapping result; and
if the gray level of the low dynamic range image is not smaller than the second gray level threshold, performing human eye model mapping on the low dynamic range image, and outputting a corresponding mapping result.
2. The method of claim 1, wherein the convolving the high dynamic range image with the pyramid mapping to obtain a corresponding low dynamic range image comprises:
converting the high dynamic range image to a luminance space;
Determining a local adaptive luminance for each pixel in the high dynamic range image; the local adaptive brightness is used for identifying the average brightness of the neighborhood of the corresponding pixel;
mapping the local adaptive brightness of each pixel in the high dynamic range image to obtain a corresponding preliminary low dynamic range image; the local adaptive brightness of each pixel in the preliminary low dynamic range image belongs to a displayable dynamic range;
Adding the brightness information of the high dynamic range image to the preliminary low dynamic range image to obtain a corresponding low dynamic range image;
the low dynamic range image is converted to RGB space.
3. The method of claim 2, wherein said determining the locally adaptive luminance of each pixel in the high dynamic range image comprises:
low-pass filtering the high dynamic range image through a low-pass filter with a frequency factor of s; low-pass filtering the high dynamic range image through a low-pass filter with a frequency factor of 2s; and
determining the locally adaptive luminance of each pixel in the high dynamic range image from the filtering result, at that pixel, of the low-pass filter with the frequency factor s applied to the high dynamic range image and the filtering result, at that pixel, of the low-pass filter with the frequency factor 2s applied to the high dynamic range image.
4. A method according to claim 3, wherein said mapping the local adaptive luminance of each pixel in said high dynamic range image to obtain a corresponding preliminary low dynamic range image comprises:
mapping the local adaptive luminance of each pixel in the high dynamic range image by a formula defined in terms of: the locally adaptive luminance of the high dynamic range image at that pixel; the minimum local adaptive luminance of the high dynamic range image; the maximum local adaptive luminance of the high dynamic range image; the maximum displayable luminance in the displayable luminance range; and the locally adaptive luminance of the preliminary low dynamic range image at that pixel, together with an auxiliary quantity that is itself determined by a further formula.
5. The method of claim 4, wherein adding the luminance information of the high dynamic range image to the preliminary low dynamic range image results in a corresponding low dynamic range image, comprising:
Determining the brightness of each pixel in the high dynamic range image;
adding the luminance and the locally adaptive luminance of each pixel in the high dynamic range image to the preliminary low dynamic range image by a formula defined in terms of: the locally adaptive luminance of the preliminary low dynamic range image at that pixel; the locally adaptive luminance of the high dynamic range image at that pixel; the luminance of the high dynamic range image at that pixel; and I, the luminance of the low dynamic range image.
6. The method of claim 1, wherein the gradient pyramid mapping the low dynamic range image comprises:
determining a target image corresponding to the low dynamic range image; the brightness of each pixel in the target image belongs to the displayable dynamic range, and the contrast meets the output requirement;
and taking the target image as a mapping target, and performing gradient pyramid mapping on the low dynamic range image.
7. The method of claim 6, wherein the gradient pyramid mapping the low dynamic range image with the target image as a mapping target comprises:
taking the target image as the mapping target, and performing gradient pyramid mapping on the low dynamic range image by a formula defined in terms of: the i-th pixel in the first layer of the pyramid; K, the total number of layers of the pyramid, with k denoting the k-th layer; the total number of pixels, with i denoting the i-th pixel; the neighborhood of the i-th pixel, with j denoting the j-th pixel in that neighborhood; the weight constant factor of the low dynamic range image for the i-th and j-th pixels of the k-th layer pyramid; the contrast value of the low dynamic range image for the i-th and j-th pixels of the k-th layer pyramid; and the contrast value of the target image for the i-th and j-th pixels of the k-th layer pyramid;
wherein the contrast value of the low dynamic range image for the i-th and j-th pixels of the k-th layer pyramid is determined from the luminance values of the low dynamic range image at the i-th and j-th pixels of the k-th layer pyramid;
the contrast value of the target image for the i-th and j-th pixels of the k-th layer pyramid is determined from the contrast difference of the target image;
the contrast difference of the target image is determined from the image contrast of the target image; and
the image contrast of the target image is determined from the maximum brightness and the minimum brightness of the target image.
8. The method of claim 1, wherein said performing a human eye model mapping of said low dynamic range image comprises:
mapping each pixel in the low dynamic range image to obtain a corresponding preliminary human eye model mapping result, the mapping being defined in terms of: the luminance value of the preliminary human eye model mapping result at that pixel; the luminance value of the low dynamic range image at that pixel; the center value of the 4th-order range within the view cone; the value at the inflection point of the two equations; the constants n and m; a term used to ensure continuity at the inflection point; and k, the luminance perceptual amplitude coefficient; wherein several of these quantities are in turn determined from the pixel mean value of the low dynamic range image, the pixel median value of the low dynamic range image, the intermediate value of the overall brightness of the low dynamic range image, and a target luminance factor; and
adjusting the brightness information of the preliminary human eye model mapping result based on the brightness information of the low dynamic range image to obtain a corresponding human eye mapping result, the adjustment being defined in terms of: the human eye mapping result; the weighted average contrast value; the average dispersion value relative to a given median value; the average dispersion value relative to the low dynamic range image; a specified constant value; the Gaussian kernel radius of the local contrast measure at each position of the preliminary human eye model mapping result; the brightness values of the preliminary human eye model mapping result at positions x and y; the mean value of the low dynamic range image; and the brightness value of the low dynamic range image at the corresponding position.
9. A tone mapping apparatus for CT images, the apparatus comprising:
a first mapping unit configured to perform convolution pyramid mapping on the high dynamic range image to obtain a corresponding low dynamic range image; the brightness of each pixel in the low dynamic range image belongs to a displayable dynamic range;
a first judging unit configured to judge whether the gray level of the low dynamic range image is smaller than a first gray level threshold; the first gray level threshold is used for judging whether the contrast ratio of the low dynamic range image meets the output requirement;
an output unit configured to output the low dynamic range image when the gray level of the low dynamic range image is smaller than the first gray level threshold;
a second judging unit configured to judge whether the gray level of the low dynamic range image is smaller than a second gray level threshold when the gray level of the low dynamic range image is not smaller than the first gray level threshold; the second gray level threshold is used for determining a secondary tone mapping mode of the low dynamic range image;
a second mapping unit configured to perform gradient pyramid mapping on the low dynamic range image and output a corresponding mapping result when the gray level of the low dynamic range image is smaller than the second gray level threshold; and
a third mapping unit configured to perform human eye model mapping on the low dynamic range image and output a corresponding mapping result when the gray level of the low dynamic range image is not smaller than the second gray level threshold.
10. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the tone mapping method for CT images as claimed in any one of claims 1 to 8.
11. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the tone mapping method for CT images according to any of claims 1 to 8.
12. A computer program product comprising a computer program which, when run by a processor, performs the tone mapping method for CT images as claimed in any one of claims 1 to 8.
CN202410245489.2A 2024-03-05 2024-03-05 Tone mapping method and related device for CT image Active CN117830183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410245489.2A CN117830183B (en) 2024-03-05 2024-03-05 Tone mapping method and related device for CT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410245489.2A CN117830183B (en) 2024-03-05 2024-03-05 Tone mapping method and related device for CT image

Publications (2)

Publication Number Publication Date
CN117830183A CN117830183A (en) 2024-04-05
CN117830183B true CN117830183B (en) 2024-05-14

Family

ID=90523077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410245489.2A Active CN117830183B (en) 2024-03-05 2024-03-05 Tone mapping method and related device for CT image

Country Status (1)

Country Link
CN (1) CN117830183B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436647A (en) * 2011-11-14 2012-05-02 江苏技术师范学院 Image enhancement method based on adaptive gray mapping
CN110223256A (en) * 2019-06-10 2019-09-10 北京大学深圳研究生院 A kind of inverse tone mapping (ITM) method, apparatus and electronic equipment
CN113728624A (en) * 2019-04-23 2021-11-30 杜比实验室特许公司 Display management of high dynamic range images
CN115830084A (en) * 2022-11-25 2023-03-21 中国科学院深圳先进技术研究院 2D-3D image registration method and system
CN117474823A (en) * 2023-12-28 2024-01-30 大连清东科技有限公司 CT data processing system for pediatric infectious inflammation detection assistance

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11151719B2 (en) * 2018-12-19 2021-10-19 Caide Systems, Inc. Automatic brightness and contrast control neural network for medical diagnostic imaging

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436647A (en) * 2011-11-14 2012-05-02 江苏技术师范学院 Image enhancement method based on adaptive gray mapping
CN113728624A (en) * 2019-04-23 2021-11-30 杜比实验室特许公司 Display management of high dynamic range images
CN110223256A (en) * 2019-06-10 2019-09-10 北京大学深圳研究生院 A kind of inverse tone mapping (ITM) method, apparatus and electronic equipment
CN115830084A (en) * 2022-11-25 2023-03-21 中国科学院深圳先进技术研究院 2D-3D image registration method and system
CN117474823A (en) * 2023-12-28 2024-01-30 大连清东科技有限公司 CT data processing system for pediatric infectious inflammation detection assistance

Also Published As

Publication number Publication date
CN117830183A (en) 2024-04-05

Similar Documents

Publication Publication Date Title
Eilertsen et al. A comparative review of tone‐mapping algorithms for high dynamic range video
Vijayalakshmi et al. A comprehensive survey on image contrast enhancement techniques in spatial domain
Kim et al. Natural HDR image tone mapping based on retinex
Boitard et al. Temporal coherency for video tone mapping
Lee et al. Local tone mapping using the K-means algorithm and automatic gamma setting
CN108090886B (en) High dynamic range infrared image display and detail enhancement method
US10003809B2 (en) Method and device for tone-mapping a high dynamic range image
CN115115554B (en) Image processing method and device based on enhanced image and computer equipment
Boschetti et al. High dynamic range image tone mapping based on local histogram equalization
CN109767413B (en) HDR method and device for resisting motion artifacts and portable terminal
Khan et al. Localization of radiance transformation for image dehazing in wavelet domain
JP2012519896A (en) Method for converting input image data into output image data, image conversion unit for converting input image data into output image data, image processing apparatus, display device
Park et al. Generation of high dynamic range illumination from a single image for the enhancement of undesirably illuminated images
CN115330640B (en) Illumination mapping noise reduction method, device, equipment and medium
CN112634384A (en) Method and device for compressing high dynamic range image
Liu et al. Enhancement of low illumination images based on an optimal hyperbolic tangent profile
CN117152182B (en) Ultralow-illumination network camera image processing method and device and electronic equipment
CN117830183B (en) Tone mapping method and related device for CT image
CN111784598A (en) Method for training tone mapping model, tone mapping method and electronic equipment
CN111882498A (en) Image processing method, image processing device, electronic equipment and storage medium
CN116152272A (en) Infrared image compression method and device, storage medium and electronic equipment
Zhang et al. A dynamic range adjustable inverse tone mapping operator based on human visual system
Kwon et al. Enhanced high dynamic‐range image rendering using a surround map based on edge‐adaptive layer blurring
KR101418521B1 (en) Image enhancement method and device by brightness-contrast improvement
CN112887597A (en) Image processing method and device, computer readable medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant