CN108447037B - Method and device for enhancing dynamic range of image - Google Patents

Method and device for enhancing dynamic range of image

Info

Publication number: CN108447037B
Application number: CN201810252197.6A
Authority: CN (China)
Other versions: CN108447037A (Chinese-language publication)
Inventors: 刘国卿, 田广
Original and current assignee: Shanghai Shunjiu Electronic Technology Co., Ltd.
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application provides a method and an apparatus for enhancing the dynamic range of an image. The method comprises the following steps: determining a pixel point to be processed in an image to be processed and an initial enhancement coefficient of the pixel point to be processed; calculating a correction coefficient of the pixel point to be processed, wherein the correction coefficient is positively correlated with the gray value of the pixel point to be processed; correcting the initial enhancement coefficient with the correction coefficient to obtain a corrected enhancement coefficient; and enhancing the pixel point to be processed with the corrected enhancement coefficient. The method and apparatus preserve the details in the image while enhancing the image to be processed.

Description

Method and device for enhancing dynamic range of image
Technical Field
The present application relates to image processing technologies, and in particular, to a method and an apparatus for enhancing a dynamic range of an image.
Background
In recent years, with the development of HDR (High Dynamic Range) technology, more and more display devices support HDR functions. However, because HDR imaging equipment is expensive and HDR imaging technology is not yet mature, high-dynamic-range image resources remain scarce. At present, in order to improve the visual effect and make full use of the display capability of display devices, an image dynamic range enhancement method is often used to enhance an image to be processed and thereby convert it into an HDR image.
At present, gray-level stretching is a commonly used image dynamic range enhancement method: the highlight area of the image to be processed (the pixel points whose brightness values are greater than a preset value) is stretched to obtain an HDR image.
However, when the image to be processed is enhanced in this way, the contrast of the highlight area is reduced, so that some of the detail in the highlight area is lost.
Disclosure of Invention
In view of the above, the present application provides a method and an apparatus for enhancing a dynamic range of an image, so as to retain details in the image while performing an enhancement process on the image to be processed.
The first aspect of the present application provides a method for enhancing a dynamic range of an image, comprising:
determining a pixel point to be processed in an image to be processed and an initial enhancement coefficient of the pixel point to be processed;
calculating a correction coefficient of the pixel point to be processed, wherein the correction coefficient is positively correlated with the gray value of the pixel point to be processed;
correcting the initial enhancement coefficient by using the correction coefficient to obtain a corrected enhancement coefficient;
and adopting the corrected enhancement coefficient to perform enhancement processing on the pixel point to be processed.
A second aspect of the present application provides an apparatus for enhancing a dynamic range of an image, comprising: a determination module, a calculation module, a correction module and a processing module, wherein,
the determining module is used for determining to-be-processed pixel points in the to-be-processed image and initial enhancement coefficients of the to-be-processed pixel points;
the calculation module is used for calculating the correction coefficient of the pixel point to be processed; wherein the correction coefficient is positively correlated with the gray value of the pixel point to be processed;
the correction module is used for correcting the initial enhancement coefficient by adopting the correction coefficient to obtain a corrected enhancement coefficient;
and the processing module is used for adopting the corrected enhancement coefficient to carry out enhancement processing on the pixel point to be processed.
According to the method and apparatus for enhancing the dynamic range of an image provided by the present application, the pixel points to be processed in the image to be processed and their initial enhancement coefficients are determined, the correction coefficient of each pixel point to be processed is calculated, the initial enhancement coefficient is corrected with the correction coefficient to obtain a corrected enhancement coefficient, and finally the pixel point to be processed is enhanced with the corrected enhancement coefficient. Because the correction coefficient of a pixel point to be processed is positively correlated with its gray value, pixel points with different gray values obtain corrected enhancement coefficients that differ markedly from one another after correction. As a result, after the image to be processed is enhanced with the corrected coefficients, the contrast of the image is maintained and loss of detail is avoided.
Drawings
FIG. 1 is a flowchart of a first embodiment of a method for enhancing dynamic range of an image provided by the present application;
FIG. 2 is a flowchart of a second embodiment of a method for enhancing dynamic range of an image provided by the present application;
FIG. 3 is a flowchart of a third embodiment of a method for enhancing dynamic range of an image provided by the present application;
FIG. 4 is a block diagram of a computer device in which an apparatus for enhancing dynamic range of an image according to an exemplary embodiment of the present application is installed;
fig. 5 is a schematic structural diagram of a first embodiment of an apparatus for enhancing a dynamic range of an image according to the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
The application provides a method and a device for enhancing the dynamic range of an image, which are used for enhancing the image to be processed and simultaneously preserving the details in the image.
Several specific examples are given below to explain the technical solution of the present application in detail. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a flowchart of a first embodiment of a method for enhancing a dynamic range of an image according to the present application. Referring to fig. 1, the method provided in this embodiment may include:
s101, determining to-be-processed pixel points in the to-be-processed image and initial enhancement coefficients of the to-be-processed pixel points.
Specifically, the pixel points to be processed are the pixel points in the image to be processed whose brightness values are greater than a preset value. It should be noted that the pixel points to be processed and their initial enhancement coefficients may be determined by existing methods; for example, the pixel points whose brightness values are greater than the preset value are determined to be the pixel points to be processed, and filtering is applied to them to obtain their initial enhancement coefficients.
S102, calculating a correction coefficient of the pixel point to be processed, wherein the correction coefficient is positively correlated with the gray value of the pixel point to be processed.
In the method provided by this embodiment, after the pixel points to be processed and their initial enhancement coefficients are determined, the initial enhancement coefficients are not used directly to enhance the pixel points, as in the prior art. Instead, a correction coefficient is calculated for each pixel point to be processed, the correction coefficient is used to correct its initial enhancement coefficient to obtain a corrected enhancement coefficient, and finally the corrected enhancement coefficient is used to enhance the pixel point to be processed.
It should be noted that the correction coefficient of a pixel point to be processed is positively correlated with its gray value: the larger the gray value of the pixel point to be processed, the larger its correction coefficient. For example, in one embodiment, the correction coefficient of a pixel point to be processed may be calculated as the product of its gray value and a preset value. The preset value is set according to actual needs, and this embodiment does not limit its specific value. For example, in one embodiment, the preset value may be set to 1/(2H), where H is the theoretical maximum gray level of the image to be processed; if the theoretical maximum gray level is 1023 and the theoretical minimum gray level is 0, the preset value is 1/2046. In another embodiment, the preset value may be set to 1/(2L), where L is the theoretical maximum gray level of an 8-bit image, namely 255; in that case the preset value is 1/510.
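As a minimal numerical sketch of this step (the function and parameter names below are illustrative, not from the patent), the correction coefficient of a pixel point is its gray value times the preset value 1/(2H):

```python
def correction_coefficient(gray_value, max_gray_level):
    """Correction coefficient positively correlated with the gray value.

    preset = 1 / (2 * H), where H is the theoretical maximum gray level
    of the image to be processed (1023 for a 10-bit image, 255 for 8-bit).
    """
    preset = 1.0 / (2 * max_gray_level)
    return gray_value * preset
```

With this choice the correction coefficient ranges from 0 (gray value 0) to 0.5 (gray value equal to the theoretical maximum).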
And S103, correcting the initial enhancement coefficient by adopting the correction coefficient to obtain the corrected enhancement coefficient of the pixel point to be processed.
Specifically, in one possible implementation of the present application, the corrected enhancement coefficient of each pixel point to be processed equals its initial enhancement coefficient plus the product of that initial enhancement coefficient and its correction coefficient. That is,

BF_i = BI_i + BI_i × BS_i

where BF_i is the corrected enhancement coefficient of the i-th pixel point to be processed, BI_i is its initial enhancement coefficient, and BS_i is its correction coefficient.
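This update rule can be expressed as a one-line helper (names are illustrative only):

```python
def corrected_coefficient(bi, bs):
    """Corrected enhancement coefficient: BF = BI + BI * BS = BI * (1 + BS)."""
    return bi + bi * bs
```

Since BS is positively correlated with the gray value, brighter pixel points receive a proportionally larger boost to their enhancement coefficient.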
It should be noted that, in the method provided in this embodiment, the correction coefficient of each pixel point to be processed is positively correlated with its gray value: the larger the gray value, the larger the correction coefficient. After the initial enhancement coefficients are corrected with these correction coefficients, pixel points with different gray values have corrected enhancement coefficients that differ more markedly from one another. As a result, after the corrected coefficients are used to enhance the image to be processed, the contrast of the image is maintained and detail loss is avoided.
And S104, adopting the corrected enhancement coefficient to perform enhancement processing on the pixel point to be processed.
Specifically, the pixel value of each pixel point to be processed is updated to its original pixel value multiplied by that pixel point's corrected enhancement coefficient, yielding the enhanced image.
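Putting S101–S104 together, a sketch of the whole enhancement pass might look as follows; the initial enhancement coefficients are assumed to be given (e.g. produced by the filtering mentioned in S101), and all names are hypothetical:

```python
def enhance(pixels, initial_coeffs, max_gray_level):
    """Enhance each pixel to be processed with its corrected coefficient."""
    preset = 1.0 / (2 * max_gray_level)
    out = []
    for value, bi in zip(pixels, initial_coeffs):
        bs = value * preset      # correction coefficient (S102)
        bf = bi * (1 + bs)       # corrected enhancement coefficient (S103)
        out.append(value * bf)   # enhancement (S104)
    return out
```

This sketch uses the simple gray-value-times-preset correction of the first embodiment; the second and third embodiments below replace the computation of `bs`.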
In the method provided by this embodiment, the pixel points to be processed in the image to be processed and their initial enhancement coefficients are determined, the correction coefficient of each pixel point to be processed is calculated, the initial enhancement coefficient of each pixel point is corrected with its correction coefficient to obtain a corrected enhancement coefficient, and finally each pixel point to be processed is enhanced with its corrected enhancement coefficient. Because the correction coefficient of each pixel point to be processed is positively correlated with its gray value, pixel points with different gray values obtain corrected enhancement coefficients that differ markedly from one another; thus, after the image to be processed is enhanced with the corrected coefficients, the contrast of the image is maintained and loss of detail is avoided.
Fig. 2 is a flowchart of a second embodiment of the image dynamic range enhancement method provided in the present application. Referring to fig. 2, the method provided in this embodiment, on the basis of the foregoing embodiment, step S101, may include:
s201, determining a processing coefficient corresponding to the image to be processed according to the image depth of the image to be processed.
The image depth of an image is the number of bits used to store each pixel; it determines the maximum number of colors that can appear in a color image, or the number of gray levels in a gray-scale image. For example, for a gray-scale image with an image depth of 8, the number of gray levels is 2 to the 8th power, i.e., 256; the theoretical minimum gray level of such an image is 0 and the theoretical maximum gray level is 255.
In addition, in this embodiment, the processing coefficient corresponding to the image to be processed may be determined according to the image depth of the image to be processed and a preset corresponding relationship between the image depth and the processing coefficient. The corresponding relation between the image depth and the processing coefficient is set according to actual needs. In the present embodiment, this is not limited. For example, table 1 shows the correspondence between preset image depths and processing coefficients according to an exemplary embodiment of the present application.
TABLE 1 Correspondence between image depth and processing coefficient

Image depth | Processing coefficient
8 bit       | 510
10 bit      | 2046
……          | ……
In the example shown in Table 1, the processing coefficient corresponding to a given image depth is set to 2A, where A is the theoretical maximum gray level for that image depth. For example, for an image depth of 8 bits, the theoretical maximum gray level is 255, so the processing coefficient is set to 510.
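Table 1's rule reduces to a one-liner (the helper name is assumed, not from the patent):

```python
def processing_coefficient(bit_depth):
    """Processing coefficient 2 * A, where A = 2**bit_depth - 1 is the
    theoretical maximum gray level for the given image depth."""
    return 2 * (2 ** bit_depth - 1)
```

For example, processing_coefficient(8) gives 510 and processing_coefficient(10) gives 2046, matching Table 1.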
S202, calculating a first deviation amount of the gray value of the pixel point to be processed; the first deviation value is the difference value between the gray value of the pixel point to be processed and the average value of the gray values of all the pixel points to be processed.
Specifically, the specific implementation process of this step may include: (1) calculating the average value of the gray values of all the pixel points to be processed; (2) calculating a first deviation amount of the gray value of the pixel point to be processed; the first deviation value is the difference value between the gray value of the pixel point to be processed and the average value of the gray values of all the pixel points to be processed.
For example, in one embodiment there are 3 pixel points to be processed in the image to be processed (that is, the brightness values of 3 pixel points are greater than the preset value), with gray values of 123, 234 and 212 respectively. The average of the gray values of all pixel points to be processed is then (123 + 234 + 212) / 3 = 569/3 ≈ 189.7. The first deviation of the first pixel point's gray value is therefore approximately -66.7, that of the second approximately 44.3, and that of the third approximately 22.3.
S203, determining a first correction coefficient of the pixel point to be processed; wherein the first correction coefficient is a ratio of the first deviation amount to the processing coefficient.
With reference to the above example, if the image depth of the image to be processed is 8 bits, the processing coefficient determined from Table 1 is 510. In this step, the first correction coefficient of the first pixel point to be processed is therefore approximately -66.7/510, that of the second approximately 44.3/510, and that of the third approximately 22.3/510.
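The first deviations and first correction coefficients of the running example can be recomputed numerically (a small sketch; the variable names are illustrative):

```python
grays = [123, 234, 212]            # gray values of the pixels to be processed
mean = sum(grays) / len(grays)     # 569 / 3, approximately 189.7
first_deviations = [g - mean for g in grays]
processing_coeff = 510             # 8-bit image depth, per Table 1
first_corrections = [d / processing_coeff for d in first_deviations]
```

Note that the deviations from the global mean always sum to zero, which is a quick sanity check on the arithmetic.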
In this embodiment, the first correction coefficient theoretically lies in the range -0.5 to 0.5.
S204, calculating a second deviation amount of the gray value of the pixel point to be processed; the second deviation amount is the difference between the gray value of the pixel point to be processed and the average of the gray values of all pixel points to be processed within the designated neighborhood of that pixel point.
Specifically, the specific implementation process of this step may include:
(1) and for each pixel point to be processed, calculating the average value of the gray values of all the pixel points to be processed in the appointed neighborhood of the pixel point to be processed.
For example, in one embodiment the designated neighborhood is a 3 × 3 neighborhood; in this step, for each pixel point to be processed, the average of the gray values of all pixel points to be processed within its 3 × 3 neighborhood is calculated. For instance, if there are 5 pixel points to be processed within the 3 × 3 neighborhood of a pixel point to be processed, the average of the gray values of those 5 pixel points is calculated.
(2) And calculating a second deviation amount of the gray value of each pixel point to be processed, wherein the second deviation amount of the gray value of each pixel point to be processed is a difference value between the gray value of the pixel point to be processed and an average value of the gray values of all the pixel points to be processed in a specified neighborhood of the pixel point to be processed.
In combination with the above example: there are 3 pixel points to be processed; the first lies in the 3 × 3 neighborhood of the second, the second lies in the 3 × 3 neighborhood of the first, and no other pixel points to be processed lie in the 3 × 3 neighborhood of the third. The average gray value over the designated neighborhood of the first pixel point is then (123 + 234) / 2 = 178.5, so the second deviation of its gray value is -55.5. Likewise, the neighborhood average for the second pixel point is 178.5, so its second deviation is 55.5; and the neighborhood average for the third pixel point is 212, so its second deviation is 0.
S205, determining a second correction coefficient of the pixel point to be processed; wherein the second correction coefficient is a ratio of the second deviation amount to the processing coefficient.
By combining the above example, the second correction coefficient of the first pixel to be processed is calculated to be-55.5/510, the second correction coefficient of the second pixel to be processed is 55.5/510, and the second correction coefficient of the third pixel to be processed is 0.
In this embodiment, the second correction coefficient is theoretically in the range of-0.5 to 0.5.
And S206, determining the average value of the first correction coefficient and the second correction coefficient as the correction coefficient of the pixel point to be processed.
With the above example, the correction coefficient of the first pixel point to be processed is determined to be approximately (-66.7 - 55.5)/1020 ≈ -122.2/1020, that of the second approximately (44.3 + 55.5)/1020 ≈ 99.8/1020, and that of the third approximately 22.3/1020.
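The complete second-embodiment computation for the running example (S201–S206) can be checked with a short sketch; the 3 × 3 neighborhoods are encoded directly as index lists, following the layout described above, and all names are illustrative:

```python
grays = [123, 234, 212]
proc = 510  # processing coefficient for an 8-bit image (Table 1)

# First correction: deviation from the mean of all pixels to be processed.
mean_all = sum(grays) / len(grays)
first = [(g - mean_all) / proc for g in grays]

# Second correction: deviation from the mean over each pixel's designated
# neighborhood; pixels 0 and 1 share a neighborhood, pixel 2 is alone.
neighborhoods = [[0, 1], [0, 1], [2]]
second = []
for i, nb in enumerate(neighborhoods):
    mean_nb = sum(grays[j] for j in nb) / len(nb)
    second.append((grays[i] - mean_nb) / proc)

# Final correction coefficient: average of the two (S206).
corrections = [(f + s) / 2 for f, s in zip(first, second)]
```

Each final coefficient is the mean of the global-deviation and neighborhood-deviation terms, so both large-scale and local contrast influence the correction.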
This embodiment provides a method for enhancing the dynamic range of an image in which the correction coefficient is calculated as described above. After the initial enhancement coefficients are corrected with these correction coefficients, pixel points to be processed with different gray values obtain clearly distinguishable corrected enhancement coefficients, so that after the image to be processed is enhanced with the corrected coefficients, the details of the image are retained.
Fig. 3 is a flowchart of a third embodiment of the image dynamic range enhancement method provided in the present application. Referring to fig. 3, on the basis of the first embodiment, in the method provided in this embodiment, step S101 may include:
s301, determining a processing coefficient corresponding to the image to be processed according to the image depth of the image to be processed.
Specifically, the specific implementation process and implementation principle of this step may be referred to the description in step S201, and are not described herein again.
S302, calculating the deviation amount of the gray value of the pixel point to be processed.
Specifically, in an embodiment of the present application, the deviation amount of the gray value of the to-be-processed pixel point may refer to a difference between the gray value of the to-be-processed pixel point and an average value of the gray values of all to-be-processed pixel points. In another embodiment of the present application, the deviation amount of the gray value of the to-be-processed pixel point may refer to a difference between the gray value of the to-be-processed pixel point and an average of the gray values of all to-be-processed pixel points located in a designated neighborhood of the to-be-processed pixel point.
Further, when the deviation amount of the gray value of the pixel to be processed refers to the difference between the gray value of the pixel to be processed and the average value of the gray values of all the pixel to be processed, the specific implementation process of this step may include the following steps:
(1) calculating the average value of the gray values of all the pixel points to be processed;
(2) calculating the deviation amount of the gray value of the pixel point to be processed; and the deviation value of the gray value of the pixel point to be processed is the difference value between the gray value of the pixel point to be processed and the average value.
Specifically, the specific implementation process and implementation principle of each step may refer to the description in the foregoing embodiments, and are not described herein again.
In addition, when the deviation value of the gray value of the pixel point to be processed refers to a difference value between the gray value of the pixel point to be processed and an average value of the gray values of all the pixel points to be processed in the specified neighborhood of the pixel point to be processed, the specific implementation process of the step may include the following steps:
(1) calculating the gray average value corresponding to the pixel point to be processed; and the gray average value corresponding to the pixel point to be processed is equal to the average value of the gray values of all the pixel points to be processed in the appointed neighborhood of the pixel point to be processed.
(2) Calculating the deviation amount of the gray value of the pixel point to be processed; and the deviation amount of the gray value of the pixel point to be processed is the difference value between the gray value of the pixel point to be processed and the average gray value corresponding to the pixel point to be processed.
S303, calculating a correction coefficient of the pixel point to be processed; and the correction coefficient of the pixel point to be processed is the ratio of the deviation amount of the gray value of the pixel point to be processed to the processing coefficient.
For example, in one embodiment there are 3 pixel points to be processed in the image to be processed, with gray values of 123, 234 and 212 respectively, and in this example the deviation amount of a pixel point's gray value is the difference between its gray value and the average of the gray values of all pixel points to be processed (569/3 ≈ 189.7). The deviation of the first pixel point's gray value is then approximately -66.7, that of the second approximately 44.3, and that of the third approximately 22.3.
Further, if the image depth of the image to be processed is 8 bits, the processing coefficient determined from Table 1 is 510; the correction coefficient of the first pixel point to be processed is then approximately -66.7/510, that of the second approximately 44.3/510, and that of the third approximately 22.3/510.
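The third embodiment uses a single deviation divided by the processing coefficient, rather than averaging two deviations as in the second embodiment. Using the global-mean variant of S302, a sketch (function name assumed, not from the patent) is:

```python
def correction_coefficients(grays, bit_depth):
    """Third-embodiment correction coefficients: deviation from the global
    mean divided by the processing coefficient 2 * (2**bit_depth - 1)."""
    proc = 2 * (2 ** bit_depth - 1)
    mean = sum(grays) / len(grays)
    return [(g - mean) / proc for g in grays]
```

The neighborhood-mean variant of S302 would only replace `mean` with a per-pixel average over the designated neighborhood, as in the second embodiment.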
This embodiment provides a method for enhancing the dynamic range of an image in which the correction coefficient is calculated as described above. After the initial enhancement coefficients are corrected with these correction coefficients, pixel points to be processed with different gray values obtain clearly distinguishable corrected enhancement coefficients, and after the image to be processed is enhanced with the corrected coefficients, the details of the image are retained.
Corresponding to the embodiments of the method for enhancing the dynamic range of the image, the application also provides embodiments of the device for enhancing the dynamic range of the image.
Embodiments of the apparatus for enhancing the dynamic range of an image can be applied to computer devices. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking software implementation as an example, the apparatus is formed, as a logical device, by the processor of the computer device on which it resides reading the corresponding computer program instructions from storage into memory and running them. In terms of hardware, fig. 4 shows a hardware structure diagram of a computer device on which an apparatus for enhancing the dynamic range of an image according to an exemplary embodiment of the present application resides; besides the memory 410, the processor 420, the storage 430, and the network interface 440 shown in fig. 4, the computer device may also include other hardware according to the actual function of the apparatus, which is not described again here.
Fig. 5 is a schematic structural diagram of a first embodiment of an apparatus for enhancing the dynamic range of an image according to the present application. Referring to fig. 5, the apparatus provided in this embodiment may include: a determination module 510, a calculation module 520, a correction module 530, and a processing module 540, wherein,
the determining module 510 is configured to determine to-be-processed pixel points in the to-be-processed image and initial enhancement coefficients of the to-be-processed pixel points;
the calculating module 520 is configured to calculate a correction coefficient of the pixel point to be processed; wherein the correction coefficient is positively correlated with the gray value of the pixel point to be processed;
the correction module 530 is configured to correct the initial enhancement coefficient by using the correction coefficient to obtain a corrected enhancement coefficient;
the processing module 540 is configured to perform enhancement processing on the pixel point to be processed by using the modified enhancement coefficient.
The apparatus provided in this embodiment may be used to implement the technical solution shown in fig. 1, and the implementation principle and the technical effect are similar, which are not described herein again.
Further, the calculating module 520 is specifically configured to:
determining a processing coefficient corresponding to the image to be processed according to the image depth of the image to be processed;
calculating a first deviation amount of the gray value of the pixel point to be processed; wherein the first deviation amount is the difference between the gray value of the pixel point to be processed and the average value of the gray values of all the pixel points to be processed;
determining a first correction coefficient of the pixel point to be processed; wherein the first correction coefficient is a ratio of the first deviation amount to the processing coefficient;
calculating a second deviation amount of the gray value of the pixel point to be processed; wherein the second deviation amount is the difference between the gray value of the pixel point to be processed and the average value of the gray values of all the pixel points to be processed in the designated neighborhood of the pixel point to be processed;
determining a second correction coefficient of the pixel point to be processed; wherein the second correction coefficient is a ratio of the second deviation amount to the processing coefficient;
and determining the average value of the first correction coefficient and the second correction coefficient as the correction coefficient of the pixel point to be processed.
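The six calculation steps above can be sketched in plain Python as follows. This is an illustrative reading of the patent text: the function name, the 3×3 default neighborhood size, and the clipped-border handling are assumptions not fixed by the description.

```python
def correction_coefficients(gray, processing_coeff, size=3):
    """Per-pixel correction coefficient: the average of the global-mean-based
    and neighborhood-mean-based coefficients, for a 2-D list of gray values."""
    h, w = len(gray), len(gray[0])
    # average gray value of all the pixel points to be processed
    global_mean = sum(map(sum, gray)) / (h * w)
    half = size // 2
    coeffs = []
    for y in range(h):
        row = []
        for x in range(w):
            # mean gray value over the size x size neighborhood (clipped at borders)
            ys = range(max(0, y - half), min(h, y + half + 1))
            xs = range(max(0, x - half), min(w, x + half + 1))
            vals = [gray[j][i] for j in ys for i in xs]
            local_mean = sum(vals) / len(vals)
            first = (gray[y][x] - global_mean) / processing_coeff   # first correction coefficient
            second = (gray[y][x] - local_mean) / processing_coeff   # second correction coefficient
            row.append((first + second) / 2.0)
        coeffs.append(row)
    return coeffs
```

Note that for a uniform image both deviation amounts are zero, so every correction coefficient is zero and the initial enhancement coefficients would be left unchanged.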
The apparatus provided in this embodiment may be used to implement the technical solution shown in fig. 2, and the implementation principle and the technical effect are similar, which are not described herein again.
Further, the calculating module 520 is specifically configured to:
determining a processing coefficient corresponding to the image to be processed according to the image depth of the image to be processed;
calculating the deviation amount of the gray value of the pixel point to be processed;
calculating a correction coefficient of the pixel point to be processed; and the correction coefficient of the pixel point to be processed is the ratio of the deviation amount of the gray value of the pixel point to be processed to the processing coefficient.
Further, the calculating module 520 is specifically configured to:
calculating the average value of the gray values of all the pixel points to be processed;
calculating the deviation amount of the gray value of the pixel point to be processed; wherein the deviation amount of the gray value of the pixel point to be processed is the difference between the gray value of the pixel point to be processed and the average value.
Further, the calculating module 520 is specifically configured to:
calculating the gray average value corresponding to the pixel point to be processed; wherein the gray average value corresponding to the pixel point to be processed is equal to the average value of the gray values of all the pixel points to be processed in the designated neighborhood of the pixel point to be processed;
calculating the deviation amount of the gray value of the pixel point to be processed; wherein the deviation amount of the gray value of the pixel point to be processed is the difference between the gray value of the pixel point to be processed and the gray average value corresponding to the pixel point to be processed.
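A minimal sketch of this neighborhood-based deviation in plain Python is given below. The function name, the 3×3 default neighborhood, and the clipping of the neighborhood at the image borders are illustrative assumptions, not details fixed by the patent text.

```python
def neighborhood_deviation(gray, y, x, size=3):
    """Deviation amount of pixel (y, x): its gray value minus the mean gray
    value of the size x size designated neighborhood (clipped at borders)."""
    h, w = len(gray), len(gray[0])
    half = size // 2
    ys = range(max(0, y - half), min(h, y + half + 1))
    xs = range(max(0, x - half), min(w, x + half + 1))
    vals = [gray[j][i] for j in ys for i in xs]
    return gray[y][x] - sum(vals) / len(vals)
```

For example, with a 3x3 image that is 10 everywhere except 19 at its center, the center pixel's neighborhood mean is (8*10 + 19)/9 = 11, so its deviation amount is 19 - 11 = 8.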
The apparatus provided in this embodiment may be used to implement the technical solution shown in fig. 3, and the implementation principle and the technical effect are similar, which are not described herein again.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware controlled by program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (6)

1. A method for image dynamic range enhancement, the method comprising:
determining a pixel point to be processed in an image to be processed and an initial enhancement coefficient of the pixel point to be processed;
determining a processing coefficient corresponding to the image to be processed according to the image depth of the image to be processed;
calculating a first deviation amount of the gray value of the pixel point to be processed; wherein the first deviation amount is the difference between the gray value of the pixel point to be processed and the average value of the gray values of all the pixel points to be processed;
determining a first correction coefficient of the pixel point to be processed; wherein the first correction coefficient is a ratio of the first deviation amount to the processing coefficient;
calculating a second deviation amount of the gray value of the pixel point to be processed; wherein the second deviation amount is the difference between the gray value of the pixel point to be processed and the average value of the gray values of all the pixel points to be processed in the designated neighborhood of the pixel point to be processed;
determining a second correction coefficient of the pixel point to be processed; wherein the second correction coefficient is a ratio of the second deviation amount to the processing coefficient;
determining an average value of the first correction coefficient and the second correction coefficient as a correction coefficient of the pixel point to be processed, wherein the correction coefficient is positively correlated with a gray value of the pixel point to be processed;
correcting the initial enhancement coefficient by using the correction coefficient to obtain a corrected enhancement coefficient;
and adopting the corrected enhancement coefficient to perform enhancement processing on the pixel point to be processed.
2. The method according to claim 1, wherein the calculating the first deviation amount of the gray-level value of the pixel point to be processed comprises:
calculating the average value of the gray values of all the pixel points to be processed;
calculating the first deviation amount of the gray value of the pixel point to be processed; wherein the first deviation amount of the gray value of the pixel point to be processed is the difference between the gray value of the pixel point to be processed and the average value.
3. The method according to claim 1, wherein the calculating the second deviation amount of the gray-level value of the pixel point to be processed comprises:
calculating the gray average value corresponding to the pixel point to be processed; wherein the gray average value corresponding to the pixel point to be processed is equal to the average value of the gray values of all the pixel points to be processed in the designated neighborhood of the pixel point to be processed;
calculating the second deviation amount of the gray value of the pixel point to be processed; wherein the second deviation amount of the gray value of the pixel point to be processed is the difference between the gray value of the pixel point to be processed and the gray average value corresponding to the pixel point to be processed.
4. An apparatus for enhancing dynamic range of an image, the apparatus comprising a determining module, a calculating module, a modifying module and a processing module, wherein,
the determining module is used for determining to-be-processed pixel points in the to-be-processed image and initial enhancement coefficients of the to-be-processed pixel points;
the calculation module is specifically configured to:
determining a processing coefficient corresponding to the image to be processed according to the image depth of the image to be processed;
calculating a first deviation amount of the gray value of the pixel point to be processed; wherein the first deviation amount is the difference between the gray value of the pixel point to be processed and the average value of the gray values of all the pixel points to be processed;
determining a first correction coefficient of the pixel point to be processed; wherein the first correction coefficient is a ratio of the first deviation amount to the processing coefficient;
calculating a second deviation amount of the gray value of the pixel point to be processed; wherein the second deviation amount is the difference between the gray value of the pixel point to be processed and the average value of the gray values of all the pixel points to be processed in the designated neighborhood of the pixel point to be processed;
determining a second correction coefficient of the pixel point to be processed; wherein the second correction coefficient is a ratio of the second deviation amount to the processing coefficient;
determining the average value of the first correction coefficient and the second correction coefficient as the correction coefficient of the pixel point to be processed; wherein the correction coefficient is positively correlated with the gray value of the pixel point to be processed;
the correction module is used for correcting the initial enhancement coefficient by adopting the correction coefficient to obtain a corrected enhancement coefficient;
and the processing module is used for adopting the corrected enhancement coefficient to carry out enhancement processing on the pixel point to be processed.
5. The apparatus according to claim 4, wherein the calculating module calculates a first deviation amount of the gray-level value of the pixel to be processed, and is specifically configured to:
calculating the average value of the gray values of all the pixel points to be processed;
calculating a first deviation amount of the gray value of the pixel point to be processed; wherein the first deviation amount of the gray value of the pixel point to be processed is the difference between the gray value of the pixel point to be processed and the average value.
6. The apparatus according to claim 5, wherein the calculating module calculates a second deviation amount of the gray-level value of the pixel to be processed, and is specifically configured to:
calculating the gray average value corresponding to the pixel point to be processed; wherein the gray average value corresponding to the pixel point to be processed is equal to the average value of the gray values of all the pixel points to be processed in the designated neighborhood of the pixel point to be processed;
calculating a second deviation amount of the gray value of the pixel point to be processed; wherein the second deviation amount of the gray value of the pixel point to be processed is the difference between the gray value of the pixel point to be processed and the gray average value corresponding to the pixel point to be processed.
CN201810252197.6A 2018-03-26 2018-03-26 Method and device for enhancing dynamic range of image Active CN108447037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810252197.6A CN108447037B (en) 2018-03-26 2018-03-26 Method and device for enhancing dynamic range of image


Publications (2)

Publication Number Publication Date
CN108447037A CN108447037A (en) 2018-08-24
CN108447037B true CN108447037B (en) 2022-02-18

Family

ID=63197077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810252197.6A Active CN108447037B (en) 2018-03-26 2018-03-26 Method and device for enhancing dynamic range of image

Country Status (1)

Country Link
CN (1) CN108447037B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194267B (en) * 2021-04-29 2023-03-24 北京达佳互联信息技术有限公司 Image processing method and device and photographing method and device

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101639936A (en) * 2009-04-28 2010-02-03 北京捷科惠康科技有限公司 X-ray image enhancing method and system thereof
JP2011041029A (en) * 2009-08-11 2011-02-24 Sony Corp Video signal processing apparatus, enhance gain generation method and program
CN103002225A (en) * 2011-04-20 2013-03-27 Csr技术公司 Multiple exposure high dynamic range image capture
CN103116879A (en) * 2013-03-15 2013-05-22 重庆大学 Neighborhood windowing based non-local mean value CT (Computed Tomography) imaging de-noising method
CN104166969A (en) * 2014-08-25 2014-11-26 广东威创视讯科技股份有限公司 Digital image enhancement method and system
CN105139343A (en) * 2014-05-30 2015-12-09 上海贝卓智能科技有限公司 Image processing method and device
CN105184759A (en) * 2015-09-22 2015-12-23 中国科学院西安光学精密机械研究所 Image adaptive enhancement method based on histogram compactness transformation
CN105513024A (en) * 2015-12-07 2016-04-20 魅族科技(中国)有限公司 Method and terminal for processing image
CN106504205A (en) * 2016-10-20 2017-03-15 凌云光技术集团有限责任公司 A kind of image defogging method and terminal
CN106531092A (en) * 2016-11-08 2017-03-22 青岛海信电器股份有限公司 Method for adjusting image brightness and contrast ratio, video processor and display device
CN107465848A (en) * 2016-06-03 2017-12-12 上海顺久电子科技有限公司 A kind of method and device of image procossing
CN107680541A (en) * 2017-10-30 2018-02-09 杨晓艳 A kind of method and device for reducing liquid crystal display power consumption

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2328124B1 (en) * 2009-11-25 2019-05-15 Agfa Nv Method of enhancing the contrast of spatially-localized phenomena in an image


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A New Supervised Method for Blood Vessel Segmentation in Retinal Images by Using Gray-Level and Moment Invariants-Based Features;Diego Marín 等;《IEEE Transactions on Medical Imaging》;20110131;第30卷(第1期);146-158 *
A no-reference contrast assessment index based on foreground and background;Anupam Jaiswal 等;《2013 Students Conference on Engineering and Systems》;20130627;1-5 *
光照不均图像自适应增强方法;王明蓉 等;《激光杂志》;20170625;第38卷(第06期);74-77 *
基于Harr小波-Contourlet变换的图像增强算法;王建华;《西北民族大学学报(自然科学版)》;20090630;第30卷(第02期);50-54 *


Similar Documents

Publication Publication Date Title
CN109003249B (en) Method, device and terminal for enhancing image details
CN110827229B (en) Infrared image enhancement method based on texture weighted histogram equalization
JP5889431B2 (en) Image processing apparatus and image processing method
JP4795473B2 (en) Image processing apparatus and control method thereof
CN109427047B (en) Image processing method and device
US20210319537A1 (en) Image processing method, image processing apparatus, image processing system, and memory medium
JP5177142B2 (en) Image processing apparatus, display apparatus, image processing method, and program thereof
CN110223244B (en) Image processing method and device, electronic equipment and storage medium
US9542617B2 (en) Image processing device and image processing method for correcting a pixel using a corrected pixel statistical value
CN111340721A (en) Pixel correction method, device, equipment and readable storage medium
US9571744B2 (en) Video processing method and apparatus
CN108447037B (en) Method and device for enhancing dynamic range of image
CN115965544A (en) Image enhancement method and system for self-adaptive brightness adjustment
US9715720B1 (en) System and method for reducing image noise
JP5089797B2 (en) Image processing apparatus and control method thereof
US7375770B2 (en) Method for luminance transition improvement
US7649652B2 (en) Method and apparatus for expanding bit resolution using local information of image
JP2021086284A (en) Image processing device, image processing method, and program
CN113364994B (en) Backlight compensation method and backlight compensation circuit
US8351729B2 (en) Apparatus, method, and program for image correction
US11032446B2 (en) Image processing device, image processing method, and program for correcting color in an image
CN113393394A (en) Low-illumination gray level image enhancement method and device based on gamma conversion and storage medium
JP4486661B2 (en) Dynamic contrast enhancement circuit and enhancement method
JP6334358B2 (en) Image signal processing apparatus and bit extension calculation processing method
CN109417616B (en) Method and apparatus for image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant