WO2022241676A1 - Tone mapping method, image processing device and imaging device - Google Patents
Tone mapping method, image processing device and imaging device
- Publication number
- WO2022241676A1 (PCT/CN2021/094666)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- sliding window
- sliding
- tone mapping
- image
- Prior art date
Links
- 238000013507 mapping Methods 0.000 title claims abstract description 247
- 238000000034 method Methods 0.000 title claims abstract description 100
- 238000012545 processing Methods 0.000 title claims abstract description 41
- 238000003384 imaging method Methods 0.000 title claims abstract description 25
- 238000003860 storage Methods 0.000 claims abstract description 28
- 238000004364 calculation method Methods 0.000 claims description 32
- 238000007781 pre-processing Methods 0.000 claims description 22
- 230000006835 compression Effects 0.000 claims description 14
- 238000007906 compression Methods 0.000 claims description 14
- 230000002207 retinal effect Effects 0.000 claims description 7
- 230000001054 cortical effect Effects 0.000 claims description 6
- 238000010586 diagram Methods 0.000 description 23
- 230000008569 process Effects 0.000 description 15
- 230000008859 change Effects 0.000 description 10
- 230000000694 effects Effects 0.000 description 8
- 230000006870 function Effects 0.000 description 8
- 238000004590 computer program Methods 0.000 description 7
- 238000004422 calculation algorithm Methods 0.000 description 5
- 230000003287 optical effect Effects 0.000 description 3
- 230000002829 reductive effect Effects 0.000 description 3
- 230000000903 blocking effect Effects 0.000 description 2
- 238000013500 data storage Methods 0.000 description 2
- 238000009826 distribution Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000001788 irregular Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000005070 sampling Methods 0.000 description 2
- 239000007787 solid Substances 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 230000006978 adaptation Effects 0.000 description 1
- 230000003044 adaptive effect Effects 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000013467 fragmentation Methods 0.000 description 1
- 238000006062 fragmentation reaction Methods 0.000 description 1
- 230000000670 limiting effect Effects 0.000 description 1
- 238000012417 linear regression Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000036961 partial effect Effects 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 230000002441 reversible effect Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
- H04N1/6027—Correction or control of colour gradation or colour contrast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
Definitions
- the present application relates to the field of image processing, in particular to a tone mapping method, an image processing device and an imaging device.
- Tone Mapping can adjust the dynamic range of the image, enhance the details of the picture, and adjust the contrast.
- In existing tone mapping methods, the changes in brightness and color are not consistent enough, which leads to problems such as color distortion in the tone-mapped image.
- Embodiments of the present application provide a tone mapping method, a computer-readable storage medium, an image processing device, and an imaging device.
- In a first aspect, embodiments of the present application provide a tone mapping method, including: obtaining the brightness mapping value corresponding to the brightness component of each pixel in an image; obtaining, according to the brightness mapping value, the gain value of each pixel in tone mapping, where the gain value is obtained by calculating the offset value and the gain value simultaneously; and obtaining, according to the gain value corresponding to each pixel, the color mapping value corresponding to the color component of each pixel, so as to complete the tone mapping of the image.
- In a second aspect, embodiments of the present application provide a computer-readable storage medium on which computer instructions are stored; when the computer instructions are executed, the tone mapping method of the first aspect is implemented.
- In a third aspect, an embodiment of the present application provides an image processing device for performing tone mapping on an image. The image processing device includes one or more processors configured to: obtain the brightness mapping value corresponding to the brightness component of each pixel in the image; obtain, according to the brightness mapping value, the gain value of each pixel in tone mapping, where the gain value is obtained by calculating the offset value and the gain value simultaneously; and obtain, according to the gain value corresponding to each pixel, the color mapping value corresponding to the color component of each pixel, so as to complete the tone mapping of the image.
- In a fourth aspect, an embodiment of the present application provides an imaging device, including: a sensor for outputting an image; and one or more processors configured to: obtain the brightness mapping value corresponding to the brightness component of each pixel in the image output by the sensor; obtain, according to the brightness mapping value, the gain value of each pixel in tone mapping, where the gain value is obtained by calculating the offset value and the gain value simultaneously; and obtain, according to the gain value corresponding to each pixel, the color mapping value corresponding to the color component of each pixel, so as to complete the tone mapping of the image.
- the tone mapping method, computer-readable storage medium, image processing device, and imaging device can reduce color distortion in a tone-mapped image.
- FIG. 1 shows a schematic diagram of a tone mapping curve according to an embodiment of the present application;
- FIG. 2 shows a schematic diagram of a tangent to a tone mapping curve according to an embodiment of the present application;
- FIG. 3 shows a flowchart of a tone mapping method according to an embodiment of the present application;
- FIG. 4 shows a schematic diagram of gain values calculated in a tone mapping method according to an embodiment of the present application;
- FIG. 5 shows a flowchart of obtaining a gain value according to an embodiment of the present application;
- FIG. 6 shows a schematic diagram of obtaining one or more pixels adjacent to any pixel according to an embodiment of the present application;
- FIG. 7 shows a flowchart of a tone mapping method according to another embodiment of the present application;
- FIG. 8 shows a schematic diagram of the sliding of a sliding window according to an embodiment of the present application;
- FIG. 9 shows a schematic diagram of a pixel covered multiple times by a sliding window according to an embodiment of the present application;
- FIG. 10 shows a schematic diagram of pixels on the edge of an image covered multiple times by a sliding window according to an embodiment of the present application;
- FIG. 11 shows a schematic diagram of an image processing device according to an embodiment of the present application;
- FIG. 12 shows a schematic diagram of an imaging device according to an embodiment of the present application;
- FIG. 13 shows a schematic diagram of a computer-readable storage medium according to an embodiment of the present application.
- Tone Mapping can be used to change the dynamic range of an image.
- the dynamic range of an image refers to the ratio between the highest brightness value and the lowest brightness value of the image.
- An image with a high dynamic range can show more picture details and make the picture have a more appropriate contrast.
- Tone mapping can estimate the average brightness of the current image, select a suitable brightness domain according to the average brightness, and then map the image to that brightness domain. Tone mapping can be divided into global tone mapping (Global Tone Mapping) and local tone mapping (Local Tone Mapping).
- In local tone mapping, the effect of the mapping is related to the distribution of brightness.
- The basic idea is to divide the image into different regions according to the distribution of brightness values; for different regions, different tone mapping curves (tone curves) are used to map the brightness, and this method allows the image to retain more details.
- FIG. 1 shows an example of a tone mapping curve according to an embodiment of the present application.
- The tone mapping curve may be obtained using, for example, a histogram equalization method, a gamma compression method, a gradient-based compression method, a retinex (retinal cortex model) based tone mapping method, a learning-model-based tone mapping method, or other methods; the tone mapping curves calculated by these commonly used methods differ slightly, but their basic shape conforms to the S-shaped curve shown in FIG. 1.
- Y in FIG. 1 is the brightness component of the image, i.e. the Y value in the original YUV image, and Y' is the brightness mapping value after tone mapping, i.e. the Y value in the tone-mapped YUV image. It can be seen that the curve is flatter where the brightness component is low or high and steeper in the mid-range, which increases the detail and contrast of the picture while preserving the original bright and dark areas.
- The tone mapping curve shown in FIG. 1 is only used as an example to intuitively show the changes of the brightness component in tone mapping; in an actual tone mapping process, usually only the brightness components of the pixels and the corresponding brightness mapping values, i.e. discrete points, can be obtained, and it is difficult to obtain such a continuous curve or a concrete expression for the correspondence between the brightness component and the brightness mapping value.
- The YUV image also includes color components, i.e. UV values. The brightness mapping value corresponding to the brightness component can be obtained by directly looking up the tone mapping curve; however, since the tone mapping curve is a curve characterizing the mapping relationship between the brightness component and the brightness mapping value, the color mapping value corresponding to the color component cannot be obtained by directly looking up the tone mapping curve and can only be obtained indirectly through the brightness component and the brightness mapping value.
- When mapping the color components, the ideal situation is that the color components are transformed in the same proportion as the brightness component, so that the tone-mapped image retains the colors of the original picture.
- The relationship between the brightness component and the brightness mapping value can be written as Y' = G*Y, where G is the gain value; the color mapping values are obtained by tone mapping the color components with the same gain value as the brightness component.
- The gain value used when tone mapping Y is effectively the slope of the tangent of the tone mapping curve near that point, i.e. the slope of the straight line L2 in FIG. 2, which cannot be calculated directly and can only be estimated. A simple estimation method is to directly use the ratio between the brightness mapping value and the brightness component as the gain value, but such an estimation actually yields the slope of the straight line L1 shown in FIG. 2, which may differ greatly in slope from the straight line L2; a gain value estimated in this way may cause serious color distortion in the tone-mapped image compared with the original image.
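- To make the discrepancy concrete, the following sketch compares the two estimates on a smooth stand-in curve. The square-root (gamma-style) curve is an illustrative assumption, not a curve prescribed by this application; it only shows that the ratio Y'/Y (line L1) and the local tangent slope (line L2) can differ substantially.

```python
import numpy as np

def tone_curve(y):
    # Illustrative stand-in for a tone mapping curve (gamma-style compression).
    return np.sqrt(y)

y = 0.25                       # brightness component of one pixel (normalized)
y_mapped = tone_curve(y)       # brightness mapping value Y' = 0.5

ratio_gain = y_mapped / y      # slope of line L1 through the origin -> 2.0
eps = 1e-3                     # numerical tangent slope at Y (line L2) -> ~1.0
tangent_gain = (tone_curve(y + eps) - tone_curve(y - eps)) / (2 * eps)

print(ratio_gain, tangent_gain)  # the two estimates differ by about a factor of 2
```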
- a tone mapping method 300 is provided according to an embodiment of the present application. Referring to FIG. 3 , it includes:
- Step S302: Obtain the brightness mapping value corresponding to the brightness component of each pixel in the image;
- Step S304: Obtain the gain value of each pixel in tone mapping according to the brightness mapping value, where the gain value is obtained by simultaneously calculating an offset value and a gain value;
- Step S306: Obtain the color mapping value corresponding to the color component of each pixel according to the gain value corresponding to each pixel, so as to complete the tone mapping of the image.
- In step S302, obtaining the brightness mapping value corresponding to the brightness component of each pixel in the image can be realized by looking up, on the tone mapping curve, the brightness mapping value corresponding to each brightness component; the brightness mapping curve can be obtained using the aforementioned methods such as histogram equalization.
- The specific method for obtaining the brightness mapping curve is not limited in the embodiments of the present application.
- In step S304, the gain value corresponding to each pixel is acquired according to the brightness mapping value acquired in step S302.
- Unlike the straight line L1, the tangent L2 of the tone mapping curve at Y is, in most cases, a straight line that does not pass through the origin.
- Therefore, it is proposed to obtain the gain value by calculating the gain value and the offset value at the same time: the relationship between the brightness component and the brightness mapping value is considered to be Y' = G*Y + b, where G is the gain value and b is the offset value. The influence of the offset value is taken into account when estimating the gain value, and the gain value and the offset value are estimated simultaneously so as to fit the slope of the tone mapping curve around Y.
- The straight line represented by the gain value and the offset value obtained according to the calculation method of the embodiments of the present application may be located at the position of the straight line L3 in FIG. 4.
- Compared with the straight line L1 obtained by estimating the gain value as the ratio between the brightness mapping value and the brightness component, the slope of the straight line L3 obtained according to the calculation method of the embodiments of the present application is closer to the slope of the tangent L2 at point Y of the tone mapping curve; that is, the obtained gain value is closer to the real gain value.
- In this calculation there are two unknowns, the gain value and the offset value, so at least two sets of data need to be obtained to solve for them; that is, in addition to the brightness component and brightness mapping value of the pixel itself, one or more additional sets of data need to be obtained. The specific way of obtaining these data will be described in detail in the relevant parts below.
- For example, sampling can be performed in the area near the pixel, and the brightness components and brightness mapping values of other pixels near the pixel can be used for the calculation.
- As another example, the brightness components of all pixels in the original image can be analyzed, one or several pixels whose brightness components are closest to that of the pixel can be selected, and the brightness components and brightness mapping values of these pixels can then be used for the calculation, and so on.
- In the tone mapping method 300, for each pixel, the gain value is obtained by calculating the gain value and the offset value at the same time, and the gain value obtained in this way is closer to the real gain value used by the pixel in tone mapping.
- Using this gain value to obtain the color mapping values corresponding to the color components can, as far as possible, ensure that the color components and the brightness component change in the same proportion in tone mapping, so as to reduce the color distortion caused by tone mapping.
- Further, using only the gain value and not the offset value when obtaining the color mapping values ensures that the U value and the V value of the color components keep their original ratio, further avoiding the occurrence of color distortion.
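- As a minimal sketch of this color mapping step, the snippet below applies a per-pixel gain to the chroma planes exactly as described (U' = G*U, V' = G*V, with no offset). It assumes the YUV planes are NumPy float arrays and that the chroma values are already centered around zero (signed chroma); both are assumptions for illustration, not requirements stated by the application.

```python
import numpy as np

def apply_color_mapping(u_plane, v_plane, gain):
    """Tone-map the color components with the per-pixel gain only (no offset),
    so the U:V ratio of every pixel is preserved: U' = G*U, V' = G*V.
    u_plane, v_plane and gain are float arrays of identical shape."""
    return gain * u_plane, gain * v_plane
```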
- obtaining the brightness mapping value corresponding to the brightness component of each pixel in the image in step S302 may further include:
- Step S3021: Perform tone mapping preprocessing on the image;
- Step S3022: Obtain the brightness mapping value corresponding to the brightness component of each pixel according to the result of the tone mapping preprocessing.
- The tone mapping preprocessing performed in step S3021 can be a complete tone mapping process on the original image, i.e. the tone mapping preprocessing can output a processed image. Understandably, the brightness component of each pixel in the processed image is the brightness mapping value corresponding to the brightness component of the pixel in the original image, and the processed image may not be presented to the user.
- Alternatively, the tone mapping preprocessing can obtain only the tone mapping curve without performing a complete tone mapping process; then, in step S3022, the brightness mapping value corresponding to each brightness component is obtained according to the result of the tone mapping preprocessing, i.e. the tone mapping curve.
- The tone mapping preprocessing can be performed using any suitable tone mapping curve generation method in the art, for example one of the histogram equalization method, the gradient-based compression method, the gamma compression method, the tone mapping method based on the retinal cortex (retinex) model and the tone mapping method based on a learning model, but it is not limited to these methods; those skilled in the art can also use other suitable methods, which are not specifically limited.
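- The following sketch illustrates one possible form of this preprocessing, using simple gamma compression (one of the methods listed above) to build a tone mapping curve as a lookup table and then reading off the brightness mapping value Y' for every pixel. The 8-bit luma assumption and the exponent 0.6 are illustrative choices, not values specified by this application.

```python
import numpy as np

def tone_mapping_preprocess(y_plane, gamma=0.6):
    # Step S3021 (stand-in): express the tone mapping curve as a 256-entry LUT,
    # here produced by gamma compression of an 8-bit luma range.
    curve = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.float32)
    # Step S3022: look up the brightness mapping value Y' for each pixel.
    return curve[y_plane.astype(np.uint8)]
```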
- In some embodiments, for any pixel in the image, acquiring the gain value of the pixel in tone mapping in step S304 may include:
- Step S502: Obtain one or more pixels adjacent to the pixel;
- Step S504: Obtain the gain value of the pixel according to the respective brightness components and corresponding brightness mapping values of the pixel and the one or more adjacent pixels.
- the tone mapping method requires at least two sets of data to be able to calculate the gain value and the offset value at the same time.
- the purpose of one or more neighboring pixels is to obtain data of multiple sets of luminance components and luminance map values.
- Fig. 6 shows a schematic diagram of obtaining one or more pixel points adjacent to any pixel point
- Each small square in FIG. 6 represents a pixel. Taking pixel P1 as an example, obtaining one or more pixels adjacent to P1 may be selecting one or more pixels in block 61; it can be understood that the pixels in block 61 are the pixels directly adjacent to pixel P1.
- The one or more pixels adjacent to P1 can also be one or more pixels close to P1 but not directly adjacent to it; for example, the range can be expanded to the range shown in box 62. It should be noted that, unlike box 61, box 62 can be expanded further, the area represented by box 62 is not necessarily a square and can also be a rectangle or even an irregular figure, and it need not necessarily be centered on pixel P1.
- The reason for selecting one or more pixels adjacent to pixel P1 is that the brightness components of pixels adjacent to P1 are, with high probability, close to the brightness component of P1; that is, their brightness components are closer to the brightness component of pixel P1 on the tone mapping curve shown in FIG. 1, so that the calculated gain value can be closer to the real gain value.
- The adjacent one or more pixels may be randomly selected in such an area.
- Alternatively, the adjacent one or more pixels may be the one or more pixels in such an area whose brightness components have the smallest difference from the brightness component of P1, so that the calculated gain value is closer to the real gain value, although this also means a larger amount of calculation.
- In step S504, the gain value and the corresponding offset value of pixel P1 are calculated jointly according to the brightness component and brightness mapping value of pixel P1 and the respective brightness components and brightness mapping values of the one or more pixels selected from the above-mentioned area.
- Only one pixel adjacent to P1 can be selected, such as pixel P2, and the gain value and offset value can then be obtained by solving a system of two linear equations in two unknowns.
- This simplifies the calculation, but it also means that the obtained gain value may still have a large deviation.
- Alternatively, multiple pixels adjacent to P1 can be selected, such as pixels P2-P6; six groups of corresponding brightness components and brightness mapping values are obtained in this case, and a method such as linear regression can then be used to calculate the gain value and the offset value.
- The specific calculation method for the gain value and the offset value can be selected by those skilled in the art according to actual needs and will not be repeated here. Such a calculation method can improve the accuracy of the obtained gain value.
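- A minimal sketch of this joint estimation is shown below: the gain G and offset b of Y' = G*Y + b are fitted in the least-squares sense over a pixel and its selected neighbors (an ordinary linear regression). The NumPy routine and the sample values for P1-P6 are illustrative assumptions; the application itself does not prescribe a particular solver.

```python
import numpy as np

def fit_gain_offset(y_values, y_mapped_values):
    """Fit Y' = G*Y + b over one pixel and its neighbors.
    Both arguments are 1-D arrays with at least two samples; returns (G, b)."""
    A = np.stack([y_values, np.ones_like(y_values)], axis=1)
    (gain, offset), *_ = np.linalg.lstsq(A, y_mapped_values, rcond=None)
    return gain, offset

# Hypothetical brightness components / mapping values for P1 and neighbors P2-P6
y_vals  = np.array([100.0, 98.0, 104.0, 101.0, 97.0, 103.0])
yp_vals = np.array([128.0, 126.0, 133.0, 129.0, 125.0, 132.0])
print(fit_gain_offset(y_vals, yp_vals))
```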
- The above-mentioned embodiment describes a specific method for obtaining the gain value of any pixel in an image. It can be understood that, for different pixels in the image, it is not necessary to acquire the adjacent one or more pixels using regions of the same size, nor does the number of acquired pixels have to be the same. For example, in a relatively uniform background area of the image, such as a solid-color background area, the number of acquired pixels can be appropriately reduced; in a foreground area, or an area the human eye is interested in, the number of acquired pixels can be appropriately increased, and so on. Those skilled in the art can make adaptive adjustments according to actual needs.
- Referring to FIG. 7, the embodiments of the present application also provide another method for obtaining the gain value; the tone mapping method 700 may include:
- Step S702: Obtain the brightness mapping value corresponding to the brightness component of each pixel in the image;
- Step S704: Set a sliding window, the sliding window being configured to cover at least two pixels, and cause the sliding window to slide in the image;
- Step S706: Before each sliding, calculate the gain value and offset value corresponding to the current position of the sliding window according to the respective brightness components and corresponding brightness mapping values of all pixels currently covered by the sliding window;
- Step S708: Acquire the gain value of each pixel in tone mapping;
- Step S710: Obtain the color mapping value corresponding to the color component of each pixel according to the gain value corresponding to each pixel, so as to complete the tone mapping of the image.
- In the tone mapping method 700, a sliding window is set; the sliding window is configured to cover at least two pixels and can slide in the image. Before each sliding of the sliding window, the gain value and offset value corresponding to the current position are calculated according to the respective brightness components and corresponding brightness mapping values of the pixels currently covered by the sliding window; for the method of calculating the gain value and offset value from multiple groups of brightness components and brightness mapping values, reference can be made to the relevant content above, which will not be repeated here.
- the purpose of setting the sliding window is also to obtain multiple sets of data for each pixel to meet the calculation requirements of the gain value and offset value.
- the sliding window is equivalent to limiting a range of sampling and calculation.
- The gain value of each pixel in the image can then be obtained from the multiple gain values obtained by the sliding window as it slides. For example, when a pixel is covered by the sliding window only once, the gain value corresponding to the sliding window at that position can be used as the gain value of the pixel; when a pixel is covered by the sliding window multiple times, the average of the gain values corresponding to the sliding window each time it covers the pixel can be used as the gain value of the pixel.
- The specific way of obtaining the gain value will be described in detail in the relevant part below and will not be repeated here.
- For any pixel covered by the sliding window, the other pixels covered by the sliding window at that moment are equivalent to one or more pixels adjacent to the pixel. In other words, such an implementation makes it easier to obtain the brightness components and brightness mapping values of a pixel and of the pixels around it and to perform the subsequent calculation; compared with obtaining the adjacent pixels separately for each pixel, it is simpler to implement, and the calculations can be performed in batches, which improves the efficiency of the calculation.
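- The sketch below illustrates this batch style of computation: a square window is slid over the luma plane, the gain and offset at each window position are fitted from all covered pixels (reusing fit_gain_offset from the earlier sketch), and every pixel finally receives the average of the gains of the windows that covered it. A plain raster scan with a fixed step is used purely for brevity; the diagonal traversal and the edge handling discussed below are omitted, so this is an illustrative simplification rather than the exact procedure of the embodiments.

```python
import numpy as np

def sliding_window_gains(y, y_mapped, win=3, step=1):
    """Per-pixel gain from overlapping win x win windows (see fit_gain_offset above)."""
    h, w = y.shape
    gain_sum = np.zeros((h, w), dtype=np.float64)
    cover_cnt = np.zeros((h, w), dtype=np.int64)
    for top in range(0, h - win + 1, step):
        for left in range(0, w - win + 1, step):
            ys = y[top:top + win, left:left + win].ravel()
            yps = y_mapped[top:top + win, left:left + win].ravel()
            g, _ = fit_gain_offset(ys, yps)            # offset is not reused here
            gain_sum[top:top + win, left:left + win] += g
            cover_cnt[top:top + win, left:left + win] += 1
    return gain_sum / np.maximum(cover_cnt, 1)          # average gain per pixel
```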
- multiple sliding windows can be set, and multiple sliding windows can be configured to slide in the image at the same time, so that the gain values of multiple pixels in the image can be obtained at the same time, increasing the efficiency of obtaining gain values .
- Those skilled in the art can also choose other suitable ways to design the sliding window according to the actual situation.
- In some embodiments, the sliding window is configured as a square, and the side length of the sliding window is configured as an integer multiple of the pixel side length; for example, the sliding window can be configured as a window of 2*2 or 3*3 pixels.
- This configuration enables the sliding window, with a suitable sliding starting point and sliding step, to completely cover all the pixels it can cover after each sliding. For example, a square sliding window with a size of 3*3 covers exactly 9 pixels, and no partially covered pixels appear in the covered area, so that it is easier to read the data and perform the calculation when computing the gain value and offset value of the sliding window before each sliding.
- the sliding window can also be configured in other shapes, such as a rectangle or even an irregular shape, and the side length of the sliding window is not necessarily an integer multiple of the side length of a pixel point.
- The pixels used when calculating the gain value and offset value corresponding to the sliding window at a certain position can be set to include only the pixels completely covered by the sliding window, or set to include all the pixels covered by the sliding window, even if only part of a pixel is within the coverage area of the sliding window.
- In some embodiments, the sliding in step S704 causes each pixel in the image to be covered by the sliding window at least once, i.e. the sliding window needs to complete a traversal of the image; the number of times each pixel of the image is covered by the sliding window is not particularly limited in such embodiments.
- the sliding in step S704 makes each pixel in the image covered by the sliding window the same number of times, for example, each pixel is covered by the sliding window 3 times.
- A fixed sliding step can be used, for example a step of one pixel each time, so that each pixel in the image is covered by the sliding window the same number of times, which saves computation.
- a fixed sliding direction can be used to slide the sliding window, so as to control the sliding of the sliding window more conveniently.
- The sliding direction of the sliding window may also not be fixed; for example, the window may slide to the right from the leftmost edge of the image, change direction after reaching the rightmost edge, slide down for a certain distance, then change direction again and slide left until the leftmost edge of the image is reached again, i.e. adopt a snake-like ("swinging tail") trajectory, so that the sliding window can cover every pixel in the image and, in some implementations, cover each pixel the same number of times.
- When a fixed sliding direction is used, the sliding direction may be configured along a diagonal direction of the pixels, for example along the direction from the upper-left corner to the lower-right corner of a pixel.
- The sliding step can be configured as M times the length of the diagonal of a pixel, where M is an integer greater than or equal to 1; with such a sliding step, when the sliding window slides along the diagonal direction of the pixels, each slide covers exactly an integer number of whole pixels.
- Referring to FIG. 8, the sliding window 81 is configured as a square window with a size of 3*3 pixels; as shown by the arrow 82 in FIG. 8, the sliding direction is configured along the diagonal direction from the upper-left corner to the lower-right corner of a pixel, and the sliding step is configured as the diagonal length of one pixel.
- Under such settings, the sliding window slides from the position shown at 81.1 to the position shown at 81.2. It is understandable that FIG. 8 only shows one sliding window sliding in a partial area of the image; for the entire image, multiple non-overlapping sliding windows of 3*3 size may be provided, their initial positions set at the upper-left corner of the image, and the sliding performed as shown in FIG. 8. With this sliding method, the sliding windows can complete the traversal of the image without changing direction midway.
- FIG. 9 shows a schematic diagram of the whole process of one pixel point being covered by the sliding window in such sliding.
- The sliding window 91 shown in FIG. 9 has a size of 3*3 pixels, the sliding direction is configured along the diagonal direction from the upper-left corner to the lower-right corner of a pixel, and the sliding step is set to the diagonal length of one pixel. Under such parameter settings, when the sliding window slides to the position shown at 91.1, the pixel P1 is covered for the first time; the sliding window then continues to slide to the positions shown at 91.2 and 91.3, and when it reaches the position shown at 91.3 the pixel P1 is covered by the sliding window for the last time.
- Multiple sliding windows close to each other but not overlapping can also be arranged side by side as described above; although the relative positions of the other pixels covered together with P1 may then differ slightly from what is shown in the figure, it is still ensured that each pixel in the image is covered by the sliding window 3 times.
- Those skilled in the art can also set multiple sliding windows in other ways to improve the efficiency of gain value acquisition, which will not be repeated here.
- FIG. 9 is only an example; those skilled in the art can change the size, sliding direction and sliding step of the sliding window according to actual needs, for example changing the size of the sliding window to 2*2, 4*4, 5*5, etc., changing the sliding direction to the direction along the other diagonal of the pixels, or changing the sliding step to the size of two pixels, etc. Such parameter changes will change the number of times each pixel is covered, but it is still ensured that each pixel in the image is covered the same number of times.
- Referring to FIG. 10, when the sliding window is at the positions shown at 101.1 and 101.2 in the figure, it is not completely within the image; in other words, its interior is not completely filled with pixels. For example, the sliding window covers only 3 pixels at the position shown at 101.1. Although the gain value and offset value corresponding to the sliding window at position 101.1 can still be calculated from the brightness components and brightness mapping values of these three pixels, for pixel P1 and other pixels located on the edge of the image the finally obtained gain value may not be accurate enough.
- a padding algorithm may be used to expand the edge of the image, so as to solve the above-mentioned problems existing in the calculation of the pixels on the edge of the image.
- The padding algorithm is an algorithm commonly used in the field to fill the edges of an image; those skilled in the art can choose according to actual needs, or choose other algorithms that can expand the edges of the image, which will not be repeated here.
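- As a small illustration of such edge expansion, the snippet below replicates the border pixels of a luma plane with NumPy before sliding, so that a 3*3 window centered on an edge pixel is still completely filled. The use of np.pad and the choice of mode='edge' are illustrative; other padding modes (e.g. 'reflect') or other expansion algorithms can equally be used.

```python
import numpy as np

y_plane = np.arange(16, dtype=np.float32).reshape(4, 4)  # stand-in luma plane
# Replicate the border by 1 pixel on each side (enough for a 3x3 window).
y_padded = np.pad(y_plane, pad_width=1, mode='edge')
print(y_padded.shape)  # (6, 6)
```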
- The size of the sliding window can be determined according to the overall tone of the image. It can be understood that a larger sliding window covers more pixels, which is equivalent to referring to the brightness components and brightness mapping values of more adjacent pixels when finally obtaining the gain value of a certain pixel; this makes the pixel keep better spatial continuity with the surrounding pixels and also improves the efficiency of gain value acquisition. At the same time, however, because the reference range for gain value acquisition is relatively large, spatial variability may be reduced, so that the final tone-mapped image differs little from the original image and the effect is weak.
- Conversely, when spatial continuity in the image is reduced, a blocking effect may occur. A blocking effect means that a certain area in the image is quite different from its adjacent areas, so that when observing, the human eye may find that a certain area of the image looks slightly abrupt compared with the surrounding areas; in more extreme cases there may be a sense of fragmentation.
- Therefore, the size of the sliding window can be determined according to the overall tone of the image before tone mapping, so as to make a trade-off between spatial variability and spatial continuity, so that the tone-mapped image better meets the effect desired by the user.
- the size of the sliding window can also be adjusted during the sliding process.
- The size of the sliding window can be dynamically adjusted according to the tone of the image and the sliding path of the sliding window; for example, a larger sliding window is used when the sliding path of the sliding window is in a certain region determined by the image tone, and a smaller sliding window is used when the sliding path is in another region determined by the image tone. Such an approach can adjust the weight between spatial continuity and spatial variability more flexibly, thereby improving the effect of the tone mapping processing.
- When the size of the sliding window is adjusted, the sliding step can be changed accordingly, so as to ensure that each pixel in the image can still be covered by the sliding window the same number of times without being affected by the window resizing.
- In step S708, the gain value of each pixel in tone mapping is obtained according to the gain value and offset value corresponding to each position of the sliding window calculated in step S706.
- In some embodiments, obtaining the gain value of each pixel in tone mapping in step S708 may include, for any pixel:
- Step S2082: Obtain the gain value corresponding to the sliding window when the pixel is covered by the sliding window;
- Step S2084: Obtain the gain value of the pixel according to the gain value corresponding to the sliding window.
- When the pixel P1 is covered by the sliding window only once, the gain value corresponding to the sliding window at the position covering pixel P1 is directly used as the gain value of pixel P1.
- As described above, the sliding window is configured to cover at least two pixels, so when the sliding window covers P1 it must also cover at least one other pixel P2 at the same time; the finally obtained gain value is therefore still calculated jointly from at least two sets of data, in the same manner as described above.
- In step S2082, when the pixel is covered by the sliding window multiple times, multiple gain values are acquired, the multiple gain values including the gain value corresponding to the sliding window each time it covers the pixel;
- in step S2084, the gain value of the pixel is obtained according to the multiple gain values.
- the method of calculating the arithmetic mean of multiple gain values may be used to obtain the gain value of any pixel point according to multiple gain values.
- Alternatively, the gain value of the pixel may be obtained by calculating a weighted average of the multiple gain values.
- For example, referring to FIG. 9, the pixel P1 may be covered by the sliding window three times, at the positions shown at 91.1, 91.2 and 91.3, so that three corresponding gain values are acquired; the gain value of pixel P1 is then obtained by calculating the arithmetic average or weighted average of these three gain values.
- Each pixel in the image is covered three times in this way, but for each pixel the three gain values used in such arithmetic or weighted averaging differ from those of any other pixel; for the pixel P1, the gain values corresponding to the three positions 91.1, 91.2 and 91.3 are used together.
- Such an embodiment further increases the accuracy of gain value acquisition and further reduces the possibility of block effects.
- The above method may further include step S2083: acquiring a plurality of difference values, the plurality of difference values including, for each time the sliding window covers the pixel, the average difference between the brightness mapping value of the pixel and the brightness mapping values of the other pixels covered by the sliding window. Further, in step S2084, the weight of each of the multiple gain values in the weighted calculation may be determined according to the multiple average difference values.
- For example, each time the sliding window covers pixel P1, the differences between the brightness component of P1 and the brightness components of the other pixels currently covered by the sliding window can be calculated and averaged to obtain the average difference value.
- It can be understood that when the average difference value is small, the brightness component of pixel P1 is relatively close to the brightness components of the other pixels in the sliding window, so that the gain value corresponding to the sliding window at this position is relatively close to the real gain value of pixel P1; therefore, when the average difference value is small, the gain value corresponding to the sliding window at this position can be given a correspondingly higher weight, so that the finally obtained gain value of the pixel is closer to the real gain value.
- In some embodiments, the weights of the multiple gain values can also be determined according to the position of the pixel when it is covered by the sliding window. Still referring to FIG. 9, when the sliding window is at the position shown at 91.2, the pixel P1 is covered by the sliding window and is located exactly at the center of the sliding window; in this case, a relatively high weight can be given to the gain value corresponding to the sliding window at the position shown at 91.2.
- Those skilled in the art may also select other appropriate implementations according to actual conditions to determine the respective weights of multiple gain values when performing weighted calculations, so that the finally obtained pixel gain values can have higher accuracy.
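- A minimal sketch of such a weighted combination is given below. It weights each covering window's gain by the inverse of the corresponding average difference value, so that positions where the pixel's brightness is closer to its neighbours count more, as the text suggests; the inverse-difference formula itself is only one possible choice and is not prescribed by this application.

```python
import numpy as np

def weighted_pixel_gain(window_gains, avg_diffs, eps=1e-6):
    """window_gains: gain of the sliding window at each position covering the pixel.
    avg_diffs: average brightness difference between the pixel and the other pixels
    covered at that position. Returns the weighted-average gain of the pixel."""
    gains = np.asarray(window_gains, dtype=np.float64)
    weights = 1.0 / (np.asarray(avg_diffs, dtype=np.float64) + eps)
    weights /= weights.sum()
    return float(np.dot(weights, gains))

# Pixel P1 covered three times (e.g. positions 91.1, 91.2 and 91.3 in FIG. 9)
print(weighted_pixel_gain([1.10, 1.25, 1.18], [6.0, 1.5, 3.0]))
```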
- The above step of calculating the arithmetic average or weighted average of multiple gain values can be performed in parallel with the sliding of the sliding window; for example, the gain value of pixel P1 can be calculated once the sliding window has completed the slides at 91.1, 91.2 and 91.3, without waiting for the sliding window to complete all of its sliding.
- those skilled in the art can also select an appropriate processing method to further improve the efficiency of the operation, which will not be repeated here.
- Referring to FIG. 11, an image processing device 1100 is also provided, including one or more processors 1110 configured to: acquire the brightness mapping value corresponding to the brightness component of each pixel in the image; acquire, according to the brightness mapping value, the gain value of each pixel in tone mapping, where the gain value is obtained by calculating the offset value and the gain value simultaneously; and acquire, according to the gain value corresponding to each pixel, the color mapping value corresponding to the color component of each pixel, so as to complete the tone mapping of the image.
- the one or more processors 1110 are further configured to: perform tone mapping preprocessing on the image; obtain a brightness mapping value corresponding to a brightness component of each pixel according to a result of the tone mapping preprocessing.
- tone mapping preprocessing uses one of the following methods: histogram equalization, gradient-based compression, gamma compression, retinal cortical model-based tone mapping, and learning model-based tone mapping.
- In some embodiments, the one or more processors 1110 are further configured to: acquire one or more pixels adjacent to any pixel; and obtain the gain value of the pixel according to the respective brightness components and corresponding brightness mapping values of the pixel and the one or more adjacent pixels.
- In some embodiments, the one or more processors 1110 are further configured to: set a sliding window, the sliding window being configured to cover at least two pixels; make the sliding window slide in the image; and, before each sliding, calculate the gain value and offset value corresponding to the current position of the sliding window according to the respective brightness components and corresponding brightness mapping values of all pixels currently covered by the sliding window.
- the sliding window is configured as a square, and the side length of the sliding window is configured as an integer multiple of the pixel side length.
- sliding causes each pixel in the image to be covered by the sliding window at least once.
- sliding causes each pixel in the image to be covered by the sliding window for the same number of times.
- the sliding uses a fixed sliding step size.
- the sliding uses a fixed sliding direction.
- the sliding direction includes a direction along a diagonal of the pixel points.
- the sliding step size of the sliding is configured to be M times the length of the diagonal line of the pixel, and M is an integer greater than or equal to 1.
- the sliding window is configured with a fixed size, and the size of the sliding window is determined by the hue of the image.
- the one or more processors 1110 are further configured to: during sliding, adjust the size of the sliding window according to the hue of the image and the sliding path of the sliding window.
- In some embodiments, the one or more processors 1110 are further configured to: obtain the gain value corresponding to the sliding window when any pixel is covered by the sliding window; and obtain the gain value of the pixel according to the gain value corresponding to the sliding window.
- In some embodiments, the one or more processors 1110 are further configured to: acquire multiple gain values when any pixel is covered by the sliding window multiple times, the multiple gain values including the gain value corresponding to the sliding window each time it covers the pixel; and obtain the gain value of the pixel according to the multiple gain values.
- the one or more processors 1110 are further configured to: calculate the arithmetic mean value of multiple gain values to obtain the gain value of any pixel.
- the one or more processors 1110 are further configured to: calculate a weighted average of multiple gain values to obtain the gain value of any pixel.
- In some embodiments, the one or more processors 1110 are further configured to: obtain a plurality of difference values, the plurality of difference values including, for each time the sliding window covers the pixel, the average difference between the brightness mapping value of the pixel and the brightness mapping values of the other pixels covered by the sliding window; and determine, according to the multiple average difference values, the weight of each of the multiple gain values in the weighted calculation.
- Referring to FIG. 12, an imaging device 1200 is also provided, including: a sensor 1210 for outputting an image; and one or more processors 1220 configured to: obtain the brightness mapping value corresponding to the brightness component of each pixel in the image output by the sensor 1210; obtain, according to the brightness mapping value, the gain value of each pixel in tone mapping, where the gain value is obtained by calculating the offset value and the gain value simultaneously; and obtain, according to the gain value corresponding to each pixel, the color mapping value corresponding to the color component of each pixel, so as to complete the tone mapping of the image.
- the one or more processors 1220 are further configured to: perform tone mapping preprocessing on the image; obtain a brightness mapping value corresponding to a brightness component of each pixel according to a result of the tone mapping preprocessing.
- tone mapping preprocessing uses one of the following methods: histogram equalization, gradient-based compression, gamma compression, retinal cortical model-based tone mapping, and learning model-based tone mapping.
- In some embodiments, the one or more processors 1220 are further configured to: acquire one or more pixels adjacent to any pixel; and obtain the gain value of the pixel according to the respective brightness components and corresponding brightness mapping values of the pixel and the one or more adjacent pixels.
- In some embodiments, the one or more processors 1220 are further configured to: set a sliding window, the sliding window being configured to cover at least two pixels; make the sliding window slide in the image; and, before each sliding, calculate the gain value and offset value corresponding to the current position of the sliding window according to the respective brightness components and corresponding brightness mapping values of all pixels currently covered by the sliding window.
- the sliding window is configured as a square, and the side length of the sliding window is configured as an integer multiple of the pixel side length.
- sliding causes each pixel in the image to be covered by the sliding window at least once.
- sliding causes each pixel in the image to be covered by the sliding window for the same number of times.
- the sliding uses a fixed sliding step size.
- the sliding uses a fixed sliding direction.
- the sliding direction includes a direction along a diagonal of the pixel points.
- the sliding step size of the sliding is configured to be M times the length of the diagonal line of the pixel, and M is an integer greater than or equal to 1.
- the sliding window is configured with a fixed size, and the size of the sliding window is determined by the hue of the image.
- the one or more processors 1220 are further configured to: during sliding, adjust the size of the sliding window according to the hue of the image and the sliding path of the sliding window.
- In some embodiments, the one or more processors 1220 are further configured to: obtain the gain value corresponding to the sliding window when any pixel is covered by the sliding window; and obtain the gain value of the pixel according to the gain value corresponding to the sliding window.
- In some embodiments, the one or more processors 1220 are further configured to: acquire multiple gain values when any pixel is covered by the sliding window multiple times, the multiple gain values including the gain value corresponding to the sliding window each time it covers the pixel; and obtain the gain value of the pixel according to the multiple gain values.
- the one or more processors 1220 are further configured to: calculate the arithmetic mean value of multiple gain values to obtain the gain value of any pixel.
- one or more processors 1220 are further configured to: calculate a weighted average of multiple gain values to obtain the gain value of any pixel.
- In some embodiments, the one or more processors 1220 are further configured to: obtain a plurality of difference values, the plurality of difference values including, for each time the sliding window covers the pixel, the average difference between the brightness mapping value of the pixel and the brightness mapping values of the other pixels covered by the sliding window; and determine, according to the multiple average difference values, the weight of each of the multiple gain values in the weighted calculation.
- a computer-readable storage medium 1300 is also provided.
- The computer-readable storage medium 1300 stores computer instructions 1310; when the computer instructions 1310 are executed, any of the tone mapping methods described above is implemented.
- Computer readable storage media may include volatile or nonvolatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer readable storage media or computer readable storage devices.
- a computer readable storage medium may be a storage unit or a memory module having computer instructions stored thereon, as disclosed.
- a computer-readable storage medium may be a disk or flash drive having computer instructions stored thereon.
- Modules/units may be implemented by one or more processors, such that the one or more processors become one or more special-purpose processors executing software instructions stored in a computer-readable storage medium to perform the module/unit-specific functions.
- Each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code that includes one or more executable instructions.
- The functions noted in the blocks may occur out of the order noted in the figures. For example, two consecutive blocks may, in fact, be executed substantially concurrently, or sometimes in the reverse order, depending upon the functionality involved.
- Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the corresponding function or operation, or by a combination of dedicated hardware and computer instructions.
- embodiments of the present application may be embodied as a method, system, or computer program product. Accordingly, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware to allow dedicated components to perform the functions described above. Furthermore, embodiments of the present application may take the form of a computer program product embodied in one or more tangible and/or non-transitory computer-readable storage media embodying computer-readable program code.
- Non-transitory computer-readable media include, for example, floppy disks, flexible disks, hard disks, solid state drives, magnetic tape or any other magnetic data storage medium, CD-ROM or any other optical data storage medium, any physical medium with patterns of holes, RAM, PROM and EPROM, FLASH-EPROM or any other flash memory, NVRAM, cache, registers, any other memory chip or cartridge, and networked versions of the same.
- Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatuses and computer program products according to the embodiments of the application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to a processor of a computer, an embedded processor or other programmable data processing means to produce a special-purpose machine, such that the instructions executed by the processor of the computer or other programmable data processing means create means for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
- These computer program instructions may also be stored in a computer-readable memory that directs a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
- a computer device includes one or more central processing units (CPUs), input/output interfaces, network interfaces and memory.
- Memory may include forms of volatile memory, random access memory (RAM) and/or nonvolatile memory, such as read only memory (ROM) or flash RAM in a computer readable storage medium.
- a computer-readable storage medium refers to any type of physical memory that can store information or data readable by a processor. Accordingly, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing a processor to perform steps or stages consistent with embodiments described herein.
- Computer-readable media includes nonvolatile and volatile media, and removable and non-removable media, where storage of information may be implemented in any method or technology. The information may be modules of computer readable instructions, data structures and programs, or other data.
- non-transitory computer readable media include, but are not limited to, phase change random access memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disk read-only memory (CD-ROM), digital versatile disk (DVD) or other optical storage , cassette, tape or disk storage or other magnetic storage device, cache, register or any other non-transmission medium usable for storing information that can be accessed by a computer device.
- Computer-readable storage media are non-transitory and do not include transitory media such as modulated data signals and carrier waves.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
A tone mapping method, a computer-readable storage medium, an image processing device and an imaging device. The tone mapping method includes: obtaining a brightness mapping value corresponding to the brightness component of each pixel in an image (S302); obtaining, according to the brightness mapping value, the gain value of each pixel in tone mapping (S304), where the gain value is obtained by calculating an offset value and the gain value simultaneously; and obtaining, according to the gain value corresponding to each pixel, the color mapping value corresponding to the color component of each pixel (S306), so as to complete the tone mapping of the image. The tone mapping method, computer-readable storage medium, image processing device and imaging device can reduce the color distortion appearing in the tone-mapped image.
Description
The present application relates to the field of image processing, and in particular to a tone mapping method, an image processing device and an imaging device.
Tone mapping can adjust the dynamic range of an image, enhance picture details and adjust contrast. In existing tone mapping methods, the changes in brightness and color are not consistent enough, which leads to problems such as color distortion in the tone-mapped image.
Summary of the Invention
Embodiments of the present application provide a tone mapping method, a computer-readable storage medium, an image processing device and an imaging device.
In a first aspect, embodiments of the present application provide a tone mapping method, including: obtaining a brightness mapping value corresponding to the brightness component of each pixel in an image; obtaining, according to the brightness mapping value, the gain value of each pixel in tone mapping, where the gain value is obtained by calculating an offset value and the gain value simultaneously; and obtaining, according to the gain value corresponding to each pixel, the color mapping value corresponding to the color component of each pixel, so as to complete the tone mapping of the image.
In a second aspect, embodiments of the present application provide a computer-readable storage medium on which computer instructions are stored; when the computer instructions are executed, the tone mapping method of the first aspect is implemented.
In a third aspect, embodiments of the present application provide an image processing device for performing tone mapping on an image. The image processing device includes one or more processors configured to: obtain a brightness mapping value corresponding to the brightness component of each pixel in the image; obtain, according to the brightness mapping value, the gain value of each pixel in tone mapping, where the gain value is obtained by calculating an offset value and the gain value simultaneously; and obtain, according to the gain value corresponding to each pixel, the color mapping value corresponding to the color component of each pixel, so as to complete the tone mapping of the image.
In a fourth aspect, embodiments of the present application provide an imaging device, including: a sensor for outputting an image; and one or more processors configured to: obtain a brightness mapping value corresponding to the brightness component of each pixel in the image output by the sensor; obtain, according to the brightness mapping value, the gain value of each pixel in tone mapping, where the gain value is obtained by calculating an offset value and the gain value simultaneously; and obtain, according to the gain value corresponding to each pixel, the color mapping value corresponding to the color component of each pixel, so as to complete the tone mapping of the image.
The tone mapping method, computer-readable storage medium, image processing device and imaging device according to the embodiments of the present application can reduce the color distortion appearing in the tone-mapped image.
FIG. 1 shows a schematic diagram of a tone mapping curve according to an embodiment of the present application;
FIG. 2 shows a schematic diagram of a tangent to a tone mapping curve according to an embodiment of the present application;
FIG. 3 shows a flowchart of a tone mapping method according to an embodiment of the present application;
FIG. 4 shows a schematic diagram of gain values calculated in a tone mapping method according to an embodiment of the present application;
FIG. 5 shows a flowchart of obtaining a gain value according to an embodiment of the present application;
FIG. 6 shows a schematic diagram of obtaining one or more pixels adjacent to any pixel according to an embodiment of the present application;
FIG. 7 shows a flowchart of a tone mapping method according to another embodiment of the present application;
FIG. 8 shows a schematic diagram of the sliding of a sliding window according to an embodiment of the present application;
FIG. 9 shows a schematic diagram of a pixel covered multiple times by a sliding window according to an embodiment of the present application;
FIG. 10 shows a schematic diagram of pixels on the edge of an image covered multiple times by a sliding window according to an embodiment of the present application;
FIG. 11 shows a schematic diagram of an image processing device according to an embodiment of the present application;
FIG. 12 shows a schematic diagram of an imaging device according to an embodiment of the present application;
FIG. 13 shows a schematic diagram of a computer-readable storage medium according to an embodiment of the present application.
为使本申请的目的、技术方案和优点更加清楚,下面将结合本申请实施例的附图,对本申请的技术方案进行清楚、完整地描述。显然,所描述的实施例是本申请的一个实施例,而不是全部的实施例。基于所描述的本申请的实施例,本领域普通技术人员在无需创造性劳动的前提下所获得的所有其他实施例,都属于本申请保护的范围。
需要说明的是,除非另外定义,本申请使用的技术术语或者科学术语应当为本申请所属领域内具有一般技能的人士所理解的通常意义。若全文中涉及“第一”、“第二”等描述,则该“第一”、“第二”等描述仅用于区别类似的对象,而不能理解为指示或暗示其相对重要性、先后次序或者隐含指明所指示的技术特征的数量,应该理解为“第一”、“第二”等描述的数据在适当情况下可以互换。若全文中出现“和/或”,其含义为包括三个并列方案,以“A和/或B”为例,包括A方案,或B方案,或A和B同时满足的方案。
色调映射(Tone Mapping)可以用于改变图像的动态范围。图像的动态范围是指图像的最高亮度值与最低亮度值之间的比值，高动态范围的图像可以展现更多的画面细节，并可以使得画面拥有更为合适的对比度。
色调映射可以推算出当前图像的平均亮度,根据该平均亮度选择合适的亮度域,而后将图像映射到该亮度域。色调映射又可以分为全局色调映射(Global Tone Mapping)和局部色调映射(Local Tone Mapping)。
在全局色调映射中,不考虑空间的像素位置,对所有像素采取统一的操作,计算较为简单并且易于实现,并且能够保持图像整体的明暗效果,避免光晕和色调逆转的现象产生,但是其细节信息丢失相对较多,生成的图像较为模糊。
在局部色调映射中,映射的效果与亮度的分布相关,其基本思想是根据亮度值分布的不同将图像分为不同的区域,对于不同的区域,会使用不同的色调映射曲线(Tone曲线)对亮度进行映射,这样的方法能够使得图像拥有更多的细节。
图1中示出了根据本申请实施例的一种色调映射曲线的示例,色调映射曲线的获取方法可以使用例如直方图均衡法、伽马压缩法、基于梯度的压缩法、基于视网膜皮层模型(Retinex)的色调映射法、基于学习模型的色调映射法等方法获得,这些常用的色调映射方法中所计算出的色调映射曲线略有不同,但其基本形态符合图1中所示出的S形。图1中的Y为图像的亮度分量,即原始YUV图像中的Y值,Y’为经过色调映射后的亮度映射值,即色调映射后的YUV图像中的Y值,可以看出,在亮度分量较低和较高的部分曲线较为平缓,而在中间范围处曲线较为陡峭,从而能够在保持原有的亮区和暗区的情况下增加图片的细节和对比度。
需要注意的是，图1中示出的色调映射曲线仅作为示例来直观地展现色调映射中亮度分量所发生的变化，在实际的色调映射过程中，通常我们仅能够获得图像中像素点的亮度分量以及对应的亮度映射值，即，通常仅能够获得离散的点，使用平滑的曲线将这些点连接起来，其将会呈现出如图1中的色调映射曲线所示出的形态，但是我们难以直接得到这样一条连续的曲线，更难以用一个具体的表达式来表达亮度分量和亮度映射值之间的对应关系。
YUV图像中还包括色彩分量,即UV值,亮度分量对应的亮度映射值能够通过直接查找色调映射曲线来获得,结合前述论述,色调映射曲线是表征亮度分量与亮度映射值之间的映射关系的曲线,因此色彩分量对应的色彩映射值无法使用直接查找色调映射曲线这样的方式来获得,而只能通过亮度分量和亮度映射值来间接获得。
在对色彩分量进行映射时,理想状态是使得色彩分量能够与亮度分量进行等比例的变换,从而色调映射后的图像能够保留原始画面的颜色。亮度分量与亮度映射值之间的关系可以写为Y’=G*Y,其中的G即为亮度分量在色调映射中的增益值(gain值),为了使色彩分量进行等比例的变换,需要使用与亮度分量相同的增益值对色彩分量进行色调映射获得色彩映射值。
请参照图1和图2，对图1中的Y进行色调映射时所使用的增益值实际上相当于色调映射曲线在该点附近的切线斜率，即图2中的直线L2的斜率，正如前述内容记载的，实际的色调映射过程中通常无法获得这样一条连续且完整的曲线，也无法获得具体的表达式，因此也就无法通过直接计算的方式获得色调映射曲线在该点处的斜率，而只能够通过估算获得。一种简单的估算方法是直接使用亮度映射值与亮度分量之间的比值作为增益值，但是这样的估算方法实际上得到的是图2中示出的直线L1的斜率，其可能与直线L2的斜率差异较大，使用这样的方法估算的增益值可能会使得色调映射后的图像与原图像相比出现较为严重的色彩失真。
为了减轻色调映射后的图像出现的色彩失真,根据本申请的实施例提供了一种色调映射方法300,参照图3,包括:
步骤S302:获取图像中每个像素点的亮度分量对应的亮度映射值;
步骤S304:根据所述亮度映射值,分别获取每个像素点在色调映射中的增益值,其中,通过同时计算偏移值和增益值的方式获取所述增益值;
步骤S306:分别根据每个像素点对应的增益值,获取每个像素点的色彩分量对应的色彩映射值,以完成所述图像的色调映射。
在步骤S302中，获取图像中每个像素点的亮度分量对应的亮度映射值，可以通过分别在色调映射曲线上查找每个亮度分量对应的亮度映射值来实现，色调映射曲线的获取可以使用前述内容记载的直方图均衡法等方法，本申请的实施例中并不对色调映射曲线的具体获取方法进行限定。
在步骤S304中,根据步骤S302中获取的亮度映射值来分别获取每个像素点对应的增益值。请再次参照图2,不同于直线L1,色调映射曲线在Y处的切线L2在大多数情况下是不经过原点的直线。
因此,在本申请的实施例中,提出了通过同时计算增益值和偏移值的方式来获取增益值。在这样的计算方式中,认为亮度分量和亮度映射值之间的关系为Y’=G*Y+b,其中G为增益值,b为偏移值,在对增益值进行估算时考虑偏移值所产生的影响,同时对增益值和偏移值进行估算,以此来拟合色调映射曲线在Y附近的斜率。
参照图4,根据本申请实施例的计算方式获得的增益值和偏移值所代表的直线可能位于图4中的直线L3所在的位置,可以直观地看出,相较于直接通过计算亮度映射值和亮度分量之间的比值来估算增益值的计算方法所获得的直线L1,根据本申请实施例的计算方式所获得的直线L3的斜率更加接近色调映射曲线中Y点处的切线L2的斜率,即,获取到的增益值更加接近真实的增益值。
根据本申请实施例的计算方式中,存在增益值和偏移值两个未知数,因此在计算时至少需要取得两组数据才能够进行求解,也就是说,除了该像素点的亮度分量和亮度映射值外,还需要额外获得一组或多组数据,该数据的具体获取方法将在下文相关部分处进行详细地描述。可以理解地,本领域技术人员还可以选择任何合适的方法来获得一组额外地数据以完成上述计算,例如可以在该像素点附近的区域进行采样,使用该像素点附近的其他像素点的亮度分量和亮度映射值进行计算,又例如可以对原图像中的所有像素点的亮度分量进行分析,选择亮度分量与该像素点最接近的一个或几个像素点,而后使用这些像素点的亮度分量和亮度映射值进行计算等等。
在步骤S306中,分别根据每个像素点对应的增益值获取色彩分量对应的色彩映射值,具体而言,针对图像中的一个像素点而言,假定该像素点的亮度分量为Y,色彩分量为U和V,在步骤S304中对色调映射曲线在Y处的斜率进行估算,获取到该像素点在色调映射中所使用的增益值G,则在对色彩分量进行色调映射时,直接使用U’=G*U,V’=G*V这样的计算公式来进行计算,获取到的U’和V’即为该像素点的色彩分量对应的色彩映射值。需要注意的是,在计算色彩分量对应的色彩映射值时,仅使用上述计算过程中获得的该像素点对应的增益值,而不加入偏移值,这是因为YUV图像的色相还与U和V之间的比例相关,在计算色彩映射值时加入偏移值将会改变原本的U和V之间的比例,这并不是我们所期望的。
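作为示意，下面给出一段Python代码草图，演示在已知某像素点的增益值G的情况下，仅用增益值（不加偏移值）对色彩分量U、V进行映射的计算方式；其中假定U、V为以0为中心的有符号取值，函数名与示例数值均为说明性假设，并非对本申请实现方式的限定：

```python
def map_chroma(u, v, gain):
    """色彩分量仅乘以增益值、不加偏移值，从而保持U与V之间的原有比例。
    这里假定U、V是以0为中心的有符号分量；若为以128为中心的8位存储，
    则应先减去128再做乘法（示意性处理方式）。"""
    return gain * u, gain * v

# 示例：增益值G=1.25时，U'=G*U，V'=G*V
u_mapped, v_mapped = map_chroma(-12.0, 20.0, 1.25)   # 结果为(-15.0, 25.0)
```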
根据本申请实施例的色调映射方法300中，针对每一个像素点而言，使用了同时计算增益值和偏移值的方式来获得增益值，该计算方法所获得的增益值更加接近该像素点在进行色调映射时所使用的真实增益值，使用该增益值来获得色彩分量对应的色彩映射值能够尽可能地保证色彩分量与亮度分量在色调映射中进行了等比例的变化，以减轻色调映射所造成的色彩失真。进一步地，在获得色彩映射值时仅使用增益值而不使用偏移值，能够保证色彩分量的U值和V值之间保持原有的比例，进一步地避免了色彩失真的出现。
在一些实施方式中,步骤S302中获取图像中每个像素点的亮度分量对应的亮度映射值可以进一步包括:
步骤S3021:对图像进行色调映射预处理;
步骤S3022:根据色调映射预处理的结果,获取每个像素点的亮度分量对应的亮度映射值。
在一些实施方式中,步骤S3021进行的色调映射预处理可以是对原始图像进行一次完整的色调映射处理,即色调映射预处理可以输出一张处理图像,可以理解地,该处理图像中每个像素点的亮度分量即为原始图像中该像素点的亮度分量所对应的亮度映射值,而处理图像可以不向用户进行呈现。
在一些实施方式中,色调映射预处理可以仅仅获得色调映射曲线而并不执行完整的色调映射过程,则在步骤S3022中根据色调映射预处理的结果,也就是该色调映射曲线,可以查找每个亮度分量所对应的亮度映射值。
可以理解地，上述色调映射预处理中可以使用本领域中任何合适的色调映射曲线生成方法来进行，例如使用前述内容中提及的直方图均衡法、基于梯度的压缩法、伽马压缩法、基于视网膜皮层模型的色调映射法和基于学习模型的色调映射法中的一个，但并不限于这些方法，本领域技术人员也可以使用其他合适的方法来进行，对此不做具体的限定。
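作为示意，下面的Python代码以伽马压缩为例生成一条色调映射曲线的查找表（LUT），并据此查找亮度分量对应的亮度映射值；其中伽马取值0.6等参数仅为说明性假设，并不代表对色调映射预处理方法的限定：

```python
import numpy as np

def gamma_tone_curve_lut(gamma=0.6, max_val=255):
    """以伽马压缩为例生成色调映射曲线的查找表(LUT)，参数仅作示例。"""
    y = np.arange(max_val + 1, dtype=np.float64)
    return (y / max_val) ** gamma * max_val

def lookup_luma_mapping(luma, lut):
    """根据色调映射预处理得到的曲线(LUT)，查找每个像素点亮度分量对应的亮度映射值。"""
    idx = np.clip(np.round(luma).astype(np.int64), 0, len(lut) - 1)
    return lut[idx]

lut = gamma_tone_curve_lut()
luma = np.array([[10.0, 60.0], [128.0, 240.0]])   # 示例亮度分量
luma_mapped = lookup_luma_mapping(luma, lut)       # 对应的亮度映射值
```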
在一些实施方式中,参见图5,步骤S304中分别获取每个像素点在色调映射中的增益值时,针对该图像中的任一像素点,获取增益值可以包括:
步骤S502:获取与所述任一像素点相邻的一个或多个像素点;
步骤S504:根据所述任一像素点以及与所述任一像素点相邻的一个或多个像素点各自的亮度分量和对应的亮度映射值获取所述任一像素点的增益值。
结合前述内容，根据本申请实施例的色调映射方法在进行增益值和偏移值的计算时至少需要两组数据才能够进行，在本实施方式中，步骤S502中获取与任一像素点相邻的一个或多个像素点的目的就是为了获得多组亮度分量和亮度映射值的数据。
图6示出了一种获取与任一像素点相邻的一个或多个像素点的示意图,图6中的每一个小方块为像素点的示例,以像素点P1为例,针对像素点P1,获取与P1相邻的一个或多个像素点可以是在框61中选择一个或多个像素点,可以理解地,框61中的像素点是与像素点P1直接相邻的像素点。在一些实施方式中,获取与P1相邻的一个或多个像素点还可以是靠近P1但是未与P1直接相邻的一个或多个像素点,例如可以扩大范围,在框62示出的范围内进行选择,需要注意的是,不同于框61,框62实际上可以进一步地进行扩大,框62所表示的区域并不一定为正方形,还可以是矩形甚至不规则图形等等,也并不一定需要以像素点P1为中心。
选择与像素点P1相邻的一个或多个像素点的原因还在于，与像素点P1相邻的像素点的亮度分量与P1的亮度分量之间的差异很大概率上是相对较小的，即，其亮度分量在图1中示出的色调映射曲线中距离像素点P1的亮度分量较近，从而使得计算的增益值能够更加贴近真实的增益值。
在一些实施方式中，相邻的一个或多个像素点可以是在这样的区域中随机选择的。在一些实施方式中，相邻的一个或多个像素点可以是在这样的区域中所选择的与像素点P1的亮度分量之间的差分值最小的一个或多个像素点，从而能够使得计算的增益值进一步贴近真实的增益值，但是这也意味着更大的运算量。
在步骤S504中，根据像素点P1的亮度分量和亮度映射值，以及从上述区域中选择的一个或多个像素点各自的亮度分量和亮度映射值，来共同计算像素点P1对应的增益值和偏移值。
在一些实施方式中,可以仅选择与P1相邻的一个像素点,例如像素点P2,则可以通过求解二元一次方程组的形式来获得增益值和偏移值,这样的实施方式中运算得到了简化,但是也意味着所获得的增益值可能仍有较大偏差。
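以该实施方式为例，下面的Python代码草图演示了利用像素点P1与一个相邻像素点P2的两组（亮度分量，亮度映射值）数据求解二元一次方程组、得到增益值与偏移值的过程；其中的函数名、示例数值与两点亮度接近时的退化处理方式均为说明性假设：

```python
def gain_offset_from_two_points(y1, y1m, y2, y2m, eps=1e-6):
    """由两组(亮度分量Y, 亮度映射值Y')联立求解 Y' = G*Y + b 中的G与b；
    两点亮度过于接近时退化为 G = Y'/Y、b = 0 的简单估计（示意性处理）。"""
    if abs(y1 - y2) < eps:
        return y1m / max(abs(y1), eps), 0.0
    g = (y1m - y2m) / (y1 - y2)
    b = y1m - g * y1
    return g, b

# 示例：P1的(Y, Y')为(60, 90)，P2的(Y, Y')为(64, 95)
g, b = gain_offset_from_two_points(60.0, 90.0, 64.0, 95.0)   # G=1.25, b=15.0
```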
在一些实施方式中,可以选择与P1相邻的多个像素点,例如像素点P2-P6,此时将会获得六组对应的亮度分量和亮度映射值,则可以使用例如线性回归的方法来计算增益值和偏移值,具体的计算方法本领域技术人员可以根据实际需求进行选择,在此不再赘述。这样的计算方法能够提高所获得的增益值的准确性。
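对于选择多个相邻像素点的情形，可以按线性回归的思路做最小二乘拟合；下面的Python代码给出一种可能的写法（基于numpy，示例数据为假设值，仅作示意）：

```python
import numpy as np

def gain_offset_lstsq(lumas, lumas_mapped):
    """对多组(亮度分量, 亮度映射值)数据做最小二乘拟合 Y' ≈ G*Y + b，
    同时估计增益值G与偏移值b。"""
    y = np.asarray(lumas, dtype=np.float64)
    ym = np.asarray(lumas_mapped, dtype=np.float64)
    a = np.stack([y, np.ones_like(y)], axis=1)       # 设计矩阵[Y, 1]
    (g, b), *_ = np.linalg.lstsq(a, ym, rcond=None)
    return float(g), float(b)

# 示例：像素点P1及相邻像素点P2-P6共六组数据
g, b = gain_offset_lstsq([58, 60, 61, 63, 64, 66], [88, 90, 91, 94, 95, 98])
```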
上述实施方式描述了针对图像中的任一像素点获取增益值的具体方法,可以理解地,针对图像中的不同像素点而言,并不一定在每个像素点的增益值获取过程中,均使用相同大小的区域来获取相邻的一个或多个像素点,在获取一个或多个像素点时的个数也并不一定相同。例如,在图像中某一较为单一的背景区域中,如某一块纯色背景区域,可以适当的减小获取一个或多个像素点的个数,又例如在某一前景区域,或者人眼感兴趣的区域,可以适当的增加获取一个或多个像素点的个数,等等,本领域技术人员可以根据实际需求进行适应性的调整。
使用上述方法来获取增益值时,运算量可能较大,导致处理效率较低,因此在一些实施方式中,本申请的实施例还提供了另一种获取增益值的方法。
参照图7,在一些实施方式中,色调映射方法700可以包括:
步骤S702:获取图像中每个像素点的亮度分量对应的亮度映射值;
步骤S704:设定滑动窗口,所述滑动窗口被配置成至少能够覆盖两个像素点,使所述滑动窗口在所述图像中滑动;
步骤S706:在每次滑动前,根据所述滑动窗口当前覆盖的所有像素点各自的亮度分量和对应的亮度映射值,计算所述滑动窗口在当前位置对应的增益值和偏移值;
步骤S708:分别获取每个像素点在色调映射中的增益值;
步骤S710:分别根据每个像素点对应的增益值,获取每个像素点的色彩分量对应的色彩映射值,以完成所述图像的色调映射。
在步骤S704和步骤S706中,设定滑动窗口,滑动窗口被配置成至少能够覆盖两个像素点,滑动窗口能够在图像中进行滑动。在滑动窗口每次进行滑动之前,需要根据滑动窗口当前所覆盖的像素点各自的亮度分量和对应的亮度映射值,来计算增益值和偏移值,根据多组亮度分量和亮度映射值计算增益值和偏移值的方法可以参照前述相关内容,在此不再赘述。
可以理解地,设置滑动窗口的目的同样是为了针对每个像素点获取到多组数据以满足增益值和偏移值的计算需求,滑动窗口相当于限定了一个采样和计算的范围,在步骤S708中,可以根据滑动窗口在滑动时所获得的多个增益值来获取图像中每个像素点的增益值,例如当一个像素点仅被滑动窗口覆盖了一次时,可以以滑动窗口在覆盖该像素点时对应的增益值作为该像素点的增益值,当一个像素点被滑动窗口覆盖了多次时,可以以滑动窗口在每次覆盖该像素点时对应的增益值的均值作为该像素点的增益值,具体的获取方法将在下文相关部分进行详细地描述,在此不再赘述。
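作为示意，下面的Python代码给出滑动窗口方式的一种可能实现草图：窗口每到一个位置，就用窗口内所有像素点的（亮度分量，亮度映射值）拟合出该位置对应的增益值和偏移值，并将增益值累加到被覆盖的像素点上，最后取算术平均值作为每个像素点的增益值。这里采用固定步长的逐行滑动且未处理图像边缘，仅为一种说明性配置，实际实现可按下文所述的对角线滑动方向、填充方式等进行调整：

```python
import numpy as np

def sliding_window_gains(luma, luma_mapped, win=3, step=1):
    """滑动窗口估计每个像素点的增益值（示意实现）。
    未被窗口覆盖的边缘像素点在此草图中保持增益为1，可结合下文的填充方式处理。"""
    h, w = luma.shape
    gain_sum = np.zeros((h, w), dtype=np.float64)
    cover_cnt = np.zeros((h, w), dtype=np.float64)
    for top in range(0, h - win + 1, step):
        for left in range(0, w - win + 1, step):
            y = luma[top:top + win, left:left + win].ravel()
            ym = luma_mapped[top:top + win, left:left + win].ravel()
            a = np.stack([y, np.ones_like(y)], axis=1)
            (g, _b), *_ = np.linalg.lstsq(a, ym, rcond=None)   # 同时计算增益值与偏移值
            gain_sum[top:top + win, left:left + win] += g
            cover_cnt[top:top + win, left:left + win] += 1
    gains = np.ones((h, w), dtype=np.float64)
    covered = cover_cnt > 0
    gains[covered] = gain_sum[covered] / cover_cnt[covered]
    return gains
```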
在这样的实施方式中,当一个像素点被滑动窗口覆盖时,滑动窗口此时所覆盖的其他像素点相当于是与该像素点相邻的一个或多个像素点,也就是说,这样的实施方式能够更加方便地获取到某一像素点与该像素点周围的像素点各自的亮度分量和亮度映射值,并进行下一步的运算,相较于针对每个像素点单独获取相邻像素点的技术方案而言,在实施起来更加的简单,并且可以批量进行计算,提高了运算的效率。
在一些实施方式中,滑动窗口可以设置多个,多个滑动窗口可以被配置成同时在图像中进行滑动,从而可以同时获取图像中的多个像素点的增益值,增加了获取增益值的效率。本领域技术人员还可以根据实际情况选择其他合适的方式来进行滑动窗口的设计。
参照图8，在一些实施方式中，滑动窗口被配置成正方形，并且滑动窗口的边长被配置成像素点边长的整数倍，例如，可以将滑动窗口配置成2*2个像素点、3*3个像素点大小的窗口，这样的配置方式使得滑动窗口能够通过设置合适的滑动起点和滑动步长，在每次滑动结束后都能够恰好完整地覆盖住其能够覆盖的所有像素点，例如3*3大小的方形滑动窗口能够正好覆盖9个像素点，不会在其覆盖的区域中出现不完整的像素点，从而在每次滑动前计算滑动窗口当前的增益值和偏移值时，更加容易读取数据并进行运算。
可以理解的,滑动窗口也可以被配置成其他的形状,例如矩形甚至不规则的形状,滑动窗口的边长也并不一定是像素点边长的整数倍,在这样的实施方式中,在计算滑动窗口在某一位置对应的增益值和偏移值时,所使用的像素点可以设置成仅包括被滑动窗口完全覆盖的像素点,也可以设置成包括所有被滑动窗口覆盖的像素点,即使该像素点仅有部分在滑动窗口覆盖区域内。
在一些实施方式中,步骤S704中的滑动使得图像中的每个像素点均被滑动窗口覆盖至少一次,即,滑动窗口需要完成对图像的遍历,在这样的遍历过程中,可以不限定图像中的每个像素点被滑动窗口所覆盖的次数。
在一些实施方式中步骤S704中的滑动使得图像中的每个像素点均被滑动窗口覆盖相同的次数,例如,每个像素点均被滑动窗口覆盖3次。
在一些实施方式中，为了更加方便地控制滑动窗口进行滑动，可以使用固定的滑动步长，例如每次滑动一个像素点的长度，从而能够通过简单的参数设置使得图像中的每个像素点均被滑动窗口覆盖相同的次数，以节省运算。
在一些实施方式中，可以使用固定的滑动方向进行滑动窗口的滑动，从而更方便地控制滑动窗口的滑动。在一些实施方式中，滑动窗口的滑动方向也可以不是固定的，例如自图像的最左边缘开始向右滑动，滑动到最右边缘后改变方向，向下滑动一段距离后，再次改变方向，向左滑动直到再次到达图像的最左边缘，即，采用类似于“摆尾”的运行轨迹来使得滑动窗口能够覆盖图像中的每个像素点，并且在一些实施方式中使滑动窗口能够对图像中的每个像素点覆盖相同的次数。
在一些实施方式中,在使用固定的滑动方向进行滑动时,滑动方向可以配置成沿像素点的对角线方向,例如沿一个像素点的左上角到右下角的滑动方向。在这样的实施方式中,滑动步长可以被配置成像素点对角线长度的M倍,M为大于等于1的整数,这样的滑动步长使得滑动窗口在沿着像素点对角线方向滑动时,每次滑动都能够恰好完整地覆盖整数个像素点。
图8中示出了一种可能的滑动方式,滑动窗口81被配置成3*3个像素点大小的方形窗口,并且如图8中的箭头82所示,滑动方向被配置成沿像素点的左上角到右下角的对角线方向,滑动的步长被配置成一个像素点的对角线长度,在这样的配置下,经过一次滑动后,滑动窗口将会自图中81.1示出的位置滑动到81.2示出的位置。可以理解地,图8中仅示出了一个滑动窗口在图像的部分区域中进行滑动的示例,对于整个图像而言,滑动窗口可以有多个,例如可以并排设置数个彼此紧靠但并不重叠的3*3大小的滑动窗口,将这些滑动窗口的初始位置设置在图像的左上角,然后进行如图8中示出的滑动,这样的滑动方式中,滑动窗口可以无需中途改变方向即可完成对图像的遍历。
图9中示出了这样的滑动中一个像素点被滑动窗口所覆盖的全过程示意图。图9中示出的滑动窗口91具有3*3个像素点的大小,滑动方向配置成沿像素点的左上角到右下角的对角线方向,滑动步长设置成一个像素点的对角线长度,在这样的参数设置下,当滑动窗口滑动到图中91.1示出的位置时,像素点P1首次被覆盖,而后滑动窗口继续滑动到91.2和91.3示出的位置,当滑动窗口滑动到图中91.3示出的位置时,是该像素点P1最后一次被滑动窗口所覆盖。在这样的实施方式中,也可以如前述内容所述的并排设置多个彼此紧靠但并不重叠的滑动窗口,尽管其他的像素点在被滑动窗口覆盖时与滑动窗口之间的相对位置可能与图中示出的略有不同,但是仍然能够保证图像中的每个像素点都被滑动窗口覆盖3次。本领域技术人员还可以以 其他的方式来设置多个滑动窗口,以提高获取增益值的效率,在此不再赘述。
可以理解地，图9中仅为一种示例，本领域技术人员可以根据实际需求对滑动窗口的大小、滑动方向、滑动步长进行改变，例如将滑动窗口的大小改为2*2、4*4、5*5等等，或者将滑动方向改为沿像素点另一个对角线的方向，或者将滑动步长改为两个像素点的大小等等，这样的参数上的改变将会改变每个像素点被覆盖的次数，但是仍然能够保证图像中每个像素点均被覆盖相同的次数。
这样的实施方式中，尽管图像边缘的像素点也能够被覆盖相同的次数，但是在一些情况下，参见图10，当像素点P1位于图像的边缘位置时，滑动窗口在图中101.1和101.2示出的位置时并没有完全地位于图像内，换言之，其内部并没有完全被像素点所填充，例如滑动窗口在图中101.1示出的位置时仅覆盖了3个像素点，此时尽管也能够根据这3个像素点的亮度分量和亮度映射值计算滑动窗口在101.1位置处对应的增益值和偏移值，但是对于像素点P1以及其他位于图像边缘的像素点而言，其最终获取到的增益值可能并不够准确。此外，当像素点P1为图像四个角上的像素点时，滑动窗口覆盖P1时可能仅覆盖了P1一个像素点，会导致无法计算滑动窗口在该位置对应的增益值和偏移值，因此，在一些实施方式中，可以使用填充算法(Padding)来扩充图像的边缘，以解决上述图像边缘的像素点在计算时存在的问题。Padding算法是本领域常用的填充图像边缘的算法，本领域技术人员可以根据实际需求进行选择，或者选择其他能够扩充图像边缘的算法，在此不再赘述。
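作为示意，下面的Python代码给出一种常见的边缘填充做法（使用numpy的边缘复制填充）；填充宽度取窗口边长减1仅为说明性假设，实际可按所选的滑动窗口大小与滑动参数确定：

```python
import numpy as np

def pad_image_edges(img, win=3):
    """对图像边缘进行填充(Padding)，使位于边缘和四角的像素点
    也能够被完整的滑动窗口覆盖（示意实现，采用边缘复制方式）。"""
    r = win - 1
    return np.pad(img, ((r, r), (r, r)), mode="edge")
```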
在一些实施方式中，滑动窗口的大小可以根据图像的整体色调确定。可以理解地，当滑动窗口较大时，其能够覆盖更多的像素点，相当于在最终获取某一像素点的增益值时，参照了更多与该像素点相邻的像素点的亮度分量和亮度映射值，这使得该像素点与周围的像素点保持了较好的空间上的连续性，同时也能够提高获取增益值的效率，但是与此同时，由于获取增益值时参照的范围较大，可能会导致空间上的变化性降低，导致最终色调映射后的图像与原图像相比差异并不明显，效果较弱。
当滑动窗口较小时,空间上的变化性则会增加,从而经过色调映射后的图像亮区将会更亮,而暗区也会更暗,增加对比度的效果较为明显,但是也可能会由于空间上的连续性降低而出现块效应,块效应是指图像中的某一区域与其相邻的区域相比差异较大,使得人眼在观察时可能会发觉图像中的某一区域与周围区域相比略显突兀,更极端的情况下,可能会产生割裂感。
为此,可以在进行色调映射前根据图像的整体色调来确定滑动窗口的大小,以在空间的变化性以及空间的连续性之间进行权衡,使得色调映射后的图像能够更加符合用户期望的效果。
在一些实施方式中,滑动窗口的大小也可以在滑动过程中进行调整,具体而言,可以根据图像的色调以及滑动窗口的滑动路径来对滑动窗口的大小进行动态地调整,例如,在滑动窗口的滑动路径位于根据图像色调确定的某一特定区域时,使用较大的滑动窗口,而在滑动路径位于根据图像色调确定的另一区域中时,使用较小的滑动窗口。这样的做法能够更加灵活地调整空间连续性和空间变化性之间的权重,从而提高色调映射处理的效果。
在一些实施方式中,当滑动窗口的大小在滑动过程中发生了改变时,可以相应地改变滑动步长,从而保证图像中的每个像素点能够被该滑动窗口覆盖相同的次数,而不会受到窗口大小调节的影响。
请再次参阅图7,在使用上述任一实施方式中的滑动方法完成了步骤S704和步骤S706后,在步骤S708中,根据步骤S706所计算的滑动窗口在每个位置上对应的增益值和偏移值,来分别获取每个像素点在色调映射中的增益值。
在一些实施方式中,步骤S708中分别获取每个像素点在色调映射中的增益值时,针对任一像素点,获取增益值可以包括:
步骤S2082:获取所述任一像素点在被所述滑动窗口覆盖时所述滑动窗口对应的增益值;
步骤S2084:根据所述滑动窗口对应的增益值获取所述任一像素点的增益值。
仍然以像素点P1为例,在本实施方式中直接以滑动窗口在覆盖了像素点P1的位置上对应的增益值来作为像素点P1的增益值。可以理解地,滑动窗口被配置成至少能够覆盖两个像素点,当滑动窗口覆盖了P1时,其必然还同时覆盖了至少另一个像素点P2,在这样的实施方式中,相当于P1和P2最终获取到的增益值是相同的,结合前述内容,由于P1和P2是相邻的两个像素点,其亮度分量大概率是较为接近的,也就是说,真实的增益值是较为接近的,但是仍然可能出现P1和P2之间的亮度分量差异巨大的情况,此时使用这种实施方式所获得的增益值将会具有较大的误差。并且,即使能够保证滑动窗口内的所有像素点的亮度分量都较为接近,使该滑动窗口内的所有像素点使用相同的增益值也可能会导致最终获得的图像中出现较为严重的块效应。
这样的问题可以通过使每个像素点均被滑动窗口覆盖多次来解决。在一些实施方式中,步骤S2082中,当所述任一像素点被滑动窗口覆盖了多次时,获取多个增益值,多个增益值包括滑动窗口在每次覆盖所述任一像素点时对应的增益值,进一步在步骤S2084中,根据所述多个增益值来获取所述任一像素点的增益值。
在一些实施方式中,根据多个增益值来获取所述任一像素点的增益值可以使用计算多个增益值的算术平均值的方法,在一些实施方式中,根据多个增益值来获取所述任一像素点的增益值可以使用计算多个增益值的加权平均值的方法。
请再次参阅图9，在这样的实施方式中，像素点P1可能如图9示出的被滑动窗口覆盖了3次，此时，分别获取滑动窗口在91.1、91.2、91.3位置处对应的3个增益值，而后，通过计算这3个增益值的平均值或加权平均值来获取像素点P1的增益值。结合前述内容，在这样的实施方式中图像中的每个像素点均以这样的方式被覆盖了3次，因此针对每个像素点而言，其在进行这样的算术平均值或加权平均值计算时，所使用的3个增益值与任一个其他像素点相比都是不同的，例如，对于图9中示出的像素点P1而言，在整个图像中有且仅有像素点P1在获取增益值时同时使用了91.1、91.2、91.3这3个位置对应的增益值。这样的实施方式进一步增加了增益值获取时的准确性，也进一步地减少了块效应出现的可能性。
在一些实施方式中,上述方法还可以包括步骤S2083:获取多个差分值,多个差分值包括:所述滑动窗口在每次覆盖所述任一像素点时,所述任一像素点的亮度映射值与所述滑动窗口覆盖的其他像素点的亮度映射值之间的平均差分值。进一步,在步骤S2084中,可以根据多个平均差分值确定多个增益值中的每个在加权计算时的权重。
请再次参阅图9,在这样的实施方式中,可以分别计算滑动窗口在91.1、91.2、91.3三个位置时,像素点P1的亮度分量与滑动窗口当前覆盖的其他像素点的亮度分量之间的平均差分值,可以理解地,当这一平均差分值较小时,意味着像素点P1的亮度分量与滑动窗口内的其他像素点的亮度分量较为接近,从而滑动窗口在该位置对应的增益值和像素点P1的真实增益值也就较为接近,因此当平均差分值较小的时候,可以相应地为滑动窗口在该位置对应的增益值赋予较高的权重,使得最终获取的该像素点的增益值更加贴近真实的增益值。
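作为示意，下面的Python代码演示按平均差分值确定权重的一种可能方式：差分值越小权重越大，这里用差分值的倒数做归一化处理，函数名与示例数据均为说明性假设，并非唯一做法：

```python
import numpy as np

def weighted_gain(gains, mean_abs_diffs, eps=1e-6):
    """以平均差分值的倒数作为权重，对多个增益值做加权平均（示意）。"""
    g = np.asarray(gains, dtype=np.float64)
    d = np.asarray(mean_abs_diffs, dtype=np.float64)
    w = 1.0 / (d + eps)
    w /= w.sum()
    return float(np.dot(w, g))

# 示例：像素点P1在91.1、91.2、91.3三个位置对应的增益值与平均差分值（假设数据）
gain_p1 = weighted_gain([1.18, 1.25, 1.31], [6.0, 2.0, 9.0])
```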
在一些实施方式中,多个增益值的权重还可以根据像素点在被滑动窗口覆盖时所在的位置来确定,仍然参照图9,当滑动窗口位于91.2示出的位置时,像素点P1被滑动窗口覆盖,并且恰好位于滑动窗口的中央位置,此时可以为滑动窗口在91.2示出的位置对应的增益值赋予相对较高的权重。本领域技术人员还可以根据实际情况选择其他合适的实施方式来确定多个增益值在进行加权计算时各自的权重,以使得最终获取的像素点的增益值能够拥有更高的准确性。
可以理解地，上述对多个增益值进行算术平均值或加权平均值计算的步骤可以与滑动窗口的滑动步骤并行执行，例如滑动窗口在完成了91.1、91.2、91.3的滑动后即可以进行计算获得像素点P1的增益值，而无需等到滑动窗口完成了全部的滑动。当然本领域技术人员也可以选择合适的处理方式来进一步地提高运算的效率，在此不再赘述。
根据本申请实施例的另一方面，还提供了一种图像处理装置1100，参照图11，包括：一个或多个处理器1110，一个或多个处理器1110被配置成：获取图像中每个像素点的亮度分量对应的亮度映射值；根据亮度映射值，分别获取每个像素点在色调映射中的增益值，其中，通过同时计算偏移值和增益值的方式获取增益值；分别根据每个像素点对应的增益值，获取每个像素点的色彩分量对应的色彩映射值，以完成图像的色调映射。
在一些实施方式中,一个或多个处理器1110还被配置成:对图像进行色调映射预处理;根据色调映射预处理的结果,获取每个像素点的亮度分量对应的亮度映射值。
在一些实施方式中,色调映射预处理使用以下方法之一:直方图均衡法、基于梯度的压缩法、伽马压缩法、基于视网膜皮层模型的色调映射法和基于学习模型的色调映射法。
在一些实施方式中,针对任一像素点,一个或多个处理器1110还被配置成:获取与任一像素点相邻的一个或多个像素点;根据任一像素点以及与任一像素点相邻的一个或多个像素点各自的亮度分量和对应的亮度映射值获取任一像素点的增益值。
在一些实施方式中,一个或多个处理器1110还被配置成:设定滑动窗口,滑动窗口被配置成至少能够覆盖两个像素点;使滑动窗口在图像中滑动,并在每次滑动前,根据滑动窗口当前覆盖的所有像素点各自的亮度分量和对应的亮度映射值,计算滑动窗口在当前位置对应的增益值和偏移值。
在一些实施方式中,滑动窗口被配置成正方形,并且滑动窗口的边长被配置成像素点边长的整数倍。
在一些实施方式中,滑动使图像中的每个像素点均被滑动窗口覆盖至少一次。
在一些实施方式中,滑动使图像中的每个像素点均被滑动窗口覆盖相同的次数。
在一些实施方式中,滑动使用固定的滑动步长。
在一些实施方式中,滑动使用固定的滑动方向。
在一些实施方式中,滑动方向包括沿像素点的对角线方向。
在一些实施方式中,滑动的滑动步长配置成像素点对角线长度的M倍,M为大于等于1的整数。
在一些实施方式中,滑动窗口被配置成固定大小,并且滑动窗口的大小由图像的色调确定。
在一些实施方式中,一个或多个处理器1110还被配置成:在滑动中,根据图像的色调以及滑动窗口的滑动路径调整滑动窗口的大小。
在一些实施方式中,针对任一像素点,一个或多个处理器1110还被配置成:获取任一像素点在被滑动窗口覆盖时滑动窗口对应的增益值;根据滑动窗口对应的增益值获取任一像素点的增益值。
在一些实施方式中,一个或多个处理器1110还被配置成:当任一像素点被滑动窗口覆盖多次时,获取多个增益值,多个增益值包括滑动窗口在每次覆盖任一像素点时对应的增益值;根据多个增益值获取任一像素点的增益值。
在一些实施方式中,一个或多个处理器1110还被配置成:计算多个增益值的算术平均值,以获取任一像素点的增益值。
在一些实施方式中,一个或多个处理器1110还被配置成:计算多个增益值的加权平均值,以获取任一像素点的增益值。
在一些实施方式中,一个或多个处理器1110还被配置成:获取多个差分值,多个差分值包括:滑动窗口在每次覆盖任一像素点时,任一像素点的亮度映射值与滑动窗口覆盖的其他像素点的亮度映射值之间的平均差分值;根据多个平均差分值确定多个增益值中的每个在加权计算时的权重。
根据本申请实施例的另一方面,还提供了一种成像装置1200,参照图12,包括:传感器1210,用于输出图像;一个或多个处理器1220,一个或多个处理器1220被配置成:获取传感器1210输出的图像中每个像素点的亮度分量对应的亮度映射值;根据亮度映射值,分别获取每个像素点在色调映射中的增益值,其中,通过同时计算偏移值和增益值的方式获取增益值;分别根据每个像素点对应的增益值, 获取每个像素点的色彩分量对应的色彩映射值,以完成图像的色调映射。
在一些实施方式中,一个或多个处理器1220还被配置成:对图像进行色调映射预处理;根据色调映射预处理的结果,获取每个像素点的亮度分量对应的亮度映射值。
在一些实施方式中,色调映射预处理使用以下方法之一:直方图均衡法、基于梯度的压缩法、伽马压缩法、基于视网膜皮层模型的色调映射法和基于学习模型的色调映射法。
在一些实施方式中,针对任一像素点,一个或多个处理器1220还被配置成:获取与任一像素点相邻的一个或多个像素点;根据任一像素点以及与任一像素点相邻的一个或多个像素点各自的亮度分量和对应的亮度映射值获取任一像素点的增益值。
在一些实施方式中,一个或多个处理器1220还被配置成:设定滑动窗口,滑动窗口被配置成至少能够覆盖两个像素点;使滑动窗口在图像中滑动,并在每次滑动前,根据滑动窗口当前覆盖的所有像素点各自的亮度分量和对应的亮度映射值,计算滑动窗口在当前位置对应的增益值和偏移值。
在一些实施方式中,滑动窗口被配置成正方形,并且滑动窗口的边长被配置成像素点边长的整数倍。
在一些实施方式中,滑动使图像中的每个像素点均被滑动窗口覆盖至少一次。
在一些实施方式中,滑动使图像中的每个像素点均被滑动窗口覆盖相同的次数。
在一些实施方式中,滑动使用固定的滑动步长。
在一些实施方式中,滑动使用固定的滑动方向。
在一些实施方式中,滑动方向包括沿像素点的对角线方向。
在一些实施方式中,滑动的滑动步长配置成像素点对角线长度的M倍,M为大于等于1的整数。
在一些实施方式中,滑动窗口被配置成固定大小,并且滑动窗口的大小由图像的色调确定。
在一些实施方式中,一个或多个处理器1220还被配置成:在滑动中,根据图像的色调以及滑动窗口的滑动路径调整滑动窗口的大小。
在一些实施方式中,针对任一像素点,一个或多个处理器1220还被配置成:获取任一像素点在被滑动窗口覆盖时滑动窗口对应的增益值;根据滑动窗口对应的增益值获取任一像素点的增益值。
在一些实施方式中,一个或多个处理器1220还被配置成:当任一像素点被滑动窗口覆盖多次时,获取多个增益值,多个增益值包括滑动窗口在每次覆盖任一像素点时对应的增益值;根据多个增益值获取任一像素点的增益值。
在一些实施方式中,一个或多个处理器1220还被配置成:计算多个增益值的算术平均值,以获取任一像素点的增益值。
在一些实施方式中,一个或多个处理器1220还被配置成:计算多个增益值的加权平均值,以获取任一像素点的增益值。
在一些实施方式中,一个或多个处理器1220还被配置成:获取多个差分值,多个差分值包括:滑动窗口在每次覆盖任一像素点时,任一像素点的亮度映射值与滑动窗口覆盖的其他像素点的亮度映射值之间的平均差分值;根据多个平均差分值确定多个增益值中的每个在加权计算时的权重。
根据本申请实施例的另一方面,还提供了一种计算机可读存储介质1300,参照图13,计算机可读存储介质1300上存储有计算机指令1310,计算机指令1310被执行时实现如上任一所述的色调映射方法。计算机可读存储介质可以包括易失性或非易失性、磁性、半导体、磁带、光学、可移除、不可移除或其他类型的计算机可读存储介质或计算机可读存储装置。例如,如所公开的,计算机可读存储介质可以是其上存储有计算机指令的存储单元或存储模块。在一些实施例中,计算机可读存储介质可以是其上存储有计算机指令的盘或闪存驱动器。
本领域技术人员还将理解，参考本申请所描述的各种示例性的逻辑块、模块、电路和算法步骤可以被实现为专用电子硬件、计算机软件或二者的组合。例如，模块/单元可以由一个或多个处理器来实现，以使该一个或多个处理器成为一个或多个专用处理器，用于执行存储在计算机可读存储介质中的软件指令以执行模块/单元的专用功能。
附图中的流程图和框图示出了根据本申请的多个实施例的系统和方法的可能实现的系统架构、功能和操作。就这一点而言,流程图或框图中的每个框可以表示一个模块、一个程序段或代码的一部分,其中模块、程序段或代码的一部分包括用于实现指定的逻辑功能的一个或多个可执行指令。还应该注意的是,在一些备选实施方式中,框中标记的功能还可以以与附图中标记的顺序不同的顺序发生。例如,实际上可以基本并行地执行两个连续的块,并且有时也可以以相反的顺序执行,这取决于所涉及的功能。框图和/或流程图中的每个框以及框图和/或流程图中的框的组合可以由用于执行相应的功能或操作的专用的基于硬件的系统来实现,或者可以通过专用硬件和计算机指令的组合来实现。
如本领域技术人员将理解的,本申请的实施例可以体现为方法、系统、或计算机程序产品。因此,本申请的实施例可以采取完全硬件实施例、完全软件实施例或组合了软件和硬件的实施例的形式,以允许专用部件来执行上述功能。此外,本申请的实施例可以采取计算机程序产品的形式,其体现在包含计算机可读程序代码的一个或多个有形和/或非暂时性计算机可读存储介质中。一般形式的非暂时性计算机可读介质包括例如软盘、柔性盘、硬盘、固态驱动器、磁带或其它任何磁性数据存储介质、CD-ROM、任何其它光学数据存储介质、具有孔形式的任何物理介质、RAM、PROM和EPROM、FLASH-EPROM或任何其他闪存存储器、NVRAM、高速缓存、寄存器、任何其他存储芯片或胶卷、以及它们的联网版本。
参照根据本申请的实施例的方法、装置和计算机程序产品的流程图和/或框图,来描述本申请的各实施例。应当理解,流程图和/或框图中的每个流程和/或框,以及流程图和/或框图中的多个流程和/或框的组合,可以通过计算机程序指令来实现。这些计算机程序指令可以提供给计算机的处理器、嵌入式处理器或其他可编程数据处理装置以产生专用机器,使得经由计算机的处理器或其他可编程数据处理装置 执行的这些指令创建用来实现流程图中的一个或多个流程和/或框图中的一个或多个框中指定的功能的装置。
这些计算机程序指令也可以存储在指导计算机或其他可编程数据处理装置以特定方式运行的计算机可读存储器中,使得计算机可读存储器中存储的指令产生包括指令装置的制造产品,该指令装置实现流程图中的一个或多个流程和/或框图中的一个或多个框中指定的功能。
这些计算机程序指令也可以装载在计算机或其他可编程数据处理装置中,使一系列可操作步骤在计算机或其他可编程装置上执行以产生由计算机实现的处理,使得在计算机或其他可编程装置上执行的指令提供用于实现流程图中的一个或多个流程和/或框图中的一个或多个框中指定的功能的步骤。在典型配置中,计算机装置包括一个或多个中央处理单元(CPU)、输入/输出接口、网络接口和存储器。存储器可以包括易失性存储器、随机存取存储器(RAM)和/或非易失性存储器等形式,例如计算机可读存储介质中的只读存储器(ROM)或闪存RAM。存储器是计算机可读存储介质的示例。
计算机可读存储介质是指可以存储处理器可读的信息或数据的任何类型的物理存储器。因此,计算机可读存储介质可以存储用于由一个或多个处理器执行的指令,包括用于使处理器执行与本文描述的实施例一致的步骤或阶段的指令。计算机可读介质包括非易失性和易失性介质以及可移除和不可移除介质,其中信息存储可以用任何方法或技术来实现。信息可以是计算机可读指令的模块、数据结构和程序、或其他数据。非暂时性计算机可读介质的示例包括但不限于:相变随机存取存储器(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其它类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、闪存或其它存储器技术、光盘只读存储器(CD-ROM)、数字多功能盘(DVD)或其他光存储器、盒式磁带、磁带或磁盘存储器或其他磁存储装置、高速缓存、寄存器或可用于存储能够被计算机装置访问的信息的任何 其他非传输介质。计算机可读存储介质是非暂时性的,并且不包括诸如调制数据信号和载波之类的暂时性介质。
尽管本文描述了所公开的原理的示例和特征,但是在不脱离所公开的实施例的精神和范围的情况下,可以进行修改、适应性改变和其他实现。此外,词语“包含”、“具有”、“包含有”和“包括”以及其它类似形式旨在在含义上是等同的并且是开放性的,这些词语中的任何一个之后的一个或多个项目并不意在作为这样的一个或多个项目的详尽列表,也并不意在仅限于所列出的一个或多个项目。还必须注意,如本文和所附权利要求书中所使用的,除非上下文另有明确说明,否则单数形式“一”、“一个”和“所述”包括复数指示物。
应该理解的是,本申请不限于上面已经描述并在附图中示出的确切结构,并且可以在不脱离本申请范围的情况下进行各种修改和变化。用意在于,本申请的范围应当仅由所附权利要求限定。
Claims (58)
- 一种色调映射方法,包括:获取图像中每个像素点的亮度分量对应的亮度映射值;根据所述亮度映射值,分别获取每个像素点在色调映射中的增益值,其中,通过同时计算偏移值和增益值的方式获取所述增益值;分别根据每个像素点对应的增益值,获取每个像素点的色彩分量对应的色彩映射值,以完成所述图像的色调映射。
- 根据权利要求1所述的色调映射方法,其中,所述获取图像中每个像素点的亮度分量对应的亮度映射值包括:对所述图像进行色调映射预处理;根据所述色调映射预处理的结果,获取每个像素点的亮度分量对应的亮度映射值。
- 根据权利要求2所述的色调映射方法,其中,所述色调映射预处理使用以下方法之一:直方图均衡法、基于梯度的压缩法、伽马压缩法、基于视网膜皮层模型的色调映射法和基于学习模型的色调映射法。
- 根据权利要求1至3中任一项所述的色调映射方法,其中,分别获取每个像素点在色调映射中的增益值时,针对任一像素点,获取增益值包括:获取与所述任一像素点相邻的一个或多个像素点;根据所述任一像素点以及与所述任一像素点相邻的一个或多个像素点各自的亮度分量和对应的亮度映射值获取所述任一像素点的增益值。
- 根据权利要求1至3中任一项所述的色调映射方法,还包括:设定滑动窗口,所述滑动窗口被配置成至少能够覆盖两个像素点;使所述滑动窗口在所述图像中滑动,并在每次滑动前,根据所述滑动窗口当前覆盖的所有像素点各自的亮度分量和对应的亮度映射值,计算所述滑动窗口在当前位置对应的增益值和偏移值。
- 根据权利要求5所述的色调映射方法,其中,所述滑动窗口被配置成正方形,并且所述滑动窗口的边长被配置成像素点边长的整数倍。
- 根据权利要求5所述的色调映射方法,其中,所述滑动使所述图像中的每个像素点均被所述滑动窗口覆盖至少一次。
- 根据权利要求5所述的色调映射方法,其中,所述滑动使所述图像中的每个像素点均被所述滑动窗口覆盖相同的次数。
- 根据权利要求5所述的色调映射方法,其中,所述滑动使用固定的滑动步长。
- 根据权利要求5所述的色调映射方法,其中,所述滑动使用固定的滑动方向。
- 根据权利要求10所述的色调映射方法,其中,所述滑动方向包括沿像素点的对角线方向。
- 根据权利要求11所述的色调映射方法,其中,所述滑动的滑动步长配置成像素点对角线长度的M倍,M为大于等于1的整数。
- 根据权利要求5至12中任一项所述的色调映射方法,其中,所述滑动窗口被配置成固定大小,并且所述滑动窗口的大小由所述图像的色调确定。
- 根据权利要求5至12中任一项所述的色调映射方法,还包括:在所述滑动中,根据所述图像的色调以及所述滑动窗口的滑动路径调整所述滑动窗口的大小。
- 根据权利要求5至14中任一项所述的色调映射方法,其中,所述分别获取每个像素点在色调映射中对应的增益值时,针对任一像素点,获取增益值包括:获取所述任一像素点在被所述滑动窗口覆盖时所述滑动窗口对应的增益值;根据所述滑动窗口对应的增益值获取所述任一像素点的增益值。
- 根据权利要求15所述的色调映射方法,其中,所述获取增益值还包括:当所述任一像素点被所述滑动窗口覆盖多次时,获取多个增益值,所述多个增益值包括所述滑动窗口在每次覆盖所述任一像素点时对应的增益值;根据所述多个增益值获取所述任一像素点的增益值。
- 根据权利要求16所述的色调映射方法,其中,所述根据所述多个增益值获取所述任一像素点的增益值包括:计算所述多个增益值的算术平均值,以获取所述任一像素点的增益值。
- 根据权利要求16所述的色调映射方法,其中,所述根据所述多个增益值获取所述任一像素点的增益值包括:计算所述多个增益值的加权平均值,以获取所述任一像素点的增益值。
- 根据权利要求18所述的色调映射方法，还包括：获取多个差分值，所述多个差分值包括：所述滑动窗口在每次覆盖所述任一像素点时，所述任一像素点的亮度映射值与所述滑动窗口覆盖的其他像素点的亮度映射值之间的平均差分值；根据所述多个平均差分值确定所述多个增益值中的每个在加权计算时的权重。
- 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机指令,所述计算机指令被执行时,实现权利要求1-19任意一项所述的色调映射方法。
- 一种图像处理装置,所述图像处理装置用于对图像进行色调映射,所述图像处理装置包括:一个或多个处理器,所述一个或多个处理器被配置成:获取所述图像中每个像素点的亮度分量对应的亮度映射值;根据所述亮度映射值,分别获取每个像素点在色调映射中的增益值,其中,通过同时计算偏移值和增益值的方式获取所述增益值;分别根据每个像素点对应的增益值,获取每个像素点的色彩分量对应的色彩映射值,以完成所述图像的色调映射。
- 根据权利要求21所述的图像处理装置,其中,所述一个或多个处理器还被配置成:对所述图像进行色调映射预处理;根据所述色调映射预处理的结果,获取每个像素点的亮度分量对应的亮度映射值。
- 根据权利要求22所述的图像处理装置,其中,所述色调映射预处理使用以下方法之一:直方图均衡法、基于梯度的压缩法、伽马压缩法、基于视网膜皮层模型的色调映射法和基于学习模型的色调映射法。
- 根据权利要求21至23中任一项所述的图像处理装置,其中,针对任一像素点,所述一个或多个处理器还被配置成:获取与所述任一像素点相邻的一个或多个像素点;根据所述任一像素点以及与所述任一像素点相邻的一个或多个像素点各自的亮度分量和对应的亮度映射值获取所述任一像素点的增益值。
- 根据权利要求21至23中任一项所述的图像处理装置,其中,所述一个或多个处理器还被配置成:设定滑动窗口,所述滑动窗口被配置成至少能够覆盖两个像素点;使所述滑动窗口在所述图像中滑动,并在每次滑动前,根据所述滑动窗口当前覆盖的所有像素点各自的亮度分量和对应的亮度映射值,计算所述滑动窗口在当前位置对应的增益值和偏移值。
- 根据权利要求25所述的图像处理装置,其中,所述滑动窗口被配置成正方形,并且所述滑动窗口的边长被配置成像素点边长的整数倍。
- 根据权利要求25所述的图像处理装置,其中,所述滑动使所述图像中的每个像素点均被所述滑动窗口覆盖至少一次。
- 根据权利要求25所述的图像处理装置,其中,所述滑动使所述图像中的每个像素点均被所述滑动窗口覆盖相同的次数。
- 根据权利要求25所述的图像处理装置,其中,所述滑动使用固定的滑动步长。
- 根据权利要求25所述的图像处理装置,其中,所述滑动使用固定的滑动方向。
- 根据权利要求30所述的图像处理装置,其中,所述滑动方向包括沿像素点的对角线方向。
- 根据权利要求31所述的图像处理装置,其中,所述滑动的滑动步长配置成像素点对角线长度的M倍,M为大于等于1的整数。
- 根据权利要求25至32中任一项所述的图像处理装置,其中,所述滑动窗口被配置成固定大小,并且所述滑动窗口的大小由所述图像的色调确定。
- 根据权利要求25至32中任一项所述的图像处理装置,其中,所述一个或多个处理器还被配置成:在所述滑动中,根据所述图像的色调以及所述滑动窗口的滑动路径调整所述滑动窗口的大小。
- 根据权利要求25至34中任一项所述的图像处理装置,其中,针对任一像素点,所述一个或多个处理器还被配置成:获取所述任一像素点在被所述滑动窗口覆盖时所述滑动窗口对应的增益值;根据所述滑动窗口对应的增益值获取所述任一像素点的增益值。
- 根据权利要求35所述的图像处理装置,其中,所述一个或多个处理器还被配置成:当所述任一像素点被所述滑动窗口覆盖多次时,获取多个增益值,所述多个增益值包括所述滑动窗口在每次覆盖所述任一像素点时对应的增益值;根据所述多个增益值获取所述任一像素点的增益值。
- 根据权利要求36所述的图像处理装置,其中,所述一个或多个处理器还被配置成:计算所述多个增益值的算术平均值,以获取所述任一像素点的增益值。
- 根据权利要求36所述的图像处理装置,其中,所述一个或多个处理器还被配置成:计算所述多个增益值的加权平均值,以获取所述任一像素点的增益值。
- 根据权利要求38所述的图像处理装置，其中，所述一个或多个处理器还被配置成：获取多个差分值，所述多个差分值包括：所述滑动窗口在每次覆盖所述任一像素点时，所述任一像素点的亮度映射值与所述滑动窗口覆盖的其他像素点的亮度映射值之间的平均差分值；根据所述多个平均差分值确定所述多个增益值中的每个在加权计算时的权重。
- 一种成像装置,包括:传感器,用于输出图像;一个或多个处理器,所述一个或多个处理器被配置成:获取所述传感器输出的所述图像中每个像素点的亮度分量对应的亮度映射值;根据所述亮度映射值,分别获取每个像素点在色调映射中的增益值,其中,通过同时计算偏移值和增益值的方式获取所述增益值;分别根据每个像素点对应的增益值,获取每个像素点的色彩分量对应的色彩映射值,以完成所述图像的色调映射。
- 根据权利要求40所述的成像装置,其中,所述一个或多个处理器还被配置成:对所述图像进行色调映射预处理;根据所述色调映射预处理的结果,获取每个像素点的亮度分量对应的亮度映射值。
- 根据权利要求41所述的成像装置,其中,所述色调映射预处理使用以下方法之一:直方图均衡法、基于梯度的压缩法、伽马压缩法、基于视网膜皮层模型的色调映射法和基于学习模型的色调映射法。
- 根据权利要求40至42中任一项所述的成像装置,其中,针对任一像素点,所述一个或多个处理器还被配置成:获取与所述任一像素点相邻的一个或多个像素点;根据所述任一像素点以及与所述任一像素点相邻的一个或多个像素点各自的亮度分量和对应的亮度映射值获取所述任一像素点的增益值。
- 根据权利要求40至42中任一项所述的成像装置,其中,所述一个或多个处理器还被配置成:设定滑动窗口,所述滑动窗口被配置成至少能够覆盖两个像素点;使所述滑动窗口在所述图像中滑动,并在每次滑动前,根据所述滑动窗口当前覆盖的所有像素点各自的亮度分量和对应的亮度映射值,计算所述滑动窗口在当前位置对应的增益值和偏移值。
- 根据权利要求44所述的成像装置,其中,所述滑动窗口被配置成正方形,并且所述滑动窗口的边长被配置成像素点边长的整数倍。
- 根据权利要求44所述的成像装置,其中,所述滑动使所述图像中的每个像素点均被所述滑动窗口覆盖至少一次。
- 根据权利要求44所述的成像装置,其中,所述滑动使所述图像中的每个像素点均被所述滑动窗口覆盖相同的次数。
- 根据权利要求44所述的成像装置,其中,所述滑动使用固定的滑动步长。
- 根据权利要求44所述的成像装置,其中,所述滑动使用固定的滑动方向。
- 根据权利要求49所述的成像装置,其中,所述滑动方向包括沿像素点的对角线方向。
- 根据权利要求50所述的成像装置,其中,所述滑动的滑动步长配置成像素点对角线长度的M倍,M为大于等于1的整数。
- 根据权利要求44至51中任一项所述的成像装置,其中,所述滑动窗口被配置成固定大小,并且所述滑动窗口的大小由所述图像的色调确定。
- 根据权利要求44至51中任一项所述的成像装置,其中,所述一个或多个处理器还被配置成:在所述滑动中,根据所述图像的色调以及所述滑动窗口的滑动路径调整所述滑动窗口的大小。
- 根据权利要求44至53中任一项所述的成像装置,其中,针对任一像素点,所述一个或多个处理器还被配置成:获取所述任一像素点在被所述滑动窗口覆盖时所述滑动窗口对应的增益值;根据所述滑动窗口对应的增益值获取所述任一像素点的增益值。
- 根据权利要求54所述的成像装置,其中,所述一个或多个处理器还被配置成:当所述任一像素点被所述滑动窗口覆盖多次时,获取多个增益值,所述多个增益值包括所述滑动窗口在每次覆盖所述任一像素点时对应的增益值;根据所述多个增益值获取所述任一像素点的增益值。
- 根据权利要求55所述的成像装置,其中,所述一个或多个处理器还被配置成:计算所述多个增益值的算术平均值,以获取所述任一像素点的增益值。
- 根据权利要求55所述的成像装置，其中，所述一个或多个处理器还被配置成：计算所述多个增益值的加权平均值，以获取所述任一像素点的增益值。
- 根据权利要求57所述的成像装置，其中，所述一个或多个处理器还被配置成：获取多个差分值，所述多个差分值包括：所述滑动窗口在每次覆盖所述任一像素点时，所述任一像素点的亮度映射值与所述滑动窗口覆盖的其他像素点的亮度映射值之间的平均差分值；根据所述多个平均差分值确定所述多个增益值中的每个在加权计算时的权重。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/094666 WO2022241676A1 (zh) | 2021-05-19 | 2021-05-19 | 色调映射方法、图像处理装置及成像装置 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/094666 WO2022241676A1 (zh) | 2021-05-19 | 2021-05-19 | 色调映射方法、图像处理装置及成像装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022241676A1 true WO2022241676A1 (zh) | 2022-11-24 |
Family
ID=84140073
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/094666 WO2022241676A1 (zh) | 2021-05-19 | 2021-05-19 | 色调映射方法、图像处理装置及成像装置 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2022241676A1 (zh) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101854557A (zh) * | 2008-12-31 | 2010-10-06 | 东部高科股份有限公司 | 实时图像生成器 |
US20100328490A1 (en) * | 2009-06-26 | 2010-12-30 | Seiko Epson Corporation | Imaging control apparatus, imaging apparatus and imaging control method |
US20110279710A1 (en) * | 2010-05-12 | 2011-11-17 | Samsung Electronics Co., Ltd. | Apparatus and method for automatically controlling image brightness in image photographing device |
US20170287149A1 (en) * | 2016-04-01 | 2017-10-05 | Stmicroelectronics (Grenoble 2) Sas | Macropixel processing system, method and article |
CN109416832A (zh) * | 2016-06-29 | 2019-03-01 | 杜比实验室特许公司 | 高效的基于直方图的亮度外观匹配 |
CN107993189A (zh) * | 2016-10-27 | 2018-05-04 | 福州瑞芯微电子股份有限公司 | 一种基于局部分块的图像色调动态调节方法和装置 |
CN110473158A (zh) * | 2019-08-14 | 2019-11-19 | 上海世茂物联网科技有限公司 | 一种车牌图像亮度的处理方法、装置及设备 |
Legal Events
Date | Code | Title | Description
---|---|---|---
— | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21940133; Country of ref document: EP; Kind code of ref document: A1
— | NENP | Non-entry into the national phase | Ref country code: DE
— | 122 | Ep: pct application non-entry in european phase | Ref document number: 21940133; Country of ref document: EP; Kind code of ref document: A1