WO2024011976A1 - Method for extending the dynamic range of an image, and electronic device (一种扩展图像动态范围的方法及电子设备) - Google Patents

Method for extending the dynamic range of an image, and electronic device

Info

Publication number
WO2024011976A1
WO2024011976A1 · PCT/CN2023/088196 (CN2023088196W)
Authority
WO
WIPO (PCT)
Prior art keywords
original image
dynamic range
brightness
segmentation result
threshold segmentation
Prior art date
Application number
PCT/CN2023/088196
Other languages
English (en)
French (fr)
Inventor
雷财华
武理友
胡志成
丁岳
邵涛
张嘉森
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd. (荣耀终端有限公司)
Priority to EP23762126.3A (published as EP4328852A1)
Publication of WO2024011976A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10144Varying exposure
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing

Definitions

  • the present application relates to the field of image processing, and in particular to a method and electronic device for extending the dynamic range of an image.
  • This application provides a method and electronic device for expanding the dynamic range of an image, which can expand the dynamic range of the original image, thereby improving the user's visual experience.
  • this application provides a method for extending the dynamic range of an image, which method includes:
  • the original image information includes the original image and brightness level information.
  • the brightness level information is used to indicate the brightness value of the pixel in the original image.
  • the threshold segmentation result corresponding to the original image is obtained.
  • the threshold segmentation result includes the regional category label corresponding to each pixel in the original image.
  • the area category mark is used to indicate the category of the area corresponding to each pixel in the original image.
  • Areas include standard dynamic range areas and extended dynamic range areas. The brightness value corresponding to the pixel in the standard dynamic range area is lower than the brightness threshold, and the brightness value corresponding to the pixel in the extended dynamic range area is higher than or equal to the brightness threshold.
  • the original image and the threshold segmentation result corresponding to the original image are associated and saved for subsequent dynamic range expansion of the extended dynamic range area in the original image based on the threshold segmentation result corresponding to the original image, thereby generating an extended dynamic range map corresponding to the original image.
  • the dynamic range of the extended dynamic range image is greater than the dynamic range of the original image.
  • the above solution obtains the threshold segmentation result corresponding to the original image based on the brightness level information contained in the original image information, and associates and saves the original image and the threshold segmentation result corresponding to the original image.
  • the threshold segmentation result includes the regional category label corresponding to each pixel in the original image.
  • the area category mark can indicate whether each pixel in the original image corresponds to a standard dynamic range area (an area with a brightness value lower than the brightness threshold) or an extended dynamic range area (an area with a brightness value higher than or equal to the brightness threshold).
  • the dynamic range expansion of the extended dynamic range area in the original image can be performed according to the threshold segmentation result corresponding to the original image, thereby realizing the dynamic range expansion of the original image and obtaining an extended dynamic range map with a larger dynamic range than the original image, which can better express the gradients and levels of light and color in the image and give users a visual effect closer to the real world.
  • the stored threshold segmentation results corresponding to the original image only include the regional category labels corresponding to each pixel in the original image, so the amount of data is small and the storage space required is small.
  • the above brightness level information is an exposure image collected in the same scene as the original image.
  • the subject in this exposure image is the same as the subject in the original image.
  • the brightness value of each pixel in the exposure map indicates the brightness value of the corresponding pixel in the original image.
  • the above-mentioned method of obtaining the threshold segmentation result corresponding to the original image based on the brightness level information and the brightness threshold included in the original image information may be: obtaining the threshold segmentation result corresponding to the original image based on the brightness value of each pixel in the exposure image and the brightness threshold.
  • the exposure image in the above implementation is collected in the same scene as the original image, and the photographed object in the exposure image is the same as that in the original image. Therefore, the brightness value of each pixel in the exposure image can be used to indicate the brightness value of the corresponding pixel in the original image, and the threshold segmentation result corresponding to the original image can be obtained from the brightness values in the exposure image and the brightness threshold. When the dynamic range of the extended dynamic range area in the original image is subsequently expanded based on this threshold segmentation result, the brightness value of each pixel in the exposure image is in effect combined into the expansion, so the information used is richer and the dynamic range expansion effect is better.
  • the threshold segmentation result corresponding to the original image is obtained based on the brightness value and brightness threshold of each pixel in the exposure map, which may be: a brightness threshold is determined based on the brightness value of each pixel in the exposure map, single-level threshold segmentation is performed on the exposure image based on that brightness threshold, and the threshold segmentation result is obtained.
  • the threshold segmentation result includes the regional category label corresponding to each pixel in the original image.
  • the area category mark is used to indicate the category of the area corresponding to each pixel in the original image.
  • the area in this implementation includes a standard dynamic range area and an extended dynamic range area.
  • a single-level threshold segmentation method can be used for exposure images in which the target is relatively simple.
  • the obtained areas include only a standard dynamic range area and an extended dynamic range area, and the corresponding area category labels take only two values.
  • the corresponding threshold segmentation result can therefore be represented by a data sequence or a two-dimensional matrix (each element of which has only two possible values, such as 0 and 1), and the stored threshold segmentation result occupies less storage space.
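The single-level case above can be sketched in a few lines. The 8-bit exposure map and the threshold value of 200 below are illustrative assumptions, not values fixed by the application:

```python
import numpy as np

def single_level_segmentation(exposure, threshold):
    """Return a binary area-category mask for an 8-bit exposure map:
    0 = standard dynamic range area (brightness below the threshold),
    1 = extended dynamic range area (brightness >= the threshold)."""
    return (exposure >= threshold).astype(np.uint8)

# Small synthetic exposure map; 200 is a hypothetical threshold.
exposure = np.array([[10, 250], [199, 200]], dtype=np.uint8)
mask = single_level_segmentation(exposure, 200)
# mask == [[0, 1], [0, 1]]
```

Because the mask holds only 0/1 values, it can be stored as the data sequence or two-dimensional matrix described above.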
  • the threshold segmentation result corresponding to the original image is obtained based on the brightness value and brightness threshold of each pixel in the exposure map, which may be: multiple brightness thresholds are determined based on the brightness value of each pixel in the exposure map, and multi-level threshold segmentation is performed on the exposure map based on the multiple brightness thresholds to obtain the threshold segmentation result.
  • the threshold segmentation result includes the regional category label corresponding to each pixel in the original image.
  • the area category mark is used to indicate the category of the area corresponding to each pixel in the original image.
  • the area in this implementation includes a standard dynamic range area and multiple levels of extended dynamic range areas.
  • a multi-level threshold segmentation method can be used, and the resulting areas include a standard dynamic range area and multiple extended dynamic range areas of different levels.
  • the corresponding area category label then has more than two values.
  • multi-level threshold segmentation is performed on more complex exposure images to obtain multiple regions with finer-grained segmentation, that is, a standard dynamic range region and multiple levels of extended dynamic range regions. When the corresponding expansion coefficients are then used for tone mapping, images with richer brightness gradients and levels can be obtained.
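As a sketch of the multi-level case, assuming an 8-bit exposure map and two hypothetical brightness thresholds (170 and 210), the multi-valued area category labels can be produced with `numpy.digitize`:

```python
import numpy as np

def multi_level_segmentation(exposure, thresholds):
    """Assign each pixel an area-category label using several brightness
    thresholds: label 0 is the standard dynamic range area; labels 1..n
    mark increasingly bright extended dynamic range areas. A pixel whose
    brightness equals a threshold falls into the higher-level area."""
    thresholds = sorted(thresholds)
    return np.digitize(exposure, thresholds).astype(np.uint8)

exposure = np.array([50, 180, 220, 255], dtype=np.uint8)
labels = multi_level_segmentation(exposure, [170, 210])
# labels == [0, 1, 2, 2]
```

With n thresholds the labels take n+1 values, matching the "more than two values" noted above.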
  • the above threshold can be determined based on the brightness value of each pixel in the exposure map.
  • a corresponding brightness histogram is constructed based on the brightness value of each pixel in the exposure map.
  • This brightness histogram can display the number of pixels in the exposure map when the brightness values are 0, 1, ..., 255.
  • the brightness histogram includes an x-axis and a y-axis, where the value on the x-axis represents the brightness value, and the values are 0, 1, ..., 255; the value on the y-axis represents the number of pixels.
  • y0 is the number of pixels in the exposure map with a brightness value x of 0
  • y1 is the number of pixels in the exposure map with a brightness value x of 1
  • yk is the number of pixels in the exposure map with a brightness value x of k
  • y255 is the number of pixels in the exposure map with a brightness value x of 255.
  • the brightness value k is used as the threshold.
  • the above implementation can be applied to determine a single threshold in a single-level threshold segmentation method, or to determine multiple brightness thresholds in a multi-level threshold segmentation method. For example, taking a unique value for the preset quantity yields a unique threshold, while taking multiple different values for the preset quantity yields multiple brightness thresholds.
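One plausible reading of this histogram rule is sketched below, under the assumption that the threshold k is the smallest brightness whose tail count (pixels at or above k) does not exceed the preset quantity; this exact rule is not spelled out in the text here, so treat it as illustrative:

```python
import numpy as np

def histogram_threshold(exposure, preset_quantity):
    """Pick a brightness threshold from the histogram. Assumed rule:
    choose the smallest brightness k such that the number of pixels
    with brightness >= k is at most preset_quantity."""
    hist = np.bincount(exposure.ravel(), minlength=256)  # y0 .. y255
    tail = np.cumsum(hist[::-1])[::-1]  # tail[k] = pixels with brightness >= k
    candidates = np.nonzero(tail <= preset_quantity)[0]
    return int(candidates[0]) if candidates.size else 255

exposure = np.array([0, 0, 100, 200, 250, 250], dtype=np.uint8)
k = histogram_threshold(exposure, preset_quantity=3)
```

Different preset quantities yield different k values, which matches using one preset quantity for a single threshold and several for multi-level segmentation.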
  • the above threshold can be determined based on the brightness value of each pixel in the exposure map. Specifically, a corresponding brightness histogram is constructed based on the brightness value of each pixel in the exposure map. This brightness histogram can display the number of pixels in the exposure map when the brightness values are 0, 1, ..., 255. Based on the brightness histogram, the Otsu method (OTSU algorithm) is used to determine the threshold.
  • OTSU Otsu method
  • the above implementation method can be applied to a single-level threshold segmentation method, where a single threshold is determined through the OTSU algorithm, or it can be applied to a multi-level threshold segmentation method, where multiple brightness thresholds are determined through the OTSU algorithm.
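A minimal NumPy implementation of the single-threshold Otsu method (maximizing the between-class variance of the brightness histogram) might look like this; it is a textbook version, not the application's exact procedure:

```python
import numpy as np

def otsu_threshold(exposure):
    """Classic Otsu method: choose the brightness threshold that
    maximizes the between-class variance of the histogram."""
    hist = np.bincount(exposure.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # probability of class 0 (<= t)
    mu = np.cumsum(prob * np.arange(256))    # cumulative mean of class 0
    mu_t = mu[-1]                            # global mean brightness
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)       # undefined at omega = 0 or 1
    return int(np.argmax(sigma_b2))

# Two well-separated brightness clusters; Otsu lands between them.
exposure = np.concatenate([np.full(100, 30, np.uint8),
                           np.full(100, 220, np.uint8)])
t = otsu_threshold(exposure)
```

Multi-level variants repeat this search over sub-ranges of the histogram to obtain multiple thresholds.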
  • the threshold T is computed from statistics of the brightness values, with the standard deviation scaled by a standard deviation coefficient.
  • the above implementation method can be applied to determine a single threshold in a single-level threshold segmentation method, or can be applied to determine multiple brightness thresholds in a multi-level threshold segmentation method.
  • the standard deviation coefficient can take a unique value, yielding a unique threshold T; it can also take multiple different values, yielding multiple brightness thresholds T.
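The exact formula is not reproduced in this excerpt; the sketch below assumes the common form T = mean + coefficient × standard deviation, with `coeff` standing in for the standard deviation coefficient mentioned above:

```python
import numpy as np

def std_threshold(exposure, coeff):
    """Assumed statistics-based threshold: T = mean + coeff * std,
    where coeff is the standard deviation coefficient. Different coeff
    values yield different thresholds T."""
    b = exposure.astype(np.float64)
    return float(b.mean() + coeff * b.std())

exposure = np.array([100, 100, 200, 200], dtype=np.uint8)
t1 = std_threshold(exposure, 1.0)                                   # single threshold
t05, t15 = std_threshold(exposure, 0.5), std_threshold(exposure, 1.5)  # multi-level
```

One coefficient gives a single-level segmentation; a set of coefficients gives the multiple thresholds needed for multi-level segmentation.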
  • the above threshold segmentation result is a data sequence.
  • the regional category mark corresponding to each pixel in the original image is represented by the value of each corresponding element in the data sequence.
  • the values of elements in the data sequence include 0 and 1.
  • the pixel corresponding to the element with a value of 0 in the data sequence in the original image corresponds to the standard dynamic range area.
  • the element with a value of 1 in the data sequence corresponds to the pixel point in the original image corresponding to the extended dynamic range area.
  • the above threshold segmentation result is a two-dimensional matrix.
  • the regional category mark corresponding to each pixel in the original image is represented by the value of each corresponding element in the two-dimensional matrix.
  • the values of elements in the two-dimensional matrix include 0 and 1.
  • the pixel corresponding to the element with a value of 0 in the two-dimensional matrix in the original image corresponds to the standard dynamic range area.
  • the element with a value of 1 in the two-dimensional matrix indicates that the corresponding pixel in the original image belongs to the extended dynamic range area.
  • the obtained areas include only one standard dynamic range area and one extended dynamic range area, so the corresponding area category labels take only two values.
  • the obtained threshold segmentation result can be represented by a data sequence or a two-dimensional matrix whose elements take only two values, thereby reducing the storage space occupied by the threshold segmentation result.
  • before associating and saving the original image and the threshold segmentation result corresponding to the original image, the method further includes: denoting the threshold segmentation result corresponding to the original image as I, and performing downsampling on the threshold segmentation result I to obtain the downsampled threshold segmentation result I′ corresponding to the original image. The above-mentioned associated saving of the original image and the threshold segmentation result corresponding to the original image then includes: associating and saving the original image and the downsampled threshold segmentation result I′.
  • downsampling the threshold segmentation result corresponding to the original image can effectively reduce the data size of the threshold segmentation result. Downsampling the threshold segmentation result corresponding to the original image and then storing it can reduce the storage space occupied by the threshold segmentation result.
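A minimal sketch of shrinking the mask before storage, assuming simple strided subsampling (the text does not prescribe a particular downsampling scheme):

```python
import numpy as np

def downsample_mask(mask, factor):
    """Downsample a binary threshold-segmentation mask by strided
    subsampling, reducing the data that must be stored alongside the
    original image. (Strided subsampling is one possible scheme.)"""
    return mask[::factor, ::factor]

mask_I = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [1, 1, 0, 0],
                   [1, 1, 0, 0]], dtype=np.uint8)
mask_I_prime = downsample_mask(mask_I, 2)
# mask_I_prime == [[0, 1], [1, 0]] -- a quarter of the original elements
```

A factor-2 subsampling keeps one element in four, so the stored segmentation result occupies roughly a quarter of the space.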
  • this application provides a method for extending the dynamic range of images, including:
  • the threshold segmentation result includes the area category label corresponding to each pixel in the original image. The areas include extended dynamic range areas and standard dynamic range areas. The brightness value of a pixel in the standard dynamic range area is lower than the brightness threshold, and the brightness value of a pixel in the extended dynamic range area is higher than or equal to the brightness threshold. According to the threshold segmentation result corresponding to the original image, the extended dynamic range area in the original image is dynamically expanded, thereby generating an extended dynamic range map corresponding to the original image; the dynamic range of the extended dynamic range map is greater than the dynamic range of the original image.
  • when the above scheme extends the dynamic range of the original image, it determines from the threshold segmentation result whether each pixel in the original image corresponds to the standard dynamic range area or the extended dynamic range area, and expands the dynamic range of the extended dynamic range area in the original image. The dynamic range of the original image is thus enlarged, and an extended dynamic range image with a larger dynamic range than the original image is obtained, which can better express the gradients and levels of light and color in the image and bring users a visual effect closer to the real world.
  • dynamic range expansion is performed on the extended dynamic range area in the original image according to the threshold segmentation result corresponding to the original image, thereby generating an extended dynamic range map corresponding to the original image, which includes:
  • performing the following judgment and operation for each pixel in the original image:
  • based on the area category mark corresponding to each pixel in the original image included in the threshold segmentation result, it is determined whether the pixel corresponds to a standard dynamic range area or an extended dynamic range area.
  • if the pixel P corresponds to the standard dynamic range area, the expansion coefficient αP corresponding to the pixel P is 1, and the R, G, and B values of the pixel P are directly used as the R, G, and B values of the corresponding pixel P′ in the extended dynamic range map.
  • if the pixel P corresponds to the extended dynamic range area, the expansion coefficient αP corresponding to the pixel is ≥ 1. Multiply the R, G, and B values of the pixel P by the expansion coefficient αP to obtain a new set of R, G, and B values; use the new R, G, and B values as the R, G, and B values of the pixel P′ corresponding to the pixel P in the extended dynamic range map.
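The per-pixel judgment above can be vectorized. The coefficient map `coeffs` below is a hypothetical input (how each coefficient is computed is described later in the text):

```python
import numpy as np

def expand_dynamic_range(image_rgb, mask, coeffs):
    """Pixels in the standard dynamic range area (mask == 0) keep their
    R, G, B values (coefficient 1); pixels in the extended dynamic range
    area (mask == 1) have R, G, B multiplied by their expansion
    coefficient >= 1. `coeffs` is a per-pixel coefficient map."""
    img = image_rgb.astype(np.float64)
    alpha = np.where(mask == 1, coeffs, 1.0)  # coefficient 1 for SDR pixels
    return img * alpha[..., None]             # broadcast over R, G, B channels

image = np.array([[[10, 20, 30], [100, 100, 100]]], dtype=np.uint8)  # 1x2 RGB
mask = np.array([[0, 1]], dtype=np.uint8)
coeffs = np.array([[1.0, 2.0]])
out = expand_dynamic_range(image, mask, coeffs)
# out == [[[10, 20, 30], [200, 200, 200]]]
```

Only the extended dynamic range pixel is brightened; the standard dynamic range pixel passes through unchanged, as the two branches above require.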
  • the above solution performs dynamic range expansion on the extended dynamic range area in the original image based on the threshold segmentation results corresponding to the original image.
  • whether each pixel in the original image corresponds to the standard dynamic range area or the extended dynamic range area is determined based on the threshold segmentation result corresponding to the original image, and the corresponding expansion coefficient is used to tone-map the R, G, and B values of each pixel in the original image to obtain an extended dynamic range map.
  • for the pixels corresponding to the standard dynamic range area in the original image, the corresponding expansion coefficient is taken to be 1; their R, G, and B values remain unchanged and no brightening is performed, so their brightness values are preserved.
  • for the pixels corresponding to the extended dynamic range area (the area with a brightness value higher than or equal to the brightness threshold), their R, G, and B values are multiplied by an expansion coefficient greater than or equal to 1 to obtain a new set of R, G, and B values; that is to say, the pixels corresponding to the extended dynamic range area of the original image are brightened through tone mapping.
  • the dynamic range of an image is the ratio of the maximum brightness value of a pixel in the image to the minimum brightness value, that is, the maximum brightness value of a pixel in the image/the minimum brightness value of a pixel in the image.
  • the above solution combines the brightness values of the pixels in the original image with their R, G, and B values for tone mapping, rather than using only the brightness values or only the R, G, and B values of the pixels in the original image.
  • the above solution of the present application thus utilizes richer information and has a better dynamic range expansion effect. At the same time, pixels corresponding to low-brightness areas of the original image are not brightened, which also reduces the amount of data processed, thereby speeding up the expansion of the dynamic range of the image and improving real-time performance.
  • determining the expansion coefficient αP ≥ 1 corresponding to the pixel point includes:
  • the tone mapping function value F(Gray) corresponding to the grayscale value Gray is obtained according to the tone mapping function F(x).
  • the tone mapping function F(x) is a monotonic non-decreasing function, and F(x) ≥ 1.
  • the tone mapping function value F(Gray) is used as the expansion coefficient αP corresponding to the pixel point.
  • the above solution determines the expansion coefficient of the pixel based on the gray value of the pixel and the tone mapping function for the pixels in the extended dynamic range area.
  • the tone mapping function is a monotonic non-decreasing function, and its function value is greater than or equal to 1. In this way, for different pixels with different grayscale values in the extended dynamic range area, different expansion coefficients can be calculated through the tone mapping function. Therefore, different pixels in the extended dynamic range area of the original image can be brightened to varying degrees based on different expansion coefficients, so that the brightness values of the pixels in the original image can be evenly mapped to a brightness value range that meets the display requirements. , the corresponding extended dynamic range map is obtained.
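As an illustration only: the text requires F to be monotonic non-decreasing with F(x) ≥ 1 but does not fix its shape, so the linear ramp below (from 1 up to an assumed maximum coefficient of 4) is one arbitrary choice satisfying those constraints:

```python
import numpy as np

def tone_map_coeff(gray, max_coeff=4.0):
    """One possible tone mapping function F: a linear ramp from 1 at
    gray 0 up to max_coeff at gray 255. It is monotonic non-decreasing
    and F(x) >= 1 everywhere, as required; max_coeff is an assumption."""
    g = np.asarray(gray, dtype=np.float64)
    return 1.0 + (max_coeff - 1.0) * (g / 255.0)

grays = np.array([0, 128, 255])
alphas = tone_map_coeff(grays)
# Brighter pixels receive larger expansion coefficients, so different
# grayscale values in the extended dynamic range area are brightened
# to different degrees.
```

Any other monotonic non-decreasing curve with values ≥ 1 (e.g. a gamma-style curve) would satisfy the same constraints.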
  • the exposure map may be a short exposure map or a medium exposure map. Because the lightness and darkness of a short or medium exposure image are more suitable, and in particular the information in the brighter parts of the image is not lost, better color gradients and levels can be obtained when the pixels in the extended dynamic range area of the original image are subsequently brightened. Combined with the above technical solution, the pixels in the standard dynamic range area of the original image are not brightened and their pixel values remain unchanged; therefore, even if the information in the darker parts of the short or medium exposure image is incomplete, the dynamic range expansion effect of the above technical solution is not affected.
  • when the threshold segmentation result is the downsampled threshold segmentation result I′ corresponding to the original image,
  • the above-mentioned dynamic range expansion of the extended dynamic range area in the original image, thereby generating an extended dynamic range map corresponding to the original image, is performed as follows: the threshold segmentation result I′ is upsampled to obtain the threshold segmentation result I″.
  • the extended dynamic range area in the original image is then dynamically expanded according to I″, thereby generating the extended dynamic range image corresponding to the original image.
  • the obtained threshold segmentation result I″ has the same size as the original image, and its elements correspond one-to-one to the pixels in the original image, so the threshold segmentation result can be used to determine the expansion coefficient corresponding to each pixel in the original image, thereby dynamically expanding the original image.
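A sketch of this upsampling step, assuming nearest-neighbor interpolation so that I″ keeps only the original label values (the text does not prescribe the interpolation method):

```python
import numpy as np

def upsample_mask(mask_small, target_shape):
    """Nearest-neighbor upsampling of the downsampled segmentation
    result I' back to the original image size, producing I'' with one
    element per original pixel and only the original label values."""
    rows = (np.arange(target_shape[0]) * mask_small.shape[0]) // target_shape[0]
    cols = (np.arange(target_shape[1]) * mask_small.shape[1]) // target_shape[1]
    return mask_small[np.ix_(rows, cols)]

mask_prime = np.array([[0, 1], [1, 0]], dtype=np.uint8)
mask_double_prime = upsample_mask(mask_prime, (4, 4))
# mask_double_prime == [[0, 0, 1, 1],
#                       [0, 0, 1, 1],
#                       [1, 1, 0, 0],
#                       [1, 1, 0, 0]]
```

Nearest-neighbor is a natural choice here because interpolation schemes that average labels (e.g. bilinear) would introduce fractional values that are not valid area category marks.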
  • the present application provides an electronic device.
  • the electronic device includes a memory and one or more processors; the memory is coupled to the processor; wherein computer program code is stored in the memory, and the computer program code includes computer instructions.
  • when the computer instructions are executed by the processor, the electronic device is caused to execute the method in the above first or second aspect and any possible implementation thereof.
  • the electronic device includes one or more cameras, and the cameras are used to collect original image information.
  • the electronic device includes a communication module, which is used to exchange data with other devices to obtain the original image and the threshold segmentation result corresponding to the original image.
  • the present application provides a computer-readable storage medium that includes computer instructions.
  • when the computer instructions are run on a computer, the computer is caused to execute the method in the above-mentioned first or second aspect and any of its possible implementations.
  • Figure 1 is a rendering of an original image in the embodiment of the present application.
  • Figure 2 is a rendering of an extended dynamic range diagram in an embodiment of the present application.
  • Figure 3 is a schematic diagram of an electronic device in an embodiment of the present application.
  • Figure 4 is a flow chart of a method for extending the dynamic range of an image provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of a threshold segmentation effect in an embodiment of the present application.
  • Figure 6 is a schematic diagram of a threshold segmentation result in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another threshold segmentation result in an embodiment of the present application.
  • Figure 8 is a flow chart of another method for extending the dynamic range of an image provided by an embodiment of the present application.
  • Figure 9 is a flow chart of another method for extending the dynamic range of an image provided by an embodiment of the present application.
  • Figure 10 is a rendering of another method for extending the dynamic range of an image provided by an embodiment of the present application.
  • Figure 11 is a tone mapping function curve diagram in an embodiment of the present application.
  • Figure 12 is a rendering of a method for extending the dynamic range of an image provided by an embodiment of the present application.
  • Figure 13 is a schematic structural diagram of a chip system in an embodiment of the present application.
  • first and second are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the quantity of indicated technical features. Therefore, features defined as “first” and “second” may explicitly or implicitly include one or more of these features. In the description of this application, unless otherwise stated, “plurality” means two or more.
  • Dynamic Range is the ratio of the maximum and minimum values of a variable signal (such as sound or light).
  • a variable signal such as sound or light
  • the brightness difference of objects is very large.
  • the brightness value of the sun is about 1.5×10⁹ cd/m²
  • the brightness value of a fluorescent lamp is about 1.2×10⁴ cd/m² (candela per square meter)
  • the brightness value of moonlight is about 1000 cd/m²
  • the brightness value of a black-and-white TV screen is about 120 cd/m²
  • the brightness value of a color TV screen is about 80 cd/m².
  • these differences are very large, giving a very large dynamic range.
  • dynamic range refers to the range of scene brightness that the camera can capture. It can be expressed as the ratio between the highest and lowest brightness the camera can record in a single frame, that is, the ratio of the maximum brightness value to the minimum brightness value of the pixels in the image.
  • High Dynamic Range (HDR) imaging, compared with Low Dynamic Range (LDR) imaging or Standard Dynamic Range (SDR) imaging, has a larger dynamic range (i.e., a greater difference between light and shade), can more accurately reflect the brightness variation from direct sunlight to the darkest shadows in the real world, has a wider color range and richer image details, and can better reflect the visual effects of the real environment.
  • the R, G, and B values of each pixel of an LDR image or SDR image are usually encoded using 8 bits, and the brightness value range they represent is only 0-255.
  • HDR images use more data bits than LDR or SDR images to encode each color channel, so they can represent a larger dynamic range.
  • their displayed color accuracy is therefore greater, and they can better express the gradients and gradations of light and color in the image.
  • images produced by traditional imaging devices usually have only a very limited dynamic range.
  • Table 1 shows the approximate dynamic range of several common imaging equipment.
  • the dynamic range of the real world is approximately 100000:1. Therefore, it is necessary to expand the dynamic range of images produced by traditional imaging devices so that the images can better represent the real environment.
  • CRT cathode ray tube
  • LCD liquid crystal display
  • embodiments of the present application provide a method for extending the dynamic range of images.
  • This method can be applied to electronic devices.
  • the method includes: obtaining original image information. Based on the brightness level information and brightness threshold in the original image information, the threshold segmentation result corresponding to the original image is obtained.
  • the threshold segmentation result includes the regional category label corresponding to each pixel in the original image.
  • the area category mark is used to indicate the category of the area corresponding to each pixel in the original image.
  • the areas include extended dynamic range areas and standard dynamic range areas.
  • the original image and the threshold segmentation result corresponding to the original image are associated and saved for subsequent dynamic range expansion of the extended dynamic range area in the original image based on the threshold segmentation result corresponding to the original image, thereby generating an extended dynamic range map corresponding to the original image.
  • the dynamic range of the extended dynamic range image is greater than the dynamic range of the original image.
  • the extended dynamic range area in the original image can be dynamically expanded according to the threshold segmentation result corresponding to the original image.
  • the dynamic range of the original image is expanded to make the dynamic range larger, and an extended dynamic range image with a larger dynamic range than the original image is obtained.
  • the extended dynamic range map can better express the gradients and levels of light and color in the image compared to the original image.
  • the original image itself can be displayed on a display that only supports low dynamic range display.
  • the extended dynamic range image corresponding to the original image can be displayed, thereby giving users a visual effect closer to the real world and improving the user experience. Therefore, the method provided by the embodiments of the present application has strong adaptability.
  • the stored threshold segmentation results corresponding to the original image only include the regional category labels corresponding to each pixel in the original image, so the amount of data is small and the storage space required is small.
  • the electronic device in the embodiment of the present application may be a mobile phone, a tablet computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, or other devices.
  • the embodiments of this application do not place special restrictions on the specific form of the electronic equipment.
  • the hardware structure of the electronic device (such as electronic device 300) is introduced.
  • the electronic device 300 may include: a processor 310, an external memory interface 320, an internal memory 321, a universal serial bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, a headphone interface 370D, a sensor module 380, a button 390, a motor 391, an indicator 392, a camera 393, a display screen 394, a subscriber identification module (SIM) card interface 395, etc.
  • the sensor module 380 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and other sensors.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 300 .
  • the electronic device 300 may include more or fewer components than shown, or some components may be combined, or some components may be separated, or may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 310 may include one or more processing units.
  • the processor 310 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 300 .
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 310 may also be provided with a memory for storing instructions and data.
  • the memory in processor 310 is cache memory. This memory may hold instructions or data that have been recently used or recycled by processor 310 . If processor 310 needs to use the instructions or data again, it can be called directly from the memory. Repeated access is avoided and the waiting time of the processor 310 is reduced, thus improving the efficiency of the system.
  • processor 310 may include one or more interfaces.
  • the interface connection relationships between the modules illustrated in this embodiment are only schematic illustrations and do not constitute a structural limitation of the electronic device 300 .
  • the electronic device 300 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charge management module 340 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger. While the charging management module 340 charges the battery 342, it can also provide power to the electronic device through the power management module 341.
  • the power management module 341 is used to connect the battery 342, the charging management module 340 and the processor 310.
  • the power management module 341 receives input from the battery 342 and/or the charging management module 340, and supplies power to the processor 310, internal memory 321, external memory, display screen 394, camera 393, wireless communication module 360, etc.
  • the wireless communication function of the electronic device 300 can be implemented through the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 300 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the mobile communication module 350 can provide wireless communication solutions including 2G/3G/4G/5G applied to the electronic device 300 .
  • the mobile communication module 350 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 350 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 350 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • the wireless communication module 360 can provide wireless communication solutions applied to the electronic device 300, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc.
  • the antenna 1 of the electronic device 300 is coupled to the mobile communication module 350, and the antenna 2 is coupled to the wireless communication module 360, so that the electronic device 300 can communicate with the network and other devices through wireless communication technology.
  • the electronic device 300 implements display functions through a GPU, a display screen 394, an application processor, and the like.
  • the GPU is a microprocessor for image processing and is connected to the display screen 394 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 310 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 394 is used to display images, videos, etc.
  • the display screen 394 includes a display panel.
  • display screen 394 may be a touch screen.
  • the electronic device 300 can implement the shooting function through an ISP, a camera 393, a video codec, a GPU, a display screen 394, and an application processor. Among them, one or more cameras 393 can be provided.
  • the external memory interface 320 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 300.
  • the external memory card communicates with the processor 310 through the external memory interface 320 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 321 may be used to store computer executable program code, which includes instructions.
  • the processor 310 executes instructions stored in the internal memory 321 to execute various functional applications and data processing of the electronic device 300 .
  • the processor 310 can execute instructions stored in the internal memory 321, and the internal memory 321 can include a program storage area and a data storage area.
  • the stored program area can store an operating system, at least one application program required for a function (such as a sound playback function, an image playback function, etc.).
  • the storage data area may store data created during use of the electronic device 300 (such as audio data, phone book, etc.).
  • the electronic device 300 can implement audio functions through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the headphone interface 370D, and the application processor. Such as music playback, recording, etc.
  • Touch sensor also called “touch panel”.
  • the touch sensor can be disposed on the display screen 394, and the touch sensor and the display screen 394 form a touch screen, which is also called a "touch screen”. Touch sensors are used to detect touches on or near them.
  • the touch sensor can pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 394.
  • the touch sensor may also be disposed on the surface of the electronic device 300 at a location different from that of the display screen 394 .
  • the electronic device 300 can detect a touch operation input by the user on the touch screen through a touch sensor, and collect one or more of the touch position of the touch operation on the touch screen, the touch time, and the like. In some embodiments, the electronic device 300 can determine the touch location of the touch operation on the touch screen through a combination of a touch sensor and a pressure sensor.
  • the buttons 390 include a power button, a volume button, etc.
  • Key 390 may be a mechanical key. It can also be a touch button.
  • the electronic device 300 may receive key input and generate key signal input related to user settings and function control of the electronic device 300 .
  • Motor 391 can produce vibration prompts.
  • Motor 391 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
  • Acting on touch operations in different areas of the display screen 394, the motor 391 can also correspond to different vibration feedback effects.
  • Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
  • the indicator 392 may be an indicator light, which may be used to indicate charging status, power changes, or may be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 395 is used to connect a SIM card. The SIM card can be inserted into the SIM card interface 395 or pulled out from the SIM card interface 395 to achieve contact with and separation from the electronic device 300.
  • the electronic device 300 may support 1 or M SIM card interfaces, where M is a positive integer greater than 1.
  • SIM card interface 395 can support Nano SIM card, Micro SIM card, SIM card, etc.
  • the gyro sensor may be a three-axis gyroscope, used to track state changes of the electronic device 300 in 6 directions.
  • the acceleration sensor is used to detect the movement speed, direction and displacement of the electronic device 300 .
  • the electronic device 300 can detect the status and position of the electronic device 300 through a gyroscope sensor and an acceleration sensor. When the state and position of the electronic device 300 change significantly compared with the initial position and the initial state, the electronic device 300 can remind the user to correct the state and position of the electronic device 300 in real time on the display screen 394 .
  • the embodiment of the present application provides a method for extending the dynamic range of an image, which method can be applied to the above-mentioned electronic device 300.
  • the following takes the electronic device 300 being the mobile phone shown in FIG. 3 as an example to introduce the method of the embodiment of the present application.
  • an embodiment of the present application provides a method for extending the dynamic range of an image, which includes the following steps:
  • the original image information includes the original image and brightness level information. Among them, the brightness level information is used to indicate the brightness value of the pixel in the original image.
  • the threshold segmentation result includes the regional category label corresponding to each pixel in the original image.
  • the area category mark is used to indicate the category of the area corresponding to each pixel in the original image.
  • Areas include standard dynamic range areas and extended dynamic range areas. The brightness value corresponding to the pixel in the standard dynamic range area is lower than the brightness threshold, and the brightness value corresponding to the pixel in the extended dynamic range area is higher than or equal to the brightness threshold.
  • the above solution obtains the threshold segmentation result corresponding to the original image based on the brightness level information contained in the original image information, and associates and saves the original image and the threshold segmentation result corresponding to the original image.
  • the threshold segmentation result includes the regional category label corresponding to each pixel in the original image.
  • the area category mark can indicate whether each pixel in the original image corresponds to a standard dynamic range area (an area with a brightness value lower than the brightness threshold) or an extended dynamic range area (an area with a brightness value higher than or equal to the brightness threshold).
  • the extended dynamic range map corresponding to the original image can be generated according to the threshold segmentation result corresponding to the original image, so as to expand the dynamic range of the original image and obtain an extended dynamic range image with a larger dynamic range than the original image, which can better express the gradients and levels of light and color in the image, thereby giving users a visual effect closer to the real world.
  • the stored threshold segmentation results corresponding to the original image only include the regional category labels corresponding to each pixel in the original image, so the amount of data is small and the storage space required is small.
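  • The flow above (obtain the original image information, obtain the threshold segmentation result from the brightness level information and the brightness threshold, and associate and save the original image with that result) can be sketched as follows. This is a minimal Python illustration; the nested-list image format, the threshold value 128, and the dictionary store are assumptions for illustration, not the on-device representation.

```python
def threshold_segment(brightness, threshold):
    # Label each pixel with an area category mark: 0 = standard dynamic range
    # area (brightness below the threshold), 1 = extended dynamic range area.
    return [[1 if v >= threshold else 0 for v in row] for row in brightness]

def save_associated(store, image_id, original_image, segmentation):
    # Associate and save the original image together with its
    # threshold segmentation result.
    store[image_id] = {"image": original_image, "segmentation": segmentation}

# Original image information = original image + brightness level information.
original_image = [[10, 200], [220, 30]]   # hypothetical pixel values
brightness = [[12, 210], [240, 25]]       # brightness value of each pixel
labels = threshold_segment(brightness, threshold=128)
store = {}
save_associated(store, "IMG_0001", original_image, labels)
print(labels)  # [[0, 1], [1, 0]]
```

The saved segmentation result contains only one small label per pixel, which is why the extra storage cost stays low.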
  • the above method further includes S104: performing dynamic range expansion on the extended dynamic range area in the original image, thereby generating an extended dynamic range map corresponding to the original image.
  • the dynamic range of the extended dynamic range image is greater than the dynamic range of the original image.
  • based on the threshold segmentation result corresponding to the original image, it can be determined whether each pixel in the original image corresponds to the standard dynamic range area or the extended dynamic range area, so that the extended dynamic range area in the original image can be dynamically expanded and the extended dynamic range map corresponding to the original image can be obtained. This expands the dynamic range of the original image; the resulting extended dynamic range map, with a larger dynamic range than the original image, can better express the gradients and levels of light and color in the image, thereby giving users a visual effect closer to the real world.
  • the original image and the threshold segmentation result corresponding to the original image are stored in the electronic device. Then, when the user views the original image on an electronic device that only supports low dynamic range display, the electronic device displays the original image itself to the user, and the original image itself is a low dynamic range LDR image or standard dynamic image supported by the electronic device. Range SDR images.
  • the electronic device can execute the method in this embodiment to dynamically expand the extended dynamic range area in the original image to achieve dynamic processing of the original image. Range expansion allows users to view the expanded dynamic range map corresponding to the original image that is closer to the real-world visual effect.
  • the first electronic device only supports low dynamic range display, while the second electronic device supports high dynamic range display.
  • the user opens a camera application (Application, App) on the first electronic device and uses the camera in the first electronic device to capture images.
  • the first electronic device can obtain the original image information, and can perform steps S101 to S103 in the above embodiment to process the captured image, and associate and save the original image and the threshold segmentation result corresponding to the original image.
  • the user exits the camera App on the first electronic device, opens the photo album on the first electronic device, and clicks to view the image just taken, and the first electronic device displays the corresponding original image to the user.
  • In this case, the second electronic device can perform step S104 in the above embodiment, that is, perform dynamic range expansion on the extended dynamic range area in the original image based on the threshold segmentation result corresponding to the original image, so as to obtain the extended dynamic range map corresponding to the original image. Therefore, when the user views the corresponding image on the second electronic device, the second electronic device can display the corresponding extended dynamic range map to the user, thereby providing the user with a better visual experience.
  • the images captured by its camera have only a very limited dynamic range.
  • the captured images are LDR images or SDR images.
  • the user opens the camera App on the electronic device and uses the camera in the electronic device to capture images.
  • the electronic device can obtain the original image information, perform steps S101 to S103 in the above embodiment to process the captured image, and associate and save the original image and the threshold segmentation result corresponding to the original image.
  • the user exits the camera app on the electronic device, opens the photo album on the electronic device, and clicks to view the image just taken.
  • the electronic device can perform step S104 in the above embodiment to determine whether each pixel in the original image corresponds to the standard dynamic range area or the extended dynamic range area based on the threshold segmentation result corresponding to the original image, thereby performing the original image Perform dynamic range expansion in the extended dynamic range area in the image to obtain the extended dynamic range map corresponding to the original image.
  • the electronic device can display the corresponding extended dynamic range map to the user, thereby providing the user with a better visual experience.
  • the above brightness level information is an exposure map collected in the same scene as the original image.
  • the subject in this exposure map is the same as that in the original image.
  • the brightness value of each pixel in the exposure map indicates the brightness value of the corresponding pixel in the original image.
  • the above obtaining of the threshold segmentation result corresponding to the original image based on the brightness level information and brightness threshold included in the original image information may specifically be: obtaining the threshold segmentation result corresponding to the original image based on the brightness value of each pixel in the exposure map and the brightness threshold.
  • the corresponding schematic diagram is shown in Figure 5.
  • the above exposure image can be one exposure image collected by the camera in the same scene of the original image at a certain exposure value, or it can be multiple exposure images collected by the camera in the same scene of the original image at multiple different exposure values.
  • the exposure picture may also be one exposure picture synthesized from the above-mentioned multiple exposure pictures.
  • the above-mentioned exposure map can be captured by one or more cameras. It should be noted that, if necessary, the original image and the exposure map need to be registered so that the pixels in the two images correspond one to one, that is, pixels at the same coordinates in the two images correspond to the same real location in the environment.
  • the following method can be used to obtain the threshold segmentation result corresponding to the original image: according to the brightness value of each pixel in the exposure map, threshold segmentation is performed on the exposure map using a single-level thresholding method to obtain the threshold segmentation result.
  • the threshold segmentation result includes the area category mark corresponding to each pixel in the original image; and the area corresponding to the pixel includes a standard dynamic range area and an extended dynamic range area.
  • the threshold segmentation result can be represented by the two-dimensional matrix shown in Figure 6.
  • FIG. 6 is a schematic diagram of a threshold segmentation result in an embodiment of the present application. As shown in Figure 6, for the letter "N" and its surrounding pixel values in the exposure map on the left, the corresponding threshold segmentation result can be expressed as the matrix on the right.
  • the threshold segmentation result can also be expressed as a data sequence.
  • for example, for the letter "N" and its surrounding pixel values in the exposure map on the left in Figure 6, the corresponding threshold segmentation result can be expressed as the data sequence 010100111001010.
  • the following method can also be used to obtain the threshold segmentation result corresponding to the original image: according to the brightness value of each pixel in the exposure map, threshold segmentation is performed on the exposure map using a multi-level thresholding method to obtain the threshold segmentation result.
  • the threshold segmentation result includes the area category mark corresponding to each pixel in the original image; the area category includes a standard dynamic range area and an extended dynamic range area; among which, the extended dynamic range area is divided into multiple levels of dynamic range areas.
  • threshold segmentation of the exposure map through a multi-level thresholding method can obtain a segmentation result containing multiple regions (a standard dynamic range region and multiple levels of extended dynamic range regions), so the segmentation is more fine-grained.
  • K thresholds can be used for segmentation to obtain a standard dynamic range area and K levels of extended dynamic range areas.
  • different levels of expansion coefficients can be used for tone mapping in different areas, thereby obtaining a richer expanded dynamic range map of brightness gradients and brightness levels.
  • the threshold segmentation result can be represented by a two-dimensional matrix. The elements in the two-dimensional matrix have multiple possible values, corresponding to multiple regions respectively.
  • FIG. 7 is a schematic diagram of a threshold segmentation result in an embodiment of the present application. As shown in Figure 7, for the letter "N" and its surrounding pixel values in the exposure map on the left, the corresponding threshold segmentation result can be expressed as the matrix on the right.
  • the threshold segmentation result can also be expressed as a data sequence.
  • for example, for the letter "N" and its surrounding pixel values in the exposure map on the left in Figure 7, the corresponding threshold segmentation result can be expressed as the data sequence 021200222002120.
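  • A multi-level thresholding step of this kind can be sketched in Python as follows. The brightness values and the two thresholds (128 and 192) are hypothetical, chosen only so that the flattened result reproduces the data sequence 021200222002120 discussed above.

```python
from bisect import bisect_right

def multi_level_segmentation(exposure, thresholds):
    # Label 0 = standard dynamic range area (below the smallest threshold);
    # labels 1..K index increasingly bright extended dynamic range areas.
    levels = sorted(thresholds)
    return [[bisect_right(levels, v) for v in row] for row in exposure]

# Hypothetical 3x5 patch of exposure-map brightness values.
exposure = [
    [40, 200, 150, 210,  60],
    [55, 220, 230, 240,  50],
    [45, 215, 160, 205,  70],
]
result = multi_level_segmentation(exposure, thresholds=[128, 192])
print("".join(str(c) for row in result for c in row))  # 021200222002120
```

`bisect_right` counts how many thresholds the brightness value reaches, which directly yields the level index.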
  • the above threshold can be determined based on the brightness value of each pixel point in the exposure map. Specifically, according to the brightness value of each pixel point in the exposure map, a corresponding brightness histogram is constructed. This brightness histogram can display the number of pixels in the exposure map when the brightness values are 0, 1, ..., 255.
  • the brightness histogram includes an x-axis and a y-axis, where the value on the x-axis represents the brightness value, and the values are 0, 1,..., 255, that is, from left to right, from pure black (brightness value is 0) to pure white (Brightness value is 255); the value on the y-axis represents the number of pixels.
  • y0 is the number of pixels with a brightness value x of 0 in the exposure map
  • y1 is the number of pixels with a brightness value x of 1 in the exposure map
  • yk is the number of pixels with a brightness value x of k in the exposure map
  • y255 is the number of pixels with a brightness value x of 255 in the exposure map
  • when the cumulative number of pixels, accumulated from the largest brightness value downwards, first exceeds a preset number at the brightness value k, the brightness value k is used as the threshold.
  • the preset number may be 80% of the total number of pixels in the exposure map.
  • for example, the preset number can be 800.
  • for example, suppose the brightness values in the exposure map are 0, 1, ..., k, ..., 255 and the corresponding numbers of pixels are y0, y1, ..., yk, ..., y255. The numbers of pixels are accumulated starting from the largest brightness value. If, when the accumulation reaches the brightness value 90 (that is, y255 + y254 + ... + y90), the accumulated total number of pixels first exceeds 800, then the brightness value 90 is used as the threshold.
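  • The threshold selection described above (accumulate pixel counts from the largest brightness value downwards until the running total first exceeds the preset number) can be sketched in Python as follows; the pixel distribution is hypothetical, arranged so that the result matches the example of threshold 90 with preset number 800.

```python
def select_threshold(brightness_values, preset_count, levels=256):
    # Build the brightness histogram y0..y255, then accumulate from the
    # brightest level downwards; the level at which the running total first
    # exceeds preset_count becomes the threshold.
    hist = [0] * levels
    for v in brightness_values:
        hist[v] += 1
    total = 0
    for k in range(levels - 1, -1, -1):
        total += hist[k]
        if total > preset_count:
            return k
    return 0

# Hypothetical exposure map: 900 dark pixels, 500 mid pixels, 350 bright ones.
pixels = [30] * 900 + [90] * 500 + [200] * 350
print(select_threshold(pixels, preset_count=800))  # 90
```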
  • this way of determining the threshold in the embodiment of the present application is applicable to both single-level threshold segmentation and multi-level threshold segmentation.
  • for single-level threshold segmentation, the preset number is set to one value, thereby obtaining a unique threshold, and thus two areas can be segmented, namely the standard dynamic range area and the extended dynamic range area.
  • for multi-level threshold segmentation, the preset number is set to multiple different values, so that multiple brightness thresholds can be obtained and multiple regions can be segmented. Among them, the area whose brightness value is less than the minimum threshold Tmin is regarded as the standard dynamic range area, the area whose brightness value is greater than or equal to Tmin is regarded as the extended dynamic range area, and the extended dynamic range area is further divided into multiple areas by the multiple brightness thresholds.
  • for example, the extended dynamic range area can be further divided into n areas D1, D2, ..., Dn.
  • if the brightness value L of a pixel is in the brightness value interval [Tmin, T1), the pixel is divided into area D1; if L is in [T1, T2), the pixel is divided into area D2; and so on, if L is in [Tn-1, Tn), the pixel is divided into area Dn.
  • the same or different tone mapping functions can be used to calculate the corresponding expansion coefficients for D1, D2, ..., Dn respectively, so that tone mapping can be performed with the corresponding expansion coefficients, thereby obtaining an extended dynamic range map with richer brightness gradients and brightness levels.
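  • Tone mapping with per-area expansion coefficients can be sketched in Python as follows. The simple multiplicative-gain model and the coefficient values (1.0, 1.5, 2.0) are assumptions for illustration; the embodiment does not fix a particular tone mapping function.

```python
def expand_dynamic_range(image, labels, coefficients):
    # Apply the expansion coefficient of each pixel's area to that pixel;
    # label 0 (standard dynamic range area) keeps coefficient 1.0, so only
    # the extended dynamic range areas are brightened.
    return [
        [pixel * coefficients[label] for pixel, label in zip(img_row, lab_row)]
        for img_row, lab_row in zip(image, labels)
    ]

image = [[100, 180], [220, 90]]
labels = [[0, 1], [2, 0]]            # 0 = SDR area; 1, 2 = extended areas D1, D2
gains = {0: 1.0, 1: 1.5, 2: 2.0}     # hypothetical expansion coefficients
print(expand_dynamic_range(image, labels, gains))  # [[100.0, 270.0], [440.0, 90.0]]
```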
  • the above threshold can be determined based on the brightness value of each pixel point in the exposure map.
  • the corresponding brightness histogram can be constructed based on the brightness value of each pixel point in the exposure map. This brightness histogram can display the number of pixels in the exposure map when the brightness values are 0, 1, ..., 255.
  • OTSU is used to determine the threshold for threshold segmentation of the exposure map.
  • the principle by which OTSU determines the threshold is as follows: for a given threshold T, the pixels of the image to be segmented are divided into foreground pixels and background pixels, where the ratio of the number of foreground pixels to the total number of pixels in the image to be segmented is p1 and the average brightness value of the foreground pixels is b1, while the ratio of the number of background pixels to the total number of pixels is p2 and the average brightness value of the background pixels is b2; the threshold T that maximizes the between-class variance g = p1 * p2 * (b1 - b2)^2 is selected.
  • OTSU is one of the most widely used image segmentation methods. This method is also called the maximum between-class variance threshold segmentation method. The criterion for selecting the segmentation threshold in this method is that the between-class variance of the image reaches a maximum or the within-class variance reaches a minimum. It should be understood that OTSU can be extended from single-threshold segmentation to multi-threshold segmentation. At the same time, intelligent optimization algorithms can be used to search for the multiple thresholds to obtain the optimal thresholds, which greatly speeds up the algorithm.
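  • A straightforward single-threshold OTSU implementation following the criterion above (choose the threshold that maximizes the between-class variance g = p1 * p2 * (b1 - b2)^2) might look like this in Python; the sample pixel distribution is hypothetical.

```python
def otsu_threshold(pixels, levels=256):
    # Exhaustively evaluate every candidate threshold t and keep the one
    # with the largest between-class variance.
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    n = len(pixels)
    best_t, best_g = 0, -1.0
    for t in range(1, levels):
        n1 = sum(hist[:t])                                    # pixels below t
        n2 = n - n1                                           # pixels >= t
        if n1 == 0 or n2 == 0:
            continue
        b1 = sum(k * hist[k] for k in range(t)) / n1          # mean of class 1
        b2 = sum(k * hist[k] for k in range(t, levels)) / n2  # mean of class 2
        p1, p2 = n1 / n, n2 / n
        g = p1 * p2 * (b1 - b2) ** 2                          # between-class variance
        if g > best_g:
            best_g, best_t = g, t
    return best_t

# Two well-separated brightness clusters; the threshold should fall between them.
pixels = [20] * 500 + [30] * 500 + [200] * 300 + [210] * 200
t = otsu_threshold(pixels)
print(t)
```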
  • in other embodiments, the above threshold can be determined based on the brightness value of each pixel in the exposure map by means of a variance coefficient, where the variance coefficient parameterizes the threshold selection criterion so that each value of the variance coefficient determines one threshold.
  • this way of determining the threshold in the embodiment of the present application is applicable to both single-level threshold segmentation and multi-level threshold segmentation.
  • for single-level threshold segmentation, the variance coefficient is set to one value, thereby obtaining one brightness threshold. From this, two areas can be segmented, namely the standard dynamic range area and the extended dynamic range area.
  • for multi-level threshold segmentation, the variance coefficient is set to multiple different values, so that multiple different thresholds can be obtained and multiple regions can be segmented. Among them, the area whose brightness value is less than the minimum threshold is regarded as the standard dynamic range area, the area whose brightness value is greater than or equal to the minimum threshold is regarded as the extended dynamic range area, and the extended dynamic range area is further divided into multiple areas, such as D1, D2, ..., Dn, by the multiple brightness thresholds.
  • different expansion coefficients are used for tone mapping D1, D2, ..., Dn, so that an extended dynamic range map with richer brightness gradations and levels can be obtained.
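The multi-level segmentation described above can be sketched as follows. This is an illustrative example only: the function name `segment_by_levels` and the β values are hypothetical, and the thresholds follow the mean-plus-coefficient-times-standard-deviation rule described in this document.

```python
import numpy as np

def segment_by_levels(brightness, betas=(0.5, 1.0, 1.5)):
    """Multi-level threshold segmentation sketch.
    Thresholds T_k = M + beta_k * STD; label 0 = standard dynamic range,
    labels 1..n = extended dynamic range levels D1..Dn."""
    m, std = brightness.mean(), brightness.std()
    thresholds = sorted(m + b * std for b in betas)
    labels = np.zeros(brightness.shape, dtype=np.uint8)
    for k, t in enumerate(thresholds, start=1):
        labels[brightness >= t] = k   # pixels at/above T_k belong to level k
    return labels, thresholds
```

Pixels below the smallest threshold keep label 0 (the standard dynamic range area); each successive threshold promotes the brightest pixels into a higher extended level.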
  • the threshold segmentation result is a data sequence.
  • the regional category mark corresponding to each pixel in the original image is represented by the value of the corresponding element in the data sequence.
  • the values of elements in the data sequence include 0 and 1.
  • the pixel corresponding to the element with a value of 0 in the data sequence in the original image corresponds to the standard dynamic range area.
  • the element with a value of 1 in the data sequence corresponds to the pixel point in the original image corresponding to the extended dynamic range area.
  • the elements in this data sequence can correspond one-to-one to the pixels in the original image. For example, suppose the size of the original image is H*W, where H and W represent the height and width of the original image respectively.
  • the number of pixels in the original image is H*W.
  • its corresponding area category label is stored in the data sequence in order from top to bottom and from left to right.
  • the number of elements in the data sequence can be H*W.
  • the value of the ((i-1)*W+j)-th element in the data sequence represents the region category label of the pixel in the i-th row and j-th column of the original image.
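Assuming the standard row-major ordering described above (top to bottom, left to right, with 1-based rows and columns), the mapping between a pixel position and its element in the data sequence can be sketched as follows; `seq_index` is a hypothetical helper name.

```python
def seq_index(i, j, W):
    """1-based (row i, column j) -> 1-based position in the row-major
    data sequence of an H*W image (top-to-bottom, left-to-right)."""
    return (i - 1) * W + j
```

For a 3*4 image, the first pixel of row 2 is element 5, and the last pixel (row 3, column 4) is element 12, matching the H*W element count.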
  • the threshold segmentation result is a two-dimensional matrix.
  • the regional category mark corresponding to each pixel in the original image is represented by the value of the corresponding element in the two-dimensional matrix.
  • the values of the elements in the two-dimensional matrix include 0 and 1.
  • the pixel corresponding to the element with a value of 0 in the two-dimensional matrix in the original image corresponds to the standard dynamic range area.
  • the element with a value of 1 in the two-dimensional matrix corresponds to the extended dynamic range area at the corresponding pixel point in the original image.
  • the elements in the two-dimensional matrix can correspond one-to-one to the pixels in the original image. For example, suppose the size of the original image is H*W, where H and W represent the height and width of the original image respectively.
  • the number of pixels in the original image is H*W.
  • the size of the two-dimensional matrix can also be H*W.
  • the obtained area categories are only the standard dynamic range area and the extended dynamic range area, so the corresponding area category labels have only two values. Therefore, the obtained threshold segmentation result can be represented by a data sequence or a two-dimensional matrix, reducing the storage space the threshold segmentation result occupies.
  • before the original image and its corresponding threshold segmentation result are stored in association, the method further includes: performing downsampling processing on the threshold segmentation result corresponding to the original image. Assume the threshold segmentation result corresponding to the original image is I; downsampling I yields the threshold segmentation result I′. Accordingly, storing the original image and its corresponding threshold segmentation result in association includes: storing the original image and the downsampled threshold segmentation result I′ in association.
  • downsampling the threshold segmentation result corresponding to the original image can effectively reduce its data size, and storing it after downsampling reduces the storage space it occupies. Meanwhile, when the downsampled threshold segmentation result is later used to expand the dynamic range of the original image, it can be restored to the same size as the original image through upsampling, thereby determining the expansion coefficient corresponding to each pixel of the original image.
  • downsampling the threshold segmentation result I corresponding to the original image includes: using a sampling unit to downsample the threshold segmentation result I.
  • a sampling unit with a size of 2*2, 3*3, 4*4 or another size can be used to downsample the threshold segmentation result I, so that the size of the downsampled result is 1/4, 1/9, 1/16 or another proportion of the size of the threshold segmentation result I.
  • using a sampling unit with a size of 2*2 to downsample the threshold segmentation result I means that the 2*2 elements in the threshold segmentation result I are represented by one element in the corresponding threshold segmentation result I',
  • the value of this element in the threshold segmentation result I′ can be the average value of the corresponding 2*2 elements in the threshold segmentation result I, or it can be the value of any one of the corresponding 2*2 elements in the threshold segmentation result I.
  • using a sampling unit with a size of 3*3 to downsample the threshold segmentation result I means that the 3*3 elements in the threshold segmentation result I are represented by one element in the corresponding threshold segmentation result I′.
  • the value of this element in the threshold segmentation result I′ can be the average value of the corresponding 3*3 elements in the threshold segmentation result I, or the value of any one of those 3*3 elements.
  • using a sampling unit with a size of 4*4 to downsample the threshold segmentation result I means that the 4*4 elements in the threshold segmentation result I are represented by one element in the corresponding threshold segmentation result I′.
  • the value of this element in the threshold segmentation result I′ can be the average value of the corresponding 4*4 elements in the threshold segmentation result I, or the value of any one of those 4*4 elements.
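A minimal sketch of this block downsampling, assuming NumPy and dimensions divisible by the sampling-unit size s; the function name and the rounding of the block average back to a 0/1 label are illustrative choices, not from the source.

```python
import numpy as np

def downsample_labels(seg, s=2, mode="average"):
    """Shrink a binary threshold-segmentation matrix by an s*s sampling unit.
    Each s*s block becomes one element: its mean rounded back to 0/1,
    or a single representative element, per the two options described above."""
    H, W = seg.shape
    assert H % s == 0 and W % s == 0, "sketch assumes dimensions divisible by s"
    blocks = seg.reshape(H // s, s, W // s, s)
    if mode == "average":
        return (blocks.mean(axis=(1, 3)) >= 0.5).astype(seg.dtype)
    return blocks[:, 0, :, 0]   # "any one element": here the top-left of each block
```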
  • the process of downsampling the threshold segmentation result using sampling units of other sizes is analogous and will not be listed here one by one. It should be understood that the above embodiment merely illustrates the method of downsampling the threshold segmentation result corresponding to the original image and the size of the threshold segmentation result; this illustration does not limit the present application.
  • other methods can also be used to downsample the threshold segmentation results corresponding to the original image, and the threshold segmentation results can also be of other sizes. This application does not impose specific restrictions on this.
  • when the threshold segmentation result is a data sequence, it can also be expanded into the corresponding binary matrix according to the correspondence between its elements and the pixels in the original image, then downsampled using the above method and stored.
  • the above-mentioned threshold segmentation result corresponding to the original image will be used to dynamically expand the extended dynamic range area in the original image to obtain the extended dynamic range map.
  • the following method is used: upsample the threshold segmentation result I′ to obtain the threshold segmentation result I″.
  • the extended dynamic range area in the original image is dynamic range expanded to obtain an extended dynamic range map.
  • the obtained threshold segmentation result I″ has the same number of elements as the number of pixels in the original image, and the elements in the threshold segmentation result I″ correspond one-to-one to the pixels in the original image.
  • the threshold segmentation result can be used to determine the expansion coefficient corresponding to each pixel in the original image, thereby dynamically expanding the original image.
  • take the threshold segmentation result as a two-dimensional matrix as an example. Suppose the size of the original image is H*W, where H and W represent the height and width of the original image respectively, that is, there are H pixels in the height direction and W pixels in the width direction of the original image.
  • the number of pixels the image has is H*W.
  • the size of the corresponding threshold segmentation result I′ obtained after downsampling processing is H′*W′, where H′ and W′ respectively represent the number of rows and columns of the threshold segmentation result I′, that is, the threshold segmentation result I′ It includes H′ rows, and the threshold segmentation result I′ includes W′ columns.
  • the number of elements of the threshold segmentation result I′ is H′*W′, where H′ ⁇ H and W′ ⁇ W. In this way, the space occupied by threshold segmentation results can be reduced when they are stored.
  • the threshold segmentation result I′ can be upsampled first, so that the size of the upsampled threshold segmentation result I″ is restored to H*W, that is, the threshold segmentation result I″ includes H rows and W columns, and the number of elements of the threshold segmentation result I″ is H*W.
  • the threshold segmentation result I″ has the same size as the original image, and its elements correspond one-to-one to the pixels in the original image; that is, the threshold segmentation result I″ can be used to determine the expansion coefficient corresponding to each pixel in the original image, thereby dynamically expanding the original image.
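One simple way to restore a downsampled segmentation matrix to the original H*W size is nearest-neighbor repetition, sketched below with NumPy; this is only one possible upsampling choice, and the function name is hypothetical.

```python
import numpy as np

def upsample_labels(seg_small, s=2):
    """Restore a downsampled segmentation matrix to original size by
    repeating each element into an s*s block (nearest-neighbor upsampling)."""
    return np.repeat(np.repeat(seg_small, s, axis=0), s, axis=1)
```

After upsampling, each element again corresponds to exactly one pixel of the original image, so the per-pixel expansion coefficients can be looked up directly.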
  • the original image and the threshold segmentation result corresponding to the original image are associated and saved, and they can be stored in association in a common image format, such as the Portable Network Graphics (PNG) format, the Joint Photographic Experts Group (JPEG) format, and other formats.
  • for these image formats, according to their corresponding encoding rules (the encoding rules for PNG-format and JPEG-format files), the original image and its corresponding threshold segmentation result can be stored in one file.
  • the original image and the threshold segmentation result corresponding to the original image can be stored in the same file, and then tone mapping (dynamic range expansion) is performed on the original image.
  • the corresponding threshold segmentation results can be efficiently found, so that the expansion coefficient of the original image for tone mapping can be quickly determined based on the threshold segmentation results, thereby quickly expanding the dynamic range of the original image and ensuring real-time image processing.
  • an embodiment of the present application also provides a method for extending the dynamic range of an image, including:
  • the threshold segmentation result includes the regional category label corresponding to each pixel in the original image.
  • the areas corresponding to pixels include extended dynamic range areas and standard dynamic range areas.
  • the brightness value of the pixel in the extended dynamic range area is higher than or equal to the brightness threshold, and the brightness value of the pixel in the standard dynamic range area is lower than the brightness threshold.
  • the embodiment of the present application performs dynamic range expansion on the extended dynamic range area of the original image based on the area to which each pixel belongs, that is, whether it corresponds to the standard dynamic range area or the extended dynamic range area.
  • the dynamic range of the original image is thereby expanded, yielding an extended dynamic range map with a larger dynamic range than the original image, which better expresses the gradations and levels of light and color in the image and brings users visual effects closer to the real world.
  • dynamic range expansion is performed on the extended dynamic range area in the original image to obtain the extended dynamic range map.
  • according to the threshold segmentation result corresponding to the original image, determine whether each pixel in the original image corresponds to the standard dynamic range area or the extended dynamic range area, and tone map the R, G, B values of each pixel using the corresponding expansion coefficient to obtain the extended dynamic range map corresponding to the original image, including:
  • the threshold segmentation result corresponding to the original image is represented by a data sequence; elements with value 0 correspond to pixels in the standard dynamic range area, and elements with value 1 correspond to pixels in the extended dynamic range area.
  • if the value of the element corresponding to pixel point P in the data sequence is 0, pixel point P corresponds to the standard dynamic range area; if the value is 1, pixel point P corresponds to the extended dynamic range area.
  • the R, G, B values of pixel point P are tone mapped using the corresponding expansion coefficient to obtain the R, G, B values of the corresponding pixel point P′ in the extended dynamic range map, specifically:
  • if pixel point P corresponds to the standard dynamic range area, the expansion coefficient αP corresponding to pixel point P is 1, and the R, G, B values of pixel point P are used directly as the R, G, B values of the corresponding pixel point P′ in the extended dynamic range map;
  • if pixel point P corresponds to the extended dynamic range area, the expansion coefficient αP corresponding to pixel point P is ≥ 1; the R, G, B values of pixel point P are multiplied by the expansion coefficient αP to obtain a set of new R, G, B values, which are used as the R, G, B values of the corresponding pixel point P′ in the extended dynamic range map.
  • determining the expansion coefficient αP ≥ 1 for a pixel point corresponding to the extended dynamic range area includes:
  • converting the R, G, B values of the pixel point into a grayscale value Gray;
  • obtaining the tone mapping function value F(Gray) for the grayscale value from the tone mapping function F(x), where F(x) is a monotonic non-decreasing function and F(x) ≥ 1;
  • using the tone mapping function value F(Gray) as the expansion coefficient αP of the pixel point.
  • the corresponding expansion coefficients are used to map the R, G, and B values of the pixel point P to obtain the extended dynamic range map.
  • for pixels corresponding to the standard dynamic range area, the corresponding expansion coefficient is taken as 1, that is, their R, G, B values remain unchanged and no brightening is performed. In other words, the brightness values of pixels corresponding to low-brightness areas in the original image remain unchanged.
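The per-pixel rule above can be sketched as follows. The weighted grayscale conversion matches the Gray formula given below; the function name is hypothetical, and F is a caller-supplied tone mapping function assumed monotonic non-decreasing with F(x) ≥ 1.

```python
import numpy as np

def expand_dynamic_range(rgb, labels, F):
    """Tone-map an H*W*3 image: pixels labeled 0 (standard dynamic range)
    keep their R, G, B values (coefficient 1); pixels labeled 1 (extended
    dynamic range) are scaled by F(Gray) computed from their grayscale value."""
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    alpha = np.where(labels == 1, F(gray), 1.0)   # expansion coefficient per pixel
    return rgb * alpha[..., None]
```

Because only the extended-range pixels are scaled, the image's maximum brightness grows while its minimum stays put, which is exactly what enlarges the max/min dynamic-range ratio.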
  • the dynamic range of the image is the ratio of the maximum brightness value to the minimum brightness value of the pixels in the image, that is, the maximum brightness value of the pixels in the image/the minimum brightness value of the pixels in the image.
  • the threshold segmentation result corresponding to the original image is divided into a standard dynamic range area and an extended dynamic range area.
  • the division of different areas reflects the brightness information of different areas of the original image.
  • the area to which a pixel in the original image corresponds reflects the brightness value of that pixel.
  • the method provided by the embodiment of the present application thus combines the brightness values and the R, G, B values of the pixels in the original image when tone mapping them.
  • the dynamic range expansion of the original image uses richer information and has a better dynamic range expansion effect.
  • if the expansion coefficient were determined for tone mapping based only on the brightness values or R, G, B values of the pixels in the original image, then because the pixels of a white paper or white wall background have high brightness or R, G, B values, this part would also receive a high expansion coefficient during tone mapping. If that area were brightened according to this expansion coefficient,
  • the background brightness of the white paper or white wall in the tone-mapped image would be extremely high, which does not match the visual effect in the real environment.
  • the method provided by the embodiment of the present application combines the brightness level information to divide the standard dynamic range area and the extended dynamic range area in the image.
  • since the white paper or white wall background has low brightness in the real environment, it will be divided into the standard dynamic range area in the original image and thus need not be brightened during subsequent tone mapping. Therefore, the brightness of the white paper or white wall background in the tone-mapped image will better match the visual effect in the real environment.
  • Gray = R*0.299 + G*0.587 + B*0.114.
  • Gray = (R+G+B)/3.
  • the preferred exposure map is a short-exposure image (underexposed image) or a medium-exposure image (normally exposed image). Since the lightness and darkness of a short- or medium-exposure image are more suitable, and in particular the information in the brighter parts of the image is not lost, better results can be obtained when pixels in the extended dynamic range area of the original image are subsequently brightened.
  • the brightness of a long-exposure image is too high, and information in the brighter parts of the image is lost. If the solution provided by the embodiment of the present application were implemented based on a long-exposure image, the dynamic range expansion effect would not be as good as that based on a short- or medium-exposure image.
  • the tone mapping function F(x) is a monotonic non-decreasing function; its range is 1 to N and its domain is 0 to 255.
  • the maximum brightness value of the pixels in the extended dynamic range image can be expanded to N times the maximum brightness value of the pixels in the original image.
  • the dynamic range of the extended dynamic range map can thus be expanded to N times the dynamic range of the original image and mapped to the dynamic range supported by the display. This makes full use of the display's performance and brings users visual effects as close to the real environment as possible.
  • when multi-level threshold segmentation is adopted, multiple extended dynamic range regions are obtained, such as D1, D2, ..., Dn.
  • the same or different tone mapping functions can be used to calculate the corresponding expansion coefficients for pixels in each level of the extended dynamic range area. For example, for each pixel P1 in the first-level extended dynamic range area D1, input the grayscale value Gray1 of the pixel into the first tone mapping function F1(x) to obtain the expansion coefficient F1(Gray1) corresponding to pixel P1; the value range of F1(Gray1) is 1 to N1.
  • the brightness value of the pixel in the first-level extended dynamic range area D 1 can be expanded by up to N 1 times.
  • input the grayscale value Gray2 of pixel P2 into the second tone mapping function F2(x) to obtain the expansion coefficient F2(Gray2) corresponding to pixel P2; the value range of F2(Gray2) is N1 to N2.
  • the brightness value of the pixel in the second-level extended dynamic range area D 2 can be expanded by up to N 2 times.
  • the value range of Fn(Grayn) is Nn-1 to Nn.
  • multiplying the R, G, B values of pixel Pn by the expansion coefficient Fn(Grayn), the brightness value of the pixels in the n-th level extended dynamic range area Dn can be expanded by at most Nn times.
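A hedged sketch of level-dependent expansion coefficients: the linear ramp and the bound values N0..Nn chosen here are illustrative, not specified by the source. The only properties taken from the text are monotonicity in the grayscale value and the value range [N(k-1), Nk] for level k.

```python
def level_expansion(gray, level, bounds=(1.0, 2.0, 4.0, 8.0)):
    """Expansion coefficient F_k(gray) for extended dynamic range level k:
    a monotonic ramp over [N_{k-1}, N_k], so level k can brighten a pixel
    by at most N_k times.  bounds = (N_0, N_1, ..., N_n), illustrative values."""
    lo, hi = bounds[level - 1], bounds[level]
    return lo + (hi - lo) * (gray / 255.0)   # monotonic in gray, range [lo, hi]
```

Any other monotonic non-decreasing ramp over the same interval would also satisfy the constraints stated above; the linear form is just the simplest.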
  • An embodiment of the present application also provides a device for extending the dynamic range of image display, including:
  • Original image information acquisition module used to obtain original image information.
  • the original image information includes the original image and brightness level information.
  • the brightness level information is used to indicate the brightness value of the pixel in the original image.
  • the threshold segmentation module is used to obtain the threshold segmentation result corresponding to the original image based on the brightness level information and brightness threshold included in the original image information.
  • the threshold segmentation result includes the regional category label corresponding to each pixel in the original image.
  • the areas corresponding to pixels include extended dynamic range areas and standard dynamic range areas. Among them, the brightness value corresponding to the pixel point in the extended dynamic range area is higher than or equal to the brightness threshold, and the brightness value corresponding to the pixel point in the standard dynamic range area is lower than the brightness threshold value.
  • the extended dynamic range area in the original image can be dynamically extended based on the threshold segmentation result corresponding to the original image, thereby obtaining an extended dynamic range map corresponding to the original image.
  • the dynamic range of the extended dynamic range image is greater than the dynamic range of the original image.
  • the storage module is used to associate and save the original image and the threshold segmentation results corresponding to the original image.
  • the embodiment of the present application also provides another device for extending the dynamic range of image display, including:
  • the data acquisition module is used to obtain the original image and the threshold segmentation results corresponding to the original image.
  • the threshold segmentation result includes the regional category label corresponding to each pixel in the original image. Areas include extended dynamic range areas and standard dynamic range areas. The brightness value of the pixel in the standard dynamic range area is lower than the brightness threshold, and the brightness value of the pixel in the extended dynamic range area is higher than or equal to the brightness threshold.
  • the dynamic range expansion module is used to dynamically expand the extended dynamic range area in the original image according to the threshold segmentation result corresponding to the original image, thereby generating an extended dynamic range map corresponding to the original image; the dynamic range of the extended dynamic range map is greater than The dynamic range of the original image.
  • the above device embodiments have the function of implementing the above method embodiments. This function can be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • Embodiments of the present application also provide an electronic device.
  • the electronic device includes a memory and one or more processors; the memory is coupled to the processor; wherein computer program code is stored in the memory, and the computer program code includes computer instructions.
  • when the computer instructions are executed by the processor, the electronic device is caused to perform the functions or steps performed by the mobile phone in the above method embodiments.
  • the structure of the electronic device may refer to the structure of the electronic device 300 shown in FIG. 3 .
  • the electronic device includes one or more cameras, and the cameras are used to collect original image information.
  • the communication module of the electronic device is used to transmit data with other devices to obtain the original image and the threshold segmentation result corresponding to the original image.
  • the chip system 1000 includes at least one processor 1001 and at least one interface circuit 1002 .
  • processor 1001 and the interface circuit 1002 can be interconnected through lines.
  • interface circuit 1002 may be used to receive signals from other devices, such as memory of an electronic device.
  • interface circuit 1002 may be used to send signals to other devices (eg, processor 1001).
  • the interface circuit 1002 can read instructions stored in the memory and send the instructions to the processor 1001.
  • the electronic device can be caused to perform various functions or steps performed by the mobile phone 100 in the above method embodiment.
  • the chip system may also include other discrete devices, which are not specifically limited in the embodiments of this application.
  • Embodiments of the present application also provide a computer storage medium.
  • the computer storage medium includes computer instructions.
  • when the computer instructions are run on the above-mentioned electronic device, the electronic device is caused to perform the functions or steps performed by the mobile phone in the above method embodiments.
  • Embodiments of the present application also provide a computer program product.
  • when the computer program product is run on a computer, it causes the computer to perform the functions or steps performed by the mobile phone in the above method embodiments.
  • the computer can be the above-mentioned mobile phone.
  • Each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium.
  • the medium includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of this application.
  • the aforementioned storage media include: flash memory, mobile hard disk, read-only memory, random access memory, magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

A method for extending the dynamic range of an image and an electronic device, relating to the field of image processing and capable of extending the dynamic range of an image. The method includes: obtaining original image information, which includes an original image and brightness level information. Based on the brightness level information and a brightness threshold included in the original image information, a threshold segmentation result corresponding to the original image is obtained. The threshold segmentation result includes a region category label for each pixel in the original image, indicating the category of the region to which the pixel corresponds. The regions include a standard dynamic range region and an extended dynamic range region. Pixels in the standard dynamic range region have brightness values below the brightness threshold; pixels in the extended dynamic range region have brightness values at or above the brightness threshold. According to the threshold segmentation result corresponding to the original image, dynamic range expansion is performed on the extended dynamic range region of the original image to obtain an extended dynamic range map corresponding to the original image.

Description

A method for extending the dynamic range of an image and an electronic device
This application claims priority to the Chinese patent application No. 202210827757.2, entitled "A method for extending the dynamic range of an image and an electronic device", filed with the China National Intellectual Property Administration on July 14, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing, and in particular to a method for extending the dynamic range of an image and an electronic device.
Background
The dynamic range of images captured by conventional imaging and display devices, and of the images such devices can display, is very limited, so the images users see on them differ considerably from the real scene, resulting in a poor user experience. In recent years, with continuous advances in display hardware and software, high dynamic range (HDR) displays have become increasingly common. It is therefore necessary to extend the dynamic range of images captured by conventional imaging and display devices, bringing users visual effects closer to the real scene and improving the user experience.
Summary
This application provides a method for extending the dynamic range of an image and an electronic device, which can perform dynamic range expansion on an original image and thereby improve the user's visual experience.
To achieve the above objective, this application adopts the following technical solutions:
In a first aspect, this application provides a method for extending the dynamic range of an image, the method including:
Obtaining original image information. The original image information includes an original image and brightness level information, where the brightness level information indicates the brightness values of the pixels in the original image.
Based on the brightness level information and a brightness threshold included in the original image information, obtaining a threshold segmentation result corresponding to the original image. The threshold segmentation result includes a region category label for each pixel in the original image, indicating the category of the region to which each pixel corresponds. The regions include a standard dynamic range region and an extended dynamic range region. Pixels in the standard dynamic range region have brightness values below the brightness threshold; pixels in the extended dynamic range region have brightness values at or above the brightness threshold.
Storing the original image and its corresponding threshold segmentation result in association, for subsequently performing dynamic range expansion on the extended dynamic range region of the original image based on the threshold segmentation result, thereby generating an extended dynamic range map corresponding to the original image, whose dynamic range is greater than that of the original image.
It should be understood that the above solution obtains the threshold segmentation result corresponding to the original image based on the brightness level information contained in the original image information, and stores the original image and the threshold segmentation result in association. The threshold segmentation result includes the region category label of each pixel in the original image, which indicates whether each pixel corresponds to the standard dynamic range region (where brightness values are below the brightness threshold) or to the extended dynamic range region (where brightness values are at or above the brightness threshold). In this way, when the original image (for example, an LDR or SDR image captured by a camera) needs dynamic range expansion, the extended dynamic range region can be expanded according to the stored threshold segmentation result, yielding an extended dynamic range map with a larger dynamic range than the original image. This better expresses the gradations and levels of light and color in the image and brings users visual effects closer to the real world. In addition, the stored threshold segmentation result contains only the region category label of each pixel of the original image, so its data volume is small and it requires little storage space.
In a possible implementation of the first aspect, the brightness level information is an exposure map captured in the same scene as the original image. The subject in the exposure map is the same as the subject in the original image, and the brightness value of each pixel in the exposure map indicates the brightness value of the corresponding pixel in the original image. Accordingly, obtaining the threshold segmentation result corresponding to the original image based on the brightness level information and the brightness threshold included in the original image information may be: obtaining the threshold segmentation result according to the brightness value of each pixel in the exposure map and the brightness threshold.
It should be understood that the exposure map in the above implementation is captured in the same scene as the original image and depicts the same subject. The brightness value of each pixel in the exposure map can therefore indicate the brightness value of the corresponding pixel in the original image, and the threshold segmentation result can be obtained from the brightness values in the exposure map and the brightness threshold. The subsequent dynamic range expansion of the extended dynamic range region then effectively combines the per-pixel brightness values of the exposure map, using richer information and achieving a better expansion effect.
In a possible implementation of the first aspect, obtaining the threshold segmentation result according to the brightness value of each pixel in the exposure map and the brightness threshold may be: determining a single brightness threshold from the brightness values of the pixels in the exposure map, and performing single-level threshold segmentation on the exposure map based on that threshold to obtain the threshold segmentation result. The threshold segmentation result includes the region category label of each pixel in the original image, which indicates the category of the region to which the pixel corresponds. In this implementation, the regions include one standard dynamic range region and one extended dynamic range region.
It should be understood that for an exposure map with a relatively simple target, single-level threshold segmentation can be used; the resulting regions are then only one standard dynamic range region and one extended dynamic range region, and the region category labels have only two possible values. When stored, the threshold segmentation result can thus be represented as a data sequence or a two-dimensional matrix in which each element has only two possible values (such as 0 and 1), so it occupies less storage space.
In a possible implementation of the first aspect, obtaining the threshold segmentation result according to the brightness value of each pixel in the exposure map and the brightness threshold may be: determining multiple brightness thresholds from the brightness values of the pixels in the exposure map, and performing multi-level threshold segmentation on the exposure map based on those thresholds to obtain the threshold segmentation result. The threshold segmentation result includes the region category label of each pixel in the original image, which indicates the category of the region to which the pixel corresponds. In this implementation, the regions include one standard dynamic range region and multiple levels of extended dynamic range regions.
It should be understood that for a more complex exposure map containing multiple targets, multi-level threshold segmentation can be used; the resulting regions then include one standard dynamic range region and multiple extended dynamic range regions at different levels, and the region category labels have more than two possible values. Applying multi-level threshold segmentation to complex exposure maps yields finer-grained regions, namely one standard dynamic range region and multiple levels of extended dynamic range regions, so that when corresponding expansion coefficients are later used for tone mapping, an image with richer brightness gradations and levels can be obtained.
In a possible implementation of the first aspect, the threshold may be determined from the brightness values of the pixels in the exposure map, specifically: constructing a brightness histogram from the brightness values of the pixels in the exposure map. The histogram shows the number of pixels at each brightness value 0, 1, ..., 255; its x-axis represents the brightness values 0, 1, ..., 255 and its y-axis represents pixel counts. From the histogram, the numbers of pixels with brightness values x = 0, 1, ..., k, ..., 255 are obtained and denoted y0, y1, ..., yk, ..., y255, where y0 is the number of pixels with brightness 0, y1 the number with brightness 1, yk the number with brightness k, and y255 the number with brightness 255. The pixel counts are then accumulated in order of decreasing brightness value x; when the accumulated total, upon adding the count yk for brightness value k, exceeds a preset number, the brightness value k is taken as the threshold.
It should be noted that this implementation can be used both to determine a single threshold in single-level threshold segmentation and to determine multiple brightness thresholds in multi-level threshold segmentation. For example, taking a single value for the preset number yields a single threshold, while taking multiple different values yields multiple brightness thresholds.
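The accumulation rule just described can be sketched as follows (illustrative only; the function name `threshold_by_count` and the preset number are placeholders, and NumPy is assumed).

```python
import numpy as np

def threshold_by_count(brightness, preset_count):
    """Accumulate pixel counts from brightness 255 downward; the brightness
    value k at which the running total first exceeds preset_count becomes
    the threshold."""
    hist = np.bincount(brightness.ravel(), minlength=256)
    total = 0
    for k in range(255, -1, -1):
        total += hist[k]
        if total > preset_count:
            return k
    return 0
```

Running it with several different preset counts yields the multiple thresholds needed for multi-level segmentation, as the preceding paragraph notes.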
In a possible implementation of the first aspect, the threshold may be determined from the brightness values of the pixels in the exposure map, specifically: constructing a brightness histogram from the brightness values of the pixels in the exposure map, which shows the number of pixels at each brightness value 0, 1, ..., 255, and then determining the threshold from the histogram using Otsu's method (the OTSU algorithm).
It should be noted that this implementation can be applied both in single-level threshold segmentation, where the OTSU algorithm determines a single threshold, and in multi-level threshold segmentation, where the OTSU algorithm determines multiple brightness thresholds.
In a possible implementation of the first aspect, the threshold may be determined from the brightness values of the pixels in the exposure map, specifically: computing the mean M and the standard deviation STD of the brightness values of all pixels in the exposure map, and calculating the brightness threshold T by the following formula:
T = M + β·STD
where β is the standard deviation coefficient.
It should be noted that this implementation can be used both to determine a single threshold in single-level threshold segmentation and to determine multiple brightness thresholds in multi-level threshold segmentation. For example, taking a single value for the standard deviation coefficient β yields a single threshold T, while taking multiple different values of β yields multiple brightness thresholds T.
In a possible implementation of the first aspect, the threshold segmentation result is a data sequence, and the region category label of each pixel in the original image is represented by the value of the corresponding element in the data sequence. The element values include 0 and 1: a pixel corresponding to an element with value 0 belongs to the standard dynamic range region, and a pixel corresponding to an element with value 1 belongs to the extended dynamic range region.
In a possible implementation of the first aspect, the threshold segmentation result is a two-dimensional matrix, and the region category label of each pixel in the original image is represented by the value of the corresponding element in the matrix. The element values include 0 and 1: a pixel corresponding to an element with value 0 belongs to the standard dynamic range region, and a pixel corresponding to an element with value 1 belongs to the extended dynamic range region.
It should be understood that for an exposure map with a relatively simple target, single-level threshold segmentation can be used; the resulting region categories are then only one standard dynamic range region and one extended dynamic range region, and the region category labels have only two possible values. The threshold segmentation result can therefore be represented as a data sequence or a two-dimensional matrix whose elements take only two values, reducing the storage space it occupies.
在第一方面的一种可能的实现方式中,在将原图及原图对应的阈值分割结果进行关联保存之前,该方法还包括:设所述原图对应的阈值分割结果为I,对原图对应的阈值分割结果I进行降采样处理,得到经过降采样处理后的所述原图对应的阈值分割结果I′。进而,上述将原图及原图对应的阈值分割结果进行关联保存,包括:将原图及经过降采样处理后的原图对应的阈值分割结果I′进行关联保存。
应理解,对原图对应的阈值分割结果进行降采样处理,可以有效减小阈值分割结果的数据大小。对原图对应的阈值分割结果进行降采样处理后再进行存储,可以减小该阈值分割结果所占用的存储空间。
第二方面,本申请提供一种扩展图像动态范围的方法,包括:
获取原图及原图对应的阈值分割结果。该阈值分割结果包括原图中的各个像素点对应的区域类别标记。区域包括扩展动态范围区域和标准动态范围区域。标准动态范围区域中像素点的亮度值低于亮度阈值,扩展动态范围区域中像素点的亮度值高于或等于亮度阈值。按照原图对应的阈值分割结果,对原图中的扩展动态范围区域进行动态范围扩展,从而生成原图对应的扩展动态范围图;该扩展动态范围图的动态范围大于原图的动态范围。
应理解,上述方案在对原图进行动态范围扩展时,根据原图对应的阈值分割结果,确定原图中的像素点对应标准动态范围区域还是扩展动态范围区域,对原图中的扩展动态范围区域进行动态范围扩展,从而可以实现对原图进行动态范围扩展,使其动态范围更大,得到相对于原图具有更大的动态范围的扩展动态范围图,可以更好的表现图像当中光线和颜色的渐变和层次,从而可以给用户带来更接近于真实世界的视觉效果。
在第二方面的一种可能的实现方式中,按照原图对应的阈值分割结果,对原图中的扩展动态范围区域进行动态范围扩展,从而生成原图对应的扩展动态范围图,包括对原图中的每一个像素点进行以下判断和操作:
根据原图对应的阈值分割结果中包括的原图中的各个像素点对应的区域类别标记,判断像素点对应标准动态范围区域还是对应扩展动态范围区域。
若像素点P对应标准动态范围区域,则像素点P对应的扩展系数αP=1,直接将像素点P的R、G、B值作为扩展动态范围图中与像素点P对应的像素点P′的R、G、B值;
若像素点P对应扩展动态范围区域,则像素点P对应的扩展系数αP≥1,将像素点P的R、G、B值乘以扩展系数αP,得到一组新的R、G、B值;将该新的R、G、B值作为扩展动态范围图中与像素点P对应的像素点P′的R、G、B值。
应理解,上述方案根据原图对应的阈值分割结果,对原图中的扩展动态范围区域进行动态范围扩展,具体为:根据原图对应的阈值分割结果,确定原图中像素点对应标准动态范围区域还是扩展动态范围区域,并采用对应的扩展系数对原图中的各个像素点的R、G、B值进行色调映射,以得到扩展动态范围图。针对对应标准动态范围区域(亮度值低于亮度阈值的区域)的像素点,其对应的扩展系数取为1,即令其R、G、B值保持不变,不对其进行增亮。也就是说,原图上对应标准动态范围区域的像素点,其亮度值保持不变。而针对对应扩展动态范围区域(亮度值高于或等于亮度阈值的区域)的像素点,将其R、G、B值乘以一个大于或等于1的扩展系数,得到一组新的R、G、B值,也就是说,原图上对应扩展动态范围区域的像素点,通过色调映射会进行增亮。而图像的动态范围为图像中像素点的最大亮度值与最小亮度值的比值,即图像中像素点的最大亮度值/图像中像素点的最小亮度值。所以,即使低亮度区域像素点的亮度值保持不变,但是对高亮度区域像素点进行增亮后,图像中最高的亮度值会增大,图像的动态范围也会增大,可以实现图像的动态范围扩展。原图中像素点对应区域反映了原图中像素点的亮度值,因此,上述方案是结合了原图中像素点的亮度值和R、G、B值进行色调映射,相对于只利用原图中像素点的亮度值或R、G、B值进行色调映射,本申请上述方案利用的信息更丰富,动态范围扩展的效果更好。同时,该方案中对于原图上对应低亮度区域的像素点不对其进行增亮,也可以减小图像处理的数据量,进而加快图像动态范围扩展的速度,提高实时性。
在第二方面的一种可能的实现方式中,上述若像素点P对应扩展动态范围区域,则像素点P对应的扩展系数αP≥1,包括:
若像素点P对应扩展动态范围区域,则将像素点P的R、G、B值转为灰度值Gray;
根据色调映射函数F(x)获取灰度值Gray对应的色调映射函数值F(Gray)。其中,色调映射函数F(x)为单调非递减函数,且F(x)≥1。
将色调映射函数值F(Gray)作为像素点对应的扩展系数αP
应理解,上述方案针对扩展动态范围区域的像素点,基于像素点的灰度值和色调映射函数确定像素点的扩展系数。该色调映射函数为单调非递减函数,其函数值大于或等于1。这样,对于扩展动态范围区域中灰度值不同的不同像素点,可以通过色调映射函数计算得到不同的扩展系数。从而可以基于不同的扩展系数,对原图中扩展动态范围区域的不同像素点进行不同程度的增亮,从而可以使原图中的像素点的亮度值均匀地映射到满足显示需求的亮度值区间,得到相应的扩展动态范围图。
基于上述技术方案,在根据曝光图的亮度信息获取阈值分割结果时,优选曝光图为短曝光图或中曝光图。由于短曝光图或中曝光图的明暗度比较合适,尤其是图像较亮部分的信息没有丢失,由此在后续针对原图上处于扩展动态范围区域的像素点进行增亮时,可以获得更好的颜色渐变和层次。而结合上述技术方案,对于原图上处于标准动态范围区域的像素点不进行增亮,其像素值保持不变,所以即使是短曝光图或中曝光图上较暗部分的信息不完整,对于上述技术方案的动态范围扩展效果也不会有影响。
与上述第一方面中一种可能的实现方式对应地,若阈值分割结果是经过降采样处理后的原图对应的阈值分割结果I′,那么上述对原图中的扩展动态范围区域进行动态范围扩展,从而生成原图对应的扩展动态范围图,采用如下方法:对阈值分割结果I′进行上采样处理,得到阈值分割结果I〞。按照经过上采样处理后的阈值分割结果I〞,对原图中的扩展动态范围区域进行动态范围扩展,从而生成原图对应的扩展动态范围图。
应理解,对经过降采样处理后的原图对应的阈值分割结果I′进行上采样处理后,得到的阈值分割结果I〞与原图具有同样的尺寸,其中的像素点可以与原图中的像素 点一一对应,这样,可以利用该阈值分割结果确定原图中的每一个像素点对应的扩展系数,从而对原图进行动态扩展。
第二方面各步骤的具体实现方式可以参考上述第一方面及其各种可能的实现方式中的相关描述,这里不予赘述。
第三方面,本申请提供一种电子设备,电子设备包括存储器和一个或多个处理器;存储器与处理器耦合;其中,存储器中存储有计算机程序代码,计算机程序代码包括计算机指令,当计算机指令被处理器执行时,使得电子设备执行上述第一方面或第二方面及其任一种可能的实现方式中的方法。
在第三方面的一种可能的实现方式中,电子设备包括一个或多个摄像头,摄像头用于采集原图信息。
在第三方面的一种可能的实现方式中,电子设备包括通信模块,通信模块用于与其他设备进行数据传输,获取原图及原图对应的阈值分割结果。
第四方面,本申请提供一种计算机可读存储介质,包括计算机指令,当计算机指令在计算机上运行时,使得计算机执行上述第一方面或第二方面及其任一种可能的实现方式中的方法。
可以理解地,第三方面至第四方面的具体实现方式可以参见上述第一方面至第二方面及其任一种可能的实现方式中的相关描述,其所带来的技术效果也可以参见上述第一方面至第二方面及其任一种可能的实现方式所带来的技术效果,此处不再赘述。
附图说明
图1为本申请实施例中的一张原图的效果图;
图2为本申请实施例中的一张扩展动态范围图的效果图;
图3为本申请实施例中的一种电子设备示意图;
图4为本申请实施例提供的一种扩展图像动态范围的方法流程图;
图5为本申请实施例中的一种阈值分割效果示意图;
图6为本申请实施例中的一种阈值分割结果示意图;
图7为本申请实施例中的另一种阈值分割结果示意图;
图8为本申请实施例提供的另一种扩展图像动态范围的方法流程图;
图9为本申请实施例提供的另一种扩展图像动态范围的方法流程图;
图10为本申请实施例提供的另一种扩展图像动态范围的方法效果图;
图11为本申请实施例中的一种色调映射函数曲线图;
图12为本申请实施例提供的一种扩展图像动态范围的方法效果图;
图13为本申请实施例中的一种芯片系统结构示意图。
具体实施方式
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请的描述中,除非另有说明,“多个”的含义是两个或两个以上。
在本申请实施例的描述中,术语“包括”、“包含”或者其任何其他变体,意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包 括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
为了更好的理解本申请的方案,以下对本申请实施例所涉及的部分术语及应用场景进行介绍:
动态范围(Dynamic Range,DR),是可变化信号(例如声音或光)最大值和最小值的比值。自然界中,物体的亮度差异是非常大的。例如,太阳的亮度值约为1.5E+9cd/㎡,日光灯的亮度值约为1.2E+4cd/㎡(坎德拉/平方米),月光(满月)的亮度值约为1000cd/㎡,黑白电视机荧光屏的亮度值约为120cd/㎡,彩色电视机荧光屏的亮度值约为80cd/㎡。这中间的差异是非常大的,有着非常大的动态范围。
在图像领域,动态范围则是指相机能捕捉的场景中的光亮度的范围,可以表示为相机在单帧图像内可以记录的最高的和最低的亮度之间的比值,也就是图像中像素点的最大亮度值与最小亮度值的比值。高动态范围(High Dynamic Range,HDR)成像,相对于低动态范围(Lower Dynamic Range,LDR)成像或标准动态范围(Standard Dynamic Range,SDR)成像,具有更大的动态范围(即更大的明暗差别),能够更精确地反映真实世界中从太阳光直射到最暗的阴影的亮度变化范围,具有更宽的色彩范围和更丰富的图像细节,能够更好的反映出真实环境中的视觉效果。
LDR图像或SDR图像每个像素点的R、G、B值通常使用8比特编码,其表示的亮度取值范围仅为0-255。而HDR图像每个颜色通道使用比LDR图像或SDR图像更多的数据位编码,因此可以表示更大的动态范围;相应地,其显示颜色精度也更高,能够更好地表现图像当中光线和颜色的渐变和层次。
但是,传统的成像装置所成的图像却通常只有很有限的动态范围。示例性地,表1显示了几种常见成像器材的大概的动态范围。而真实世界的动态范围为100000:1。因此,需要对传统的成像装置所成的图像的动态范围进行扩展,才能使图像更好地表现真实环境。
表1
此外,传统的显示设备,如阴极射线管(Cathode Ray Tube,CRT)显示器和液晶显示器(Liquid Crystal Display,LCD)都只能显示有限的动态范围。而随着近年来显示领域软硬件技术的不断提升,高动态范围的HDR显示器的使用越来越普及,因此,针对支持不同动态范围显示的显示器需要有不同动态范围的图像来进行显示,以兼顾显示器的性能以及显示效果。
针对背景技术中存在的问题,本申请实施例提供一种扩展图像动态范围的方法, 该方法可以应用于电子设备。该方法包括:获取原图信息。基于原图信息中亮度水平信息和亮度阈值,获取原图对应的阈值分割结果。该阈值分割结果包括原图中的各个像素点对应的区域类别标记。该区域类别标记用于指示原图中的各个像素点对应的区域的类别。而区域包括扩展动态范围区域和标准动态范围区域。将原图及原图对应的阈值分割结果进行关联保存,用于后续基于原图对应的阈值分割结果对原图中的扩展动态范围区域进行动态范围扩展,从而生成原图对应的扩展动态范围图,扩展动态范围图的动态范围大于原图的动态范围。这样,在需要对原图(如摄像头捕获到的LDR图像或SDR图像)进行动态范围扩展时,就可以按照原图对应的阈值分割结果,对原图中的扩展动态范围区域进行动态范围扩展,从而实现对原图进行动态范围扩展,使其动态范围更大,得到相对于原图具有更大的动态范围的扩展动态范围图。如图1和图2所示,扩展动态范围图相对于原图可以更好地表现图像当中光线和颜色的渐变和层次。基于本申请实施例提供的方法,考虑到显示器的性能限制,在仅支持低动态范围显示的显示器中,可以显示原图本身。而在支持高动态范围显示的显示器中,可以显示原图对应的扩展动态范围图,从而可以给用户带来更接近于真实世界的视觉效果,提升用户使用体验。因此,本申请实施例提供的方法,适应性强。此外,所存储的原图对应的阈值分割结果仅包括原图中的各个像素点对应的区域类别标记,数据量小,所需的存储空间小。
下面将结合附图对本申请实施例的实施方式进行详细描述。
示例性的,本申请实施例中的电子设备可以是手机、平板电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)\虚拟现实(virtual reality,VR)设备等设备,本申请实施例对该电子设备的具体形态不作特殊限制。
以上述电子设备是手机为例,介绍电子设备(如电子设备300)的硬件结构。
如图3所示,电子设备300可以包括:处理器310,外部存储器接口320,内部存储器321,通用串行总线(universal serial bus,USB)接口330,充电管理模块340,电源管理模块341,电池342,天线1,天线2,移动通信模块350,无线通信模块360,音频模块370,扬声器370A,受话器370B,麦克风370C,耳机接口370D,传感器模块380,按键390,马达391,指示器392,摄像头393,显示屏394,以及用户标识模块(subscriber identification module,SIM)卡接口395等。
其中,上述传感器模块380可以包括压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器和骨传导传感器等传感器。
可以理解的是,本实施例示意的结构并不构成对电子设备300的具体限定。在另一些实施例中,电子设备300可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器310可以包括一个或多个处理单元,例如:处理器310可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器, 视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是电子设备300的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器310中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器310中的存储器为高速缓冲存储器。该存储器可以保存处理器310刚用过或循环使用的指令或数据。如果处理器310需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器310的等待时间,因而提高了系统的效率。在一些实施例中,处理器310可以包括一个或多个接口。
可以理解的是,本实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备300的结构限定。在另一些实施例中,电子设备300也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块340用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。充电管理模块340为电池342充电的同时,还可以通过电源管理模块341为电子设备供电。
电源管理模块341用于连接电池342,充电管理模块340与处理器310。电源管理模块341接收电池342和/或充电管理模块340的输入,为处理器310,内部存储器321,外部存储器,显示屏394,摄像头393,和无线通信模块360等供电。
电子设备300的无线通信功能可以通过天线1,天线2,移动通信模块350,无线通信模块360,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备300中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块350可以提供应用在电子设备300上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块350可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块350可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块350还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。
无线通信模块360可以提供应用在电子设备300上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(blue tooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。
在一些实施例中,电子设备300的天线1和移动通信模块350耦合,天线2和无线通信模块360耦合,使得电子设备300可以通过无线通信技术与网络以及其他设备通信。
电子设备300通过GPU,显示屏394,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏394和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器310可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏394用于显示图像,视频等。该显示屏394包括显示面板。例如,显示屏394可以是触摸屏。
电子设备300可以通过ISP,摄像头393,视频编解码器,GPU,显示屏394以及应用处理器等实现拍摄功能。其中,摄像头393可以设置一个或多个。
外部存储器接口320可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备300的存储能力。外部存储卡通过外部存储器接口320与处理器310通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器321可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器310通过运行存储在内部存储器321的指令,从而执行电子设备300的各种功能应用以及数据处理。例如,在本申请实施例中,处理器310可以通过执行存储在内部存储器321中的指令,执行本申请实施例提供的扩展图像动态范围的方法。内部存储器321可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备300使用过程中所创建的数据(比如音频数据,电话本等)等。
电子设备300可以通过音频模块370,扬声器370A,受话器370B,麦克风370C,耳机接口370D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
触摸传感器,也称“触控面板”。触摸传感器可以设置于显示屏394,由触摸传感器与显示屏394组成触摸屏,也称“触控屏”。触摸传感器用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏394提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器也可以设置于电子设备300的表面,与显示屏394所处的位置不同。
本申请实施例中,电子设备300可以通过触摸传感器检测到用户在触摸屏输入的触摸操作,并采集该触摸操作在触摸屏上的触控位置,以及触控时间等中的一项或多项。在一些实施例中,电子设备300可以通过触摸传感器和压力传感器结合起来,确定触摸操作在触摸屏的触控位置。
按键390包括开机键,音量键等。按键390可以是机械按键。也可以是触摸式按键。电子设备300可以接收按键输入,产生与电子设备300的用户设置以及功能控制有关的键信号输入。
马达391可以产生振动提示。马达391可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏394不同区域的触摸操作,马达391也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器392可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口395用于连接SIM卡。SIM卡可以通过插入SIM卡接口395,或从SIM卡接口395拔出,实现和电子设备300的接触和分离。电子设备300可以支持1个或M个SIM卡接口,M为大于1的正整数。SIM卡接口395可以支持Nano SIM卡,Micro SIM卡,SIM卡等。
陀螺仪传感器可以是三轴陀螺仪,用于追踪电子设备300在6个方向的状态变化。加速度传感器用于检测电子设备300的运动速度、方向以及位移。本申请实施例中,电子设备300可以通过陀螺仪传感器和加速度传感器检测电子设备300的状态和位置。当电子设备300的状态和位置相比于初始位置和初始状态发生较大变化时,电子设备300可以实时在显示屏394上提醒用户及时纠正电子设备300的状态和位置。
以下实施例中的方法均可以在具有上述硬件结构的电子设备300中实现。
本申请实施例提供一种扩展图像动态范围的方法,该方法可以应用于上述电子设备300。以下实施例中,以电子设备300是图3所示的手机为例,介绍本申请实施例的方法。
参见图4,本申请实施例提供一种扩展图像动态范围的方法,包括以下步骤:
S101、获取原图信息。该原图信息包括原图和亮度水平信息。其中,亮度水平信息用于指示原图中像素点的亮度值。
S102、基于原图信息中包括的亮度水平信息和亮度阈值,获取原图对应的阈值分割结果。该阈值分割结果包括原图中的各个像素点对应的区域类别标记。区域类别标记用于指示原图中的各个像素点对应的区域的类别。区域包括标准动态范围区域和扩展动态范围区域。标准动态范围区域中像素点对应的亮度值低于亮度阈值,扩展动态范围区域中像素点对应的亮度值高于或等于亮度阈值。
S103、将原图及原图对应的阈值分割结果进行关联保存,用于后续基于原图对应的阈值分割结果对原图中的扩展动态范围区域进行动态范围扩展,从而生成原图对应的扩展动态范围图,扩展动态范围图的动态范围大于原图的动态范围。
应理解,上述方案基于原图信息中包含的亮度水平信息,获取原图对应的阈值分割结果,并将原图及原图对应的阈值分割结果进行关联保存。该阈值分割结果包括原图中的各个像素点对应的区域类别标记。该区域类别标记可以指示原图中的各个像素点是对应标准动态范围区域(亮度值低于亮度阈值的区域)还是对应扩展动态范围区域(亮度值高于或等于亮度阈值的区域)。这样,在需要对原图(如摄像头捕获到的LDR图像或SDR图像)中进行动态范围扩展时,就可以按照原图对应的阈值分割结果,生成原图对应的扩展动态范围图,实现对原图进行动态范围扩展,得到相对于原图具有更大的动态范围的扩展动态范围图,可以更好地表现图像当中光线和颜色的渐变和层次,从而可以给用户带来更接近于真实世界的视觉效果。此外,所存储的原图对应的阈值分割结果仅包括原图中的各个像素点对应的区域类别标记,数据量小,所需的存储空间小。
在一些实施例中,上述方法还包括S104、对原图中的扩展动态范围区域进行动态范围扩展,从而生成原图对应的扩展动态范围图。该扩展动态范围图的动态范围大于原图的动态范围。
应理解,本申请实施例基于亮度水平信息,获得该原图对应的阈值分割结果,在对原图进行动态范围扩展时,可以根据该原图对应的阈值分割结果确定原图中的各个 像素点对应标准动态范围区域还是扩展动态范围区域,从而对原图中的扩展动态范围区域进行动态范围扩展,得到原图对应的扩展动态范围图,实现对原图进行动态范围扩展,使其动态范围更大,得到相对于原图具有更大的动态范围的扩展动态范围图,可以更好地表现图像当中光线和颜色的渐变和层次,从而可以给用户带来更接近于真实世界的视觉效果。
基于该实施例提供的方法,电子设备中存储了原图及原图对应的阈值分割结果。那么,用户在只支持低动态范围显示的电子设备上,查看该原图,则电子设备向用户显示该原图本身,该原图本身为该电子设备支持显示的低动态范围LDR图像或标准动态范围SDR图像。而用户在支持高动态范围显示的电子设备上,查看该原图,则电子设备可以执行本实施例中的方法,对原图中的扩展动态范围区域进行动态范围扩展,实现对原图进行动态范围扩展,使用户查看到更接近真实世界视觉效果的该原图对应的扩展动态范围图。
比如说,在一种应用场景下,有第一电子设备和第二电子设备,该第一电子设备只支持低动态范围显示,而第二电子设备支持高动态范围显示。在此场景下,用户打开第一电子设备上的拍照应用程序(Application,App),使用第一电子设备中的摄像头拍摄图像。由此第一电子设备可以获取原图信息,并可以执行上述实施例中的步骤S101-步骤S103对拍摄的图像进行处理,并将原图及原图对应的阈值分割结果进行关联保存。之后,用户退出第一电子设备上的拍照App,打开第一电子设备上的相册,并点击查看刚才拍摄的图像,则第一电子设备向用户展示相应的原图。若用户通过第一电子设备上的通信App将该原图及原图对应的阈值分割结果传输至第二电子设备,由于第二电子设备支持高动态范围显示,所以第二电子设备可以执行上述实施例中的步骤S104,即基于原图对应的阈值分割结果,对原图中的扩展动态范围区域进行动态范围扩展,以得到原图对应的扩展动态范围图。所以,在用户通过第二电子设备上查看相应图像时,第二电子设备可以向用户展示对应的扩展动态范围图,从而提供给用户更好的视觉体验。
再比如说,在另一种应用场景下,某电子设备虽然支持高动态范围显示,但其摄像头所捕获的图像却只有很有限的动态范围,比如捕获到的图像为LDR图像或SDR图像。在此场景下,用户打开电子设备上的拍照App,使用该电子设备中的摄像头拍摄图像。由此该电子设备可以获取原图信息,并执行上述实施例中的步骤S101-步骤S103对拍摄的图像进行处理,并将原图及原图对应的阈值分割结果进行关联保存。之后,用户退出该电子设备上的拍照App,打开该电子设备上的相册,并点击查看刚才拍摄的图像。在此情况下,该电子设备可以执行上述实施例中的步骤S104,基于原图对应的阈值分割结果,确定原图中的各个像素点对应标准动态范围区域还是扩展动态范围区域,从而对原图中的扩展动态范围区域进行动态范围扩展,以得到原图对应的扩展动态范围图。这样该电子设备可以向用户展示对应的扩展动态范围图,从而提供给用户更好的视觉体验。
在一些实施例中,上述亮度水平信息为在原图的同一场景下采集的曝光图。该曝光图中的拍摄对象与原图中的拍摄对象相同。该曝光图中各个像素点的亮度值指示所述原图中对应像素点的亮度值。上述基于原图信息中包括的亮度水平信息和亮度阈值, 获取原图对应的阈值分割结果,可以是根据曝光图中各个像素点的亮度值和亮度阈值,获取原图对应的阈值分割结果。相应的示意图如图5所示。
应理解,上述曝光图可以是摄像头在某一曝光值下在原图的同一场景下采集的一张曝光图,也可以是摄像头在多个不同的曝光值下在原图的同一场景下采集的多张曝光图,也可以是由上述多张曝光图合成的一张曝光图。在获取原图对应的阈值分割结果时,可以选取其中动态范围比较理想的曝光图进行阈值分割,得到相应的阈值分割结果。此外,上述的曝光图可以由一个或多个摄像头拍摄得到。需要说明的是,如有必要,需对原图和曝光图进行配准,使得两幅图像中的像素点一一对应,也就是说,使两幅图像中同一坐标处的像素点对应于真实环境中的同一位置。
在一些实施例中,对于其中目标比较单一的曝光图,基于原图信息中包括的亮度水平信息和亮度阈值,获取原图对应的阈值分割结果,可以采用以下方法:根据曝光图中各个像素点的亮度值,采用单级阈值(single-level thresholding)分割方法对曝光图进行阈值分割,得到阈值分割结果。该阈值分割结果包括原图中的各个像素点对应的区域类别标记;而像素点对应的区域包括标准动态范围区域和扩展动态范围区域两种。相应地,该阈值分割结果可以用图6所示的二维矩阵表示,该二维矩阵中的元素只有两种可能的取值,如0和1,分别对应标准动态范围区域和扩展动态范围区域。图6为本申请实施例中的一种阈值分割结果示意图,如图6所示,图中左边的曝光图中字母“N”及其周边的像素值,对应的阈值分割结果可以表示为:
或者,该阈值分割结果也可以用数据序列表示,这样,图6中左边的曝光图中字母“N”及其周边的像素值,对应的阈值分割结果可以表示为010100111001010。
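结合上述单级阈值分割的描述,以下给出一段最简化的示意代码(Python),演示如何由曝光图的亮度值得到只含0和1两种取值的阈值分割结果,并将其展平为数据序列。其中的亮度值以及3行5列的尺寸均为假设值,仅用于演示:

```python
def single_level_segment(luma, threshold):
    """单级阈值分割:亮度值低于阈值记0(标准动态范围区域),
    高于或等于阈值记1(扩展动态范围区域)。"""
    return [[1 if v >= threshold else 0 for v in row] for row in luma]

# 假设的3行5列亮度值(数值仅作演示)
luma = [[10, 200, 20, 210, 15],
        [30, 220, 230, 240, 25],
        [12, 205, 18, 215, 20]]
mask = single_level_segment(luma, threshold=100)
print(mask)  # 二维矩阵形式:[[0, 1, 0, 1, 0], [0, 1, 1, 1, 0], [0, 1, 0, 1, 0]]
print("".join(str(v) for row in mask for v in row))  # 数据序列形式:010100111001010
```

可见同一分割结果既可以按二维矩阵存储,也可以按行展平为数据序列存储,每个元素只需一个比特量级的空间。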
在一些实施例中,对于其中目标比较复杂的曝光图,基于原图信息中包括的亮度水平信息和亮度阈值,获取原图对应的阈值分割结果,可以采用以下方法:根据曝光图中各个像素点的亮度值,采用多级阈值(multi-level thresholding)分割方法对曝光图进行阈值分割,得到阈值分割结果。该阈值分割结果包括原图中的各个像素点对应的区域类别标记;区域类别包括标准动态范围区域和扩展动态范围区域;其中,扩展动态范围区域又分为多个级别的动态范围区域。应理解,对于其中目标比较复杂的图像,通过多级阈值(multi-level thresholding)分割方法对曝光图进行阈值分割,可以得到包含多个区域(一个标准动态范围区域和多个级别的扩展动态范围区域)的阈值分割结果,分割细粒度更高。例如,对应一幅有K个目标和背景的图像,可以使用K个阈值进行分割,得到一个标准动态范围区域和K个级别的扩展动态范围区域。后续可以针对不同的区域,相应采用不同级别的扩展系数进行色调映射,从而得到亮度渐变度和亮度层次更丰富扩展动态范围图。相应地,该阈值分割结果可以用二维矩阵表示,该二维矩阵中的元素有多种可能的取值,分别对应多个区域。如阈值分割结果包括一个标准动态范围区域和两个级别的扩展动态范围区域,则对应的二维矩阵中的元素有3种可能的取值,如0、1、2。图7为本申请实施例中的一种阈值分割结果示意图,如图7所示,图中左边的曝光图中字母“N”及其周边的像素值,对应的阈值分割结果可以表示为:
或者,该阈值分割结果也可以用数据序列表示,这样,图7中左边的曝光图中字母“N”及其周边的像素值,对应的阈值分割结果可以表示为021200222002120。
示例性地,上述阈值可以根据曝光图中各个像素点的亮度值确定,具体地:根据曝光图中各个像素点的亮度值,构建对应的亮度直方图。该亮度直方图可以显示曝光图中亮度值分别为0,1,…,255时的像素点个数。该亮度直方图包括x轴和y轴,其中x轴的数值表示亮度值,取值依次为0,1,…,255,即从左到右是从纯黑色(亮度值为0)到纯白色(亮度值为255);y轴的数值表示像素点的个数。基于该亮度直方图,获取曝光图中亮度值x分别为0,1,…,k,…,255的像素点个数,记为y0,y1,…,yk,…,y255。其中,y0是曝光图中亮度值x为0的像素点个数,y1是曝光图中亮度值x为1的像素点个数,yk是曝光图中亮度值x为k的像素点个数,y255是曝光图中亮度值x为255的像素点个数。按亮度值x由大到小的顺序,将曝光图中对应的像素点个数逐个累加,当累加至亮度值k对应像素点个数yk时累加得到的总像素点个数大于预设数量,则将亮度值k作为阈值。示例性地,预设数量可以是曝光图中像素点总个数的80%。例如,对于一张尺寸为100*100的曝光图,其中总共有10000个像素点,那么预设数量可以取8000。设该曝光图中的亮度值分别为0,1,…,k,…,255的像素点个数分别为y0,y1,…,yk,…,y255,则按亮度值从大到小的顺序,将对应的像素点个数y255,y254,y253,…逐个进行累加,若累加至亮度值90对应像素点个数y90时,累加得到的总像素点个数大于8000,则将亮度值90作为阈值。
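上述由亮度直方图从高亮度向低亮度累加以确定阈值的过程,可以用如下示意代码(Python)表达。其中的像素亮度值与预设数量均为假设值:

```python
def threshold_by_histogram(luma_flat, preset_count):
    """按亮度值x由大到小逐个累加像素点个数,
    当累加总数首次大于preset_count(预设数量)时,以该亮度值作为阈值。"""
    hist = [0] * 256
    for v in luma_flat:           # 构建亮度直方图
        hist[v] += 1
    total = 0
    for k in range(255, -1, -1):  # 按亮度值从大到小遍历
        total += hist[k]
        if total > preset_count:
            return k
    return 0

# 假设共10个像素点,预设数量取4:从高亮度向下累加,累加到亮度值90时总数为5>4
luma_flat = [0, 0, 0, 50, 50, 90, 90, 90, 200, 200]
print(threshold_by_histogram(luma_flat, preset_count=4))  # 90
```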
应理解,本申请实施例中的方法既适用于单级阈值分割方法确定阈值,也适用于多级阈值分割方法确定阈值。在应用于单级阈值分割方法确定阈值时,预设数量设置为一个值,由此得到唯一的阈值,用于单级阈值分割,由此可以分割得到两个区域,分别作为标准动态范围区域和扩展动态范围区域。在应用于多级阈值分割方法确定阈值时,预设数量设置为多个不同的值,由此可以得到多个亮度阈值,用于多级阈值分割,由此可以分割得到多个区域,其中,亮度值小于最小阈值Tmin的区域作为标准动态范围区域,亮度值大于或等于最小阈值Tmin的区域作为扩展动态范围区域,而扩展动态范围区域又进一步由多个亮度阈值划分为多个区域。比如,除最小阈值Tmin外,还有阈值T1,T2,…,Tn,那么基于扩展动态范围区域内各个像素点的亮度值L,按照Tmin≤L<T1、T1≤L<T2、…、Tn-1≤L<Tn的标准,可以将扩展动态范围区域进一步划分为n个区域D1,D2,…,Dn。其中,像素点的亮度值(如L)在亮度值区间[Tmin,T1),则该像素点可以划分至区域D1;像素点的亮度值(如L)在亮度值区间[T1,T2),则该像素点可以划分至区域D2;像素点的亮度值(如L)在亮度值区间[Tn-1,Tn),则该像素点可以划分至区域Dn。后续进行色调映射时,针对D1,D2,…,Dn可以分别采用相同或不同的色调映射函数计算得到对应的扩展系数进行色调映射,从而可以得到亮度渐变和亮度层次更丰富的扩展动态范围图。
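按照上述多级阈值的划分标准(L<Tmin为标准动态范围区域,其余按区间依次划分为D1,D2,…),可以用如下示意代码(Python)为每个像素点打上区域类别标记。其中的阈值与亮度值均为假设值:

```python
import bisect

def multi_level_segment(luma, thresholds):
    """多级阈值分割:thresholds为升序排列的亮度阈值[Tmin, T1, T2, ...]。
    L < Tmin 记0(标准动态范围区域);
    Tmin <= L < T1 记1(区域D1),T1 <= L < T2 记2(区域D2),依此类推。"""
    return [[bisect.bisect_right(thresholds, v) for v in row] for row in luma]

# 假设的亮度值与三个阈值
luma = [[10, 120, 200],
        [80, 160, 250]]
print(multi_level_segment(luma, thresholds=[100, 150, 220]))
# [[0, 1, 2], [0, 2, 3]]:0为标准动态范围区域,1、2、3为不同级别的扩展动态范围区域
```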
示例性地,上述阈值可以根据曝光图中各个像素点的亮度值确定,具体地:根据曝光图中各个像素点的亮度值,构建对应的亮度直方图。该亮度直方图可以显示曝光图中亮度值分别为0,1,…,255时的像素点个数。根据亮度直方图,采用OTSU确定用于对曝光图进行阈值分割的阈值。
以确定单个阈值为例,OTSU确定阈值的原理为:对于给定阈值T,将待分割图像上的像素点分为前景像素点和背景像素点;其中前景像素点个数占待分割图像上像素点总个数的比例为p1,前景像素点的平均亮度值为b1,而背景像素点个数占待分割图像上像素点总个数的比例为p2,背景像素点的平均亮度值为b2;待分割图像上所有像素点的亮度值的均值为b,有:b=p1b1+p2b2;由此定义的类间方差为σ²:σ²=p1(b1-b)²+p2(b2-b)²=p1p2(b1-b2)²。遍历所有亮度值,将使得σ²最大的亮度值作为阈值T。
其中,OTSU是应用最广泛的图像分割法之一,该方法也叫最大类间方差阈值分割法,该方法选择分割阈值的标准是图像的类间方差达到最大或者类内方差最小。应理解,OTSU可以从单阈值分割扩展到多阈值分割。同时,可以利用智能优化算法进行多阈值的寻找,以获得最佳阈值,大大加快算法的速度。
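大津法的上述原理可以直接按定义实现。以下是一段基于亮度直方图的示意实现(Python,未做任何加速优化),其中的直方图数据为假设值:

```python
def otsu_threshold(hist):
    """遍历所有亮度值T,选取使类间方差σ²=p1·p2·(b1-b2)²最大的T。
    hist[k]为亮度值为k的像素点个数。"""
    total = sum(hist)
    best_t, best_var = 0, -1.0
    for t in range(256):
        n1 = sum(hist[:t + 1])     # 亮度值<=t的一类像素点个数
        n2 = total - n1            # 亮度值>t的一类像素点个数
        if n1 == 0 or n2 == 0:
            continue
        b1 = sum(k * hist[k] for k in range(t + 1)) / n1
        b2 = sum(k * hist[k] for k in range(t + 1, 256)) / n2
        var = (n1 / total) * (n2 / total) * (b1 - b2) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# 假设的双峰直方图:50个像素亮度为40,50个像素亮度为200
hist = [0] * 256
hist[40], hist[200] = 50, 50
print(otsu_threshold(hist))  # 40(40到199之间类间方差相同,取最先出现者)
```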
示例性地,上述阈值可以根据曝光图中各个像素点的亮度值确定,具体地:
计算曝光图上所有像素点的亮度值的均值M和标准差STD,通过以下公式计算用于对曝光图进行阈值分割的阈值T:
T=M+β·STD
式中,β为标准差系数。
应理解,本申请实施例中的方法既适用于单级阈值分割方法确定阈值,也适用于多级阈值分割方法确定阈值。在应用于单级阈值分割方法确定阈值时,标准差系数设置为一个值,由此得到一个亮度阈值,用于单级阈值分割,由此可以分割得到两个区域,分别作为标准动态范围区域和扩展动态范围区域。在应用于多级阈值分割方法确定阈值时,标准差系数设置为多个不同的值,由此可以得到多个不同的阈值,用于多级阈值分割,由此可以分割得到多个区域,其中,亮度值小于最小阈值的区域作为标准动态范围区域,亮度值大于或等于最小阈值的区域作为扩展动态范围区域,而扩展动态范围区域又进一步由多个亮度阈值划分为多个区域,比如D1,D2,…,Dn。后续进行色调映射时,针对D1,D2,…,Dn分别采用不同的扩展系数进行色调映射,从而可以得到亮度渐变度和亮度层次更丰富的扩展动态范围图。
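上述T=M+β·STD的计算可以示意如下(Python)。这里按总体标准差计算STD,β的取值均为假设;取多个不同的β即可得到多个亮度阈值:

```python
import statistics

def std_thresholds(luma_flat, betas):
    """T = M + β·STD:M为亮度均值,STD为亮度标准差,β为标准差系数。"""
    m = statistics.fmean(luma_flat)
    std = statistics.pstdev(luma_flat)     # 总体标准差(属演示性假设)
    return [m + beta * std for beta in betas]

luma_flat = [80, 90, 100, 110, 120]        # 均值M=100,标准差STD≈14.14
print(std_thresholds(luma_flat, betas=[0.5, 1.0, 1.5]))  # 三个递增的亮度阈值
```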
在一些实施例中,阈值分割结果为数据序列。而原图中的各个像素点对应的区域类别标记用该数据序列中对应元素的值表示。其中,数据序列中的元素的值包括0和1。该数据序列中值为0的元素在原图中对应的像素点对应标准动态范围区域。该数据序列中值为1的元素在原图中对应的像素点对应扩展动态范围区域。该数据序列中的元素可以与原图中的像素点一一对应。比如说,设原图的尺寸为H*W,其中H和W分别表示原图的高度和宽度,即原图的高度方向上有H个像素点,原图的宽度方向上有W个像素点,该原图具有的像素点个数为H*W。对原图上各个像素点,按从上到下,从左到右顺序,将其对应的区域类别标记存储于数据序列中。那么相应地,该数据序列中的元素个数可以为H*W,该数据序列中第(i-1)*W+j个元素的值表示原图中第i行第j列的像素点对应的区域类别标记,其中i=1,2,…,H,j=1,2,…,W。
在一些实施例中,阈值分割结果为二维矩阵。而原图中的各个像素点对应的区域类别标记用该二维矩阵中对应元素的值表示。其中,二维矩阵中的元素的值包括0和1。该二维矩阵中值为0的元素在原图中对应的像素点对应标准动态范围区域。该二维矩阵中值为1的元素在原图中对应的像素点对应扩展动态范围区域。该二维矩阵中的元素可以与原图中的像素点一一对应。比如说,设原图的尺寸为H*W,其中H和W分别表示原图的高度和宽度,即原图的高度方向上有H个像素点,原图的宽度方向上有W个像素点,该原图具有的像素点个数为H*W。那么相应地,该二维矩阵的尺寸也可以是H*W,该二维矩阵中第i行第j列的元素的值表示原图中第i行第j列的像素点对应的区域类别标记,其中i=1,2,…,H,j=1,2,…,W。
应理解,对于其中目标比较单一的曝光图,可以采用单级阈值分割方法,得到的区域类别就只有标准动态范围区域和扩展动态范围区域这两种,对应的区域类别标记也就只有两种取值。由此,得到的阈值分割结果可以采用数据序列或二维矩阵表示,从而可以减小该阈值分割结果所占用的存储空间。
在一些实施例中,在将原图及原图对应的阈值分割结果进行关联保存之前,该方法还包括:对原图对应的阈值分割结果进行降采样处理。设原图对应的阈值分割结果为I,对阈值分割结果I进行降采样处理后,得到阈值分割结果I′。进而,上述将原图及原图对应的阈值分割结果进行关联保存,包括:将原图及经过降采样处理后的原图对应的阈值分割结果I′进行关联保存。
应理解,对原图对应的阈值分割结果进行降采样处理,可以有效减小阈值分割结果的数据大小。对原图对应的阈值分割结果进行降采样处理后再进行存储,可以减小该阈值分割结果所占用的存储空间。同时,该降采样处理后的阈值分割结果在后续用于对原图进行动态范围扩展时,可以通过上采样恢复到跟原图相同的尺寸,从而用于确定原图每个像素点对应的扩展系数。
以阈值分割结果为二维矩阵为例,示例性地,对原图对应的阈值分割结果I进行降采样处理包括:采用采样单元对阈值分割结果I进行降采样处理。比如,可以采用尺寸为2*2、3*3、4*4或其它尺寸的采样单元对阈值分割结果I进行降采样处理,使经过降采样处理后的阈值分割结果I′的尺寸为阈值分割结果I的尺寸的1/4、1/9、1/16或其它比例。其中,采用尺寸为2*2的采样单元对阈值分割结果I进行降采样处理指的是将该阈值分割结果I中的2*2个元素在对应的阈值分割结果I′中用一个元素表示,阈值分割结果I′中该元素的值可以为阈值分割结果I中相应2*2个元素的值的平均值,也可以为阈值分割结果I中相应2*2个元素中任意一个元素的值。采用尺寸为3*3的采样单元对阈值分割结果I进行降采样处理,指的是将阈值分割结果I中的3*3个元素在对应的阈值分割结果I′中用一个元素表示,阈值分割结果I′中该元素的值可以为阈值分割结果I中相应3*3个元素的值的平均值,也可以为阈值分割结果I中相应3*3个元素中任意一个元素的值。采用尺寸为4*4的采样单元对阈值分割结果I进行降采样处理,指的是将阈值分割结果I中的4*4个元素在对应的阈值分割结果I′中用一个元素表示,阈值分割结果I′中该元素的值可以为阈值分割结果I中相应4*4个元素的值的平均值,也可以为阈值分割结果I中相应4*4个元素中任意一个元素的值。采用其它尺寸的采样单元对该阈值分割结果进行降采样处理的过程以此类推,此处不再一一列举。应理解,上述实施例仅是对对原图对应的阈值分割结果进行降采样处理的方式及阈值分割结果的尺寸进行举例说明,该举例说明不构成对本申请中对原图对应的阈值分割结果进行降采样处理的方式及阈值分割结果的尺寸的限制。本申请中对原图对应的阈值分割结果进行降采样处理也可以采用其他方式,阈值分割结果也可以是其他尺寸,本申请对此不作具体限制。若该阈值分割结果为数据序列,同样可以按照其中元素与原图中像素点的对应关系展开成相应的二值矩阵,然后采用上述方法进行降采样处理,之后再进行存储。
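以采用2*2采样单元、取块内任意一个元素(这里取左上角元素)为例,上述降采样可以示意如下(Python):

```python
def downsample(seg, s):
    """用s*s的采样单元对阈值分割结果(二维矩阵)做降采样,
    每个s*s块在结果中用其左上角元素的值表示(也可改为块内平均值)。"""
    return [row[::s] for row in seg[::s]]

seg = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]
print(downsample(seg, 2))  # [[0, 1], [1, 0]]:元素个数减为原来的1/4
```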
与上述实施例对应地,若与原图进行关联保存的是经过降采样处理后的原图对应的阈值分割结果I′,那么上述按照原图对应的阈值分割结果,对原图中的扩展动态范围区域进行动态范围扩展,以得到扩展动态范围图,采用如下方法:对阈值分割结果I′进行上采样处理,得到阈值分割结果I〞。按照上采样处理后得到的阈值分割结果I〞,对原图中的扩展动态范围区域进行动态范围扩展,以得到扩展动态范围图。
应理解,对经过降采样处理后的原图对应的阈值分割结果I′进行上采样处理后,得到的阈值分割结果I〞具有的元素个数与原图具有的像素点个数相同,且该阈值分割结果I〞中的元素与原图中的像素点一一对应,这样,可以利用该阈值分割结果确定原图中的每一个像素点对应的扩展系数,从而对原图进行动态扩展。示例性地,以阈值分割结果为二维矩阵为例。设原图的尺寸为H*W,其中H和W分别表示原图的高度和宽度,即原图的高度方向上有H个像素点,原图的宽度方向上有W个像素点,该原图具有的像素点个数为H*W。而经过降采样处理后得到的对应的阈值分割结果I′的尺寸为H′*W′,其中H′和W′分别表示阈值分割结果I′的行数和列数,即阈值分割结果I′包括H′行,阈值分割结果I′包括W′列,该阈值分割结果I′具有的元素个数为H′*W′,其中H′<H,W′<W。这样,在对阈值分割结果进行存储时可以减少其所占用的空间。而在对原图进行动态范围扩展时,可以先对阈值分割结果I′进行上采样处理,使得上采样后的阈值分割结果I〞的尺寸恢复为H*W,即阈值分割结果I〞包括H行和W列,该阈值分割结果I〞具有的元素个数为H*W。这样,阈值分割结果I〞与原图具有相同的尺寸,其中的像素点与原图中的像素点一一对应,也就可以利用该阈值分割结果I〞确定原图中的每一个像素点对应的扩展系数,从而对原图进行动态扩展。
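与之对应的上采样(最近邻方式)可以示意如下(Python),将I′中的每个元素扩展为s*s个相同的元素,恢复到与原图相同的尺寸:

```python
def upsample(seg_small, s):
    """最近邻上采样:每个元素在行、列方向各复制s次。"""
    out = []
    for row in seg_small:
        expanded = [v for v in row for _ in range(s)]  # 列方向复制
        out.extend([list(expanded) for _ in range(s)])  # 行方向复制
    return out

print(upsample([[0, 1], [1, 0]], 2))
# [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
```

可见先降采样再上采样可以恢复到原尺寸(块内细节按最近邻方式近似),从而与原图中的像素点一一对应。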
在一些实施例中,将原图及原图对应的阈值分割结果进行关联保存,可以以通用的图像格式将原图及原图对应的阈值分割结果进行关联保存,如便携式网络图形(Portable Network Graphics,PNG)格式和联合图像专家组(Joint Photographic Experts Group,JPEG)等格式。对于这些图像格式,根据其相应的编码规则(PNG格式和JPEG格式文件的编码规则),可以将原图及其对应的阈值分割结果存于一个文件中。同时,在需要将原图及原图对应的阈值分割结果进行分离的时候,通过相应的解码规则(PNG格式和JPEG格式文件的解码规则),即可以分离出存于一个文件中的原图及原图对应的阈值分割结果。
通过上述实施例提供的原图及原图对应的阈值分割结果存储方式,可以将原图及原图对应的阈值分割结果存储于同一个文件中,在对原图进行色调映射(动态范围扩展)的时候,可以高效地查找到与其对应的阈值分割结果,从而可以快速地根据阈值分割结果确定原图进行色调映射的扩展系数,从而快速地实现原图动态范围扩展,保证图像处理的实时性。
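作为“同一文件内关联保存、按规则分离”这一思路的一个简化示意(Python),以下用ZIP容器代替正文所述的PNG/JPEG编码规则;其中的文件名original.jpg、segmentation.txt以及图像字节内容均为假设,仅用于演示:

```python
import io
import zipfile

def save_associated(file_obj, image_bytes, seg_sequence):
    """将原图字节流与其阈值分割结果(数据序列)存入同一个容器文件。"""
    with zipfile.ZipFile(file_obj, "w") as zf:
        zf.writestr("original.jpg", image_bytes)
        zf.writestr("segmentation.txt", seg_sequence)

def load_associated(file_obj):
    """按同样的规则从容器文件中分离出原图与阈值分割结果。"""
    with zipfile.ZipFile(file_obj, "r") as zf:
        return zf.read("original.jpg"), zf.read("segmentation.txt").decode()

buf = io.BytesIO()  # 这里用内存文件演示
save_associated(buf, b"fake-jpeg-bytes", "010100111001010")
image_bytes, seg = load_associated(buf)
print(seg)  # 010100111001010
```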
参见图8,本申请实施例还提供一种扩展图像动态范围的方法,包括:
S201、获取原图及原图对应的阈值分割结果。该阈值分割结果包括原图中的各个像素点对应的区域类别标记。像素点对应的区域包括扩展动态范围区域和标准动态范围区域。扩展动态范围区域中像素点的亮度值高于或等于亮度阈值,标准动态范围区域中像素点的亮度值低于亮度阈值。
S202、按照原图对应的阈值分割结果,对原图中的扩展动态范围区域进行动态范围扩展,以得到扩展动态范围图;所述扩展动态范围图的动态范围大于所述原图的动态范围。
应理解,本申请实施例基于原图中的像素点所属区域,即对应标准动态范围区域还是扩展动态范围区域,对原图中的扩展动态范围区域进行动态范围扩展,可以实现对原图进行动态范围扩展,使其动态范围更大,得到相对于原图具有更大的动态范围的扩展动态范围图,可以更好的表现图像当中光线和颜色的渐变和层次,从而可以给用户带来更接近于真实世界的视觉效果。
需要说明的是,基于该实施例提供的方法,用户在只支持低动态范围显示的电子设备上,查看所述原图,看到的是原图本身,该原图本身为低动态范围LDR图像或标准动态范围SDR图像。而用户在支持高动态范围显示的电子设备上,查看所述原图,则可以查看到原图对应的扩展动态范围图。所以,本申请实施例提供的方法,适用性强。在只支持低动态范围显示的电子设备上,可以显示该电子设备支持显示的标准动态范围图,而在支持高动态范围显示的电子设备上,可以显示该电子设备支持显示的更大动态范围的扩展动态范围图,显示效果更接近真实环境的视觉效果,用户体验更好。
在一些实施例中,按照原图对应的阈值分割结果,对原图中的扩展动态范围区域进行动态范围扩展,以得到扩展动态范围图,具体为:按照原图对应的阈值分割结果,确定原图中的各个像素点对应标准动态范围区域还是扩展动态范围区域,从而对原图中的各个像素点的R、G、B值采用对应的扩展系数进行色调映射,以得到原图对应的扩展动态范围图,包括:
参见图9和图10,对原图中的每一个像素点P,根据原图对应的阈值分割结果中包括的原图中的各个像素点对应的区域类别标记,判断像素点P对应标准动态范围区域还是对应扩展动态范围区域。比如,原图对应的阈值分割结果是用数据序列表示的,且该数据序列中值为0的元素在原图中对应的像素点对应标准动态范围区域,该数据序列中值为1的元素在原图中对应的像素点对应扩展动态范围区域。那么,若像素点P在数据序列中对应的元素的值为0,则像素点P对应标准动态范围区域。若像素点P在数据序列中对应的元素的值为1,则像素点P对应扩展动态范围区域。
在确定像素点P对应标准动态范围区域还是扩展动态范围区域之后,对像素点P的R、G、B值采用对应的扩展系数进行色调映射,从而得到扩展动态范围图中与像素点P对应的像素点P′的R、G、B值,具体地:
若像素点P对应标准动态范围区域,则像素点P对应的扩展系数αP=1,直接将像素点P的R、G、B值作为扩展动态范围图中与像素点P对应的像素点P′的R、G、B值;
若像素点P对应扩展动态范围区域,则像素点P对应的扩展系数αP≥1,将像素点P的R、G、B值乘以扩展系数αP,得到一组新的R、G、B值;将该新的R、G、B值作为扩展动态范围图中与像素点P对应的像素点P′的R、G、B值。
在一些实施例中,上述若像素点P对应扩展动态范围区域,则像素点对应的扩展系数αP≥1,包括:
若像素点P对应扩展动态范围区域,则将像素点P的R、G、B值转为灰度值Gray;
根据色调映射函数F(x)获取灰度值Gray对应的色调映射函数值F(Gray)。其中,色调映射函数F(x)为单调非递减函数,且F(x)≥1。
将色调映射函数值F(Gray)作为像素点对应的扩展系数αP
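上述逐像素的判断与色调映射可以示意如下(Python)。其中色调映射函数F取线性形式、最大扩展4倍,灰度值采用(R+G+B)/3的算法,这些具体取值均为假设:

```python
def expand_pixel(rgb, region_label, F):
    """标准动态范围区域(标记0):扩展系数αP=1,R、G、B值保持不变;
    扩展动态范围区域(标记1):αP=F(Gray)>=1,将R、G、B值乘以αP。"""
    if region_label == 0:
        return rgb
    r, g, b = rgb
    gray = (r + g + b) / 3       # 灰度值Gray,也可采用加权公式
    alpha = F(gray)              # 扩展系数αP=F(Gray)
    return (r * alpha, g * alpha, b * alpha)

# 假设的色调映射函数:单调非递减且F(x)>=1,值域为[1, 4]
F = lambda x: 1.0 + 3.0 * (x / 255.0)

print(expand_pixel((60, 60, 60), 0, F))     # (60, 60, 60):标准动态范围区域不增亮
print(expand_pixel((200, 180, 160), 1, F))  # 各通道乘以αP≈3.12,亮度被扩展
```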
应理解,根据本实施例提供的方法,针对标准动态范围区域和扩展动态范围区域的像素点,采用对应的扩展系数对像素点P的R、G、B值进行映射,得到扩展动态范围图中与像素点P对应的像素点P′的R、G、B值。对应标准动态范围区域的像素点,对应的扩展系数取为1,即令其R、G、B值保持不变,不对其进行增亮。也就是说,原图上对应低亮度区域的像素点,其亮度值保持不变。而针对对应扩展动态范围区域的像素点,将其R、G、B值乘以一个扩展系数,得到一组新的R、G、B值;也就是说,原图上对应高亮度区域的像素点,通过色调映射会进行增亮。而图像的动态范围为图像中像素点的最大亮度值与最小亮度值的比值,即图像中像素点的最大亮度值/图像中像素点的最小亮度值,对高亮度区域像素点进行增亮后,图像中最高的亮度值会增大,由此,图像的动态范围会增大,从而可以实现图像的动态范围扩展。同时,该方案中对原图上对应低亮度区域的像素点不对其进行增亮,也可以减小图像处理的数据量,进而加快图像动态范围扩展的速度,提高实时性。并且,需要说明的是,本申请实施例中,原图对应的阈值分割结果分为标准动态范围区域和扩展动态范围区域,不同区域的划分反映了原图不同区域的亮度信息,原图中像素点对应区域反映了原图中像素点的亮度值。因此,本申请实施例提供的方法,如图12所示,实际上是结合原图中像素点的亮度值和R、G、B值对原图中的像素点进行色调映射,以实现对原图的动态范围扩展,相对于只利用原图中像素点的亮度值或R、G、B值进行色调映射,利用的信息更丰富,动态范围扩展的效果更好。示例性地,对于背景为白纸或白墙的原图,若只基于原图中像素点的亮度值或R、G、B值确定扩展系数进行色调映射,那么由于白纸或白墙背景对应像素点的亮度值或R、G、B值较高,在进行色调映射时该部分对应的扩展系数也会比较高,按照该扩展系数对该区域进行提亮,则在色调映射后得到的图像中白纸或白墙背景亮度会特别高,不符合真实环境中的视觉效果。而本申请实施例提供的方法,结合了亮度水平信息划分图像中的标准动态范围区域和扩展动态范围区域,若白纸或白墙背景在真实环境中亮度较低,则原图中白纸或白墙背景会被划分为标准动态范围区域,从而在后续进行色调映射时可以不进行提亮,从而在色调映射后得到的图像中白纸或白墙背景的亮度会更符合真实环境中的视觉效果。
示例性地,将原图上的对应像素点的R、G、B值转为灰度值Gray,可以采用以下公式:Gray=R*0.299+G*0.587+B*0.114。
示例性地,将原图上的对应像素点的R、G、B值转为灰度值Gray,还可以采用以下公式:Gray=(R+G+B)/3。
基于上述实施例,在根据曝光图的亮度信息获取阈值分割结果时,优选曝光图为短曝光图(曝光不足的图像)或中曝光图(曝光正常的图像)。由于短曝光图或中曝光图的明暗度比较合适,尤其是图像较亮部分的信息没有丢失,由此在后续针对原图上处于扩展动态范围区域的像素点进行增亮时,可以获得更好的颜色渐变和层次;而由于本申请实施例提供的方案,对于原图上处于标准动态范围区域的像素点不进行增亮,其像素值保持不变,所以即使是短曝光图或中曝光图上较暗部分的信息有所丢失,对于本申请实施例提供的方案的动态范围扩展效果也不会有影响。而长曝光图(过度曝光的图像)亮度太高,图像较亮部分的信息将丢失,若基于长曝光图实施本申请实施例提供的方案,则动态范围扩展效果没有基于短曝光图或中曝光图的效果理想。
示例性地,参见图11,所述色调映射函数F(x)为单调非递减函数,其值域为1-N,定义域为0-255。
示例性地,若上述实施例中,采用的是单级阈值分割,只得到一个扩展动态范围区域。那么在进行动态范围扩展时,对于扩展动态范围区域内的每一个像素点P,将像素点P的灰度值Gray输入色调映射函数F(x),得到像素点对应的扩展系数F(Gray),该F(Gray)的取值范围是1-N,将像素点P的R、G、B值乘以扩展系数F(Gray),即可实现扩展动态范围区域的动态范围扩展,得到扩展动态范围图。假设显示器所支持的动态范围是原图动态范围的N倍,通过上述操作,扩展动态范围图中像素点的最大亮度值可以扩大至原图中像素点的最大亮度值的N倍,这样,扩展动态范围图的动态范围可以扩大至原图动态范围的N倍,即映射到了显示器所支持的动态范围,可以充分利用显示器的显示性能,为用户带来尽可能接近真实环境的视觉效果。
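按上述描述,色调映射函数F(x)的定义域为0-255、值域为1-N。以下给出一种满足该约束的示意构造(Python),曲线采用gamma形,曲线形状与参数均为假设:

```python
def make_tone_mapping(n, gamma=2.2):
    """构造单调非递减的色调映射函数F(x):定义域[0, 255],值域[1, n]。"""
    def F(x):
        return 1.0 + (n - 1.0) * (x / 255.0) ** gamma
    return F

F = make_tone_mapping(4)  # 假设显示器支持的动态范围是原图的N=4倍
print(F(0), F(255))       # 1.0 4.0:最大亮度可扩大至原图的4倍
```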
示例性地,若上述实施例中,采用的是多级阈值分割,得到了多个扩展动态范围区域,如D1,D2,…,Dn。那么在进行动态范围扩展时,可以对各个级别的扩展动态范围区域内的像素点,采用相同或不同的色调映射函数计算相应的扩展系数。例如,对于第1级扩展动态范围区域D1内的每一个像素点P1,将像素点P1的灰度值Gray1输入第一色调映射函数F1(x),得到像素点P1对应的扩展系数F1(Gray1),该F1(Gray1)的取值范围是1-N1。将像素点P1的R、G、B值乘以扩展系数F1(Gray1),则第1级扩展动态范围区域D1内像素点的亮度值最多可以扩大N1倍。对于第2级扩展动态范围区域D2内的每一个像素点P2,将像素点P2的灰度值Gray2输入第二色调映射函数F2(x),得到像素点P2对应的扩展系数F2(Gray2),该F2(Gray2)的取值范围是N1-N2。将像素点P2的R、G、B值乘以扩展系数F2(Gray2),则第2级扩展动态范围区域D2内像素点的亮度值最多可以扩大N2倍。依此类推,对于第n级扩展动态范围区域Dn内的每一个像素点Pn,将像素点Pn的灰度值Grayn输入第n色调映射函数Fn(x),得到像素点Pn对应的扩展系数Fn(Grayn),该Fn(Grayn)的取值范围是Nn-1-Nn,将像素点Pn的R、G、B值乘以扩展系数Fn(Grayn),则第n级扩展动态范围区域Dn内像素点的亮度值最多可以扩大Nn倍。其中,Nn≥Nn-1≥…≥N2≥N1≥1。由此,可实现所有级别的扩展动态范围区域的动态范围扩展,得到最终的扩展动态范围图。假设显示器所支持的动态范围是原图动态范围的Nn倍,通过上述操作,最终的扩展动态范围图中像素点的最大亮度值可以扩大至原图中像素点的最大亮度值的Nn倍,这样,扩展动态范围图的动态范围可以扩大至原图动态范围的Nn倍,即映射到了显示器所支持的动态范围,可以充分利用显示器的显示性能,为用户带来尽可能接近真实环境的视觉效果。
本申请实施例还提供一种扩展图像显示动态范围的装置,包括:
原图信息获取模块,用于获取原图信息。该原图信息包括原图和亮度水平信息。其中,亮度水平信息用于指示原图中像素点的亮度值。
阈值分割模块,用于基于原图信息中包括的亮度水平信息和亮度阈值,获取原图对应的阈值分割结果。该阈值分割结果包括原图中的各个像素点对应的区域类别标记。像素点对应的区域包括扩展动态范围区域和标准动态范围区域。其中,扩展动态范围区域中像素点对应的亮度值高于或等于亮度阈值,标准动态范围区域中像素点对应的亮度值低于亮度阈值。
在对原图进行动态范围扩展时,可以基于上述原图对应的阈值分割结果,对原图中的扩展动态范围区域进行动态范围扩展,从而得到原图对应的扩展动态范围图。其中,扩展动态范围图的动态范围大于原图的动态范围。
存储模块,用于将原图及原图对应的阈值分割结果进行关联保存。
本申请实施例还提供另一种扩展图像显示动态范围的装置,包括:
数据获取模块,用于获取原图及原图对应的阈值分割结果。该阈值分割结果包括原图中的各个像素点对应的区域类别标记。区域包括扩展动态范围区域和标准动态范围区域。标准动态范围区域中像素点的亮度值低于亮度阈值,扩展动态范围区域中像素点的亮度值高于或等于亮度阈值。
动态范围扩展模块,用于根据原图对应的阈值分割结果,对原图中的扩展动态范围区域进行动态范围扩展,从而生成原图对应的扩展动态范围图;该扩展动态范围图的动态范围大于原图的动态范围。
应理解,上述装置实施例具有实现上述方法实施例的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块。
本申请实施例还提供一种电子设备,电子设备包括存储器和一个或多个处理器;存储器与处理器耦合;其中,存储器中存储有计算机程序代码,计算机程序代码包括计算机指令,当计算机指令被处理器执行时,使得电子设备执行如上述方法实施例中手机执行的各个功能或者步骤。该电子设备的结构可以参考图3所示的电子设备300的结构。
在一些实施例中,电子设备包括一个或多个摄像头,摄像头用于采集原图信息。
在一些实施例中,电子设备包括通信模块,通信模块用于与其他设备进行数据传输,获取原图及原图对应的阈值分割结果。
本申请实施例还提供一种芯片系统,如图13所示,该芯片系统1000包括至少一个处理器1001和至少一个接口电路1002。
上述处理器1001和接口电路1002可通过线路互联。例如,接口电路1002可用于从其它装置(例如电子设备的存储器)接收信号。又例如,接口电路1002可用于向其它装置(例如处理器1001)发送信号。示例性的,接口电路1002可读取存储器中存储的指令,并将该指令发送给处理器1001。当所述指令被处理器1001执行时,可使得电子设备执行上述方法实施例中手机执行的各个功能或步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在上述电子设备上运行时,使得该电子设备执行上述方法实施例中手机执行的各个功能或者步骤。
本申请实施例还提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例中手机执行的各个功能或者步骤。例如,该计算机可以是上述手机。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请实施例各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:快闪存储器、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何在本申请实施例揭露的技术范围内的变化或替换,都应涵盖在本申请实施例的保护范围之内。因此,本申请实施例的保护范围应以所述权利要求的保护范围为准。

Claims (26)

  1. 一种扩展图像动态范围的方法,其特征在于,包括:
    获取原图信息,所述原图信息包括原图和亮度水平信息,所述亮度水平信息用于指示所述原图中像素点的亮度值;
    基于所述亮度水平信息和亮度阈值,获取所述原图对应的阈值分割结果;其中,所述原图对应的阈值分割结果包括所述原图中的各个像素点对应的区域类别标记;所述区域类别标记用于指示所述原图中的各个像素点对应的区域的类别;所述区域包括标准动态范围区域和扩展动态范围区域;所述标准动态范围区域中像素点的亮度值低于所述亮度阈值,所述扩展动态范围区域中像素点的亮度值高于或等于所述亮度阈值;
    将所述原图及所述原图对应的阈值分割结果进行关联保存,用于后续基于所述原图对应的阈值分割结果对所述原图中的所述扩展动态范围区域进行动态范围扩展,从而生成所述原图对应的扩展动态范围图,所述扩展动态范围图的动态范围大于所述原图的动态范围。
  2. 根据权利要求1所述的方法,其特征在于,所述亮度水平信息为在所述原图的同一场景下采集的曝光图,所述曝光图中的拍摄对象与所述原图中的拍摄对象相同;
    所述曝光图中各个像素点的亮度值指示所述原图中对应像素点的亮度值;
    所述基于所述亮度水平信息和亮度阈值,获取所述原图对应的阈值分割结果,包括:
    根据所述曝光图中各个像素点的亮度值和所述亮度阈值,获取所述原图对应的阈值分割结果。
  3. 根据权利要求2所述的方法,其特征在于,根据所述曝光图中各个像素点的亮度值和所述亮度阈值,获取所述原图对应的阈值分割结果,包括:
    根据所述曝光图中各个像素点的亮度值确定一个亮度阈值,并基于所述一个亮度阈值对所述曝光图进行单级阈值分割,得到所述原图对应的阈值分割结果;所述原图对应的阈值分割结果包括所述原图中的各个像素点对应的区域类别标记;所述区域类别标记用于指示所述原图中的各个像素点对应的区域的类别;所述区域包括一个标准动态范围区域和一个扩展动态范围区域。
  4. 根据权利要求2所述的方法,其特征在于,根据所述曝光图中各个像素点的亮度值和所述亮度阈值,获取所述原图对应的阈值分割结果,包括:
    根据所述曝光图中各个像素点的亮度值确定多个亮度阈值,并基于所述多个亮度阈值对所述曝光图进行多级阈值分割,得到所述原图对应的阈值分割结果;所述原图对应的阈值分割结果包括所述原图中的各个像素点对应的区域类别标记;所述区域类别标记用于指示所述原图中的各个像素点对应的区域的类别;所述区域包括一个标准动态范围区域和多个级别的扩展动态范围区域。
  5. 根据权利要求2-4中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述曝光图中各个像素点的亮度值,构建对应的亮度直方图;所述亮度直方图用于显示所述曝光图中亮度值分别为0,1,…,255时的像素点个数;所述亮度直方图包括x轴和y轴;所述x轴的数值表示亮度值,取值依次为0,1,…,255;所述y轴的数值表示像素点的个数;
    基于所述亮度直方图,获取所述曝光图中亮度值分别为0,1,…,k,…,255的像素点个数,记为y0,y1,…,yk,…,y255;其中,y0是所述曝光图中亮度值x为0的像素点个数,y1是所述曝光图中亮度值x为1的像素点个数,yk是所述曝光图中亮度值x为k的像素点个数,y255是所述曝光图中亮度值x为255的像素点个数;
    按亮度值x由大到小的顺序,将对应的像素点个数逐个累加,当累加至亮度值k对应像素点个数yk时累加得到的总像素点个数大于预设数量,则将亮度值k作为所述亮度阈值。
  6. 根据权利要求2-4中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述曝光图中各个像素点的亮度值,构建对应的亮度直方图;所述亮度直方图用于显示所述曝光图中亮度值分别为0,1,…,255时的像素点个数;
    基于所述亮度直方图,采用大津法确定所述亮度阈值。
  7. 根据权利要求2-4中任一项所述的方法,其特征在于,所述方法还包括:
    计算所述曝光图上所有像素点的亮度值的均值M和标准差STD,通过以下公式计算所述亮度阈值T:
    T=M+β·STD
    其中,β为标准差系数。
  8. 根据权利要求2或3所述的方法,其特征在于,所述原图对应的阈值分割结果包括所述原图中的各个像素点对应的区域类别标记,包括:
    所述阈值分割结果为数据序列;
    所述原图中的各个像素点对应的区域类别标记用所述数据序列中各个对应的元素的值表示;所述数据序列中的元素的值包括0和1;
    所述数据序列中值为0的元素在原图中对应的像素点对应所述标准动态范围区域;所述数据序列中值为1的元素在原图中对应的像素点对应所述扩展动态范围区域。
  9. 根据权利要求2或3所述的方法,其特征在于,所述原图对应的阈值分割结果包括所述原图中的各个像素点对应的区域类别标记,包括:
    所述阈值分割结果为二维矩阵;
    所述原图中的各个像素点对应的区域类别标记用所述二维矩阵中各个对应的元素的值表示;所述二维矩阵中的元素的值包括0和1;
    所述二维矩阵中值为0的元素在原图中对应的像素点对应所述标准动态范围区域;所述二维矩阵中值为1的元素在原图中对应的像素点对应所述扩展动态范围区域。
  10. 根据权利要求1所述的方法,其特征在于,在将所述原图及所述原图对应的阈值分割结果进行关联保存之前,所述方法还包括:
    设所述原图对应的阈值分割结果为I,对所述原图对应的阈值分割结果I进行降采样处理,得到经过降采样处理后的所述原图对应的阈值分割结果I′;所述将所述原图及所述原图对应的阈值分割结果进行关联保存,包括:
    将所述原图及经过降采样处理后的所述原图对应的阈值分割结果I′进行关联保存。
  11. 一种扩展图像动态范围的方法,其特征在于,包括:
    获取原图及所述原图对应的阈值分割结果,所述原图对应的阈值分割结果包括所述原图中的各个像素点对应的区域类别标记;所述区域类别标记用于指示所述原图中的各个像素点对应的区域的类别;所述区域包括标准动态范围区域和扩展动态范围区域,所述标准动态范围区域中像素点的亮度值低于亮度阈值,所述扩展动态范围区域中像素点的亮度值高于或等于所述亮度阈值;
    按照所述原图对应的阈值分割结果,对所述原图中的所述扩展动态范围区域进行动态范围扩展,从而生成所述原图对应的扩展动态范围图;所述扩展动态范围图的动态范围大于所述原图的动态范围。
  12. 根据权利要求11所述的方法,其特征在于,所述按照所述原图对应的阈值分割结果,对所述原图中的所述扩展动态范围区域进行动态范围扩展,从而生成所述原图对应的扩展动态范围图,包括:
    对所述原图中的每一个像素点P:
    根据所述原图对应的阈值分割结果中包括的所述原图中的各个像素点对应的区域类别标记,判断所述像素点P对应所述标准动态范围区域还是对应所述扩展动态范围区域;
    若所述像素点P对应所述标准动态范围区域,则像素点对应的扩展系数αP=1,直接将所述像素点P的R、G、B值作为所述扩展动态范围图中与所述像素点P对应的像素点P′的R、G、B值;
    若所述像素点对应所述扩展动态范围区域,则像素点对应的扩展系数αP≥1,将所述像素点P的R、G、B值乘以所述扩展系数αP,得到一组的新的R、G、B值;将所述新的R、G、B值作为所述扩展动态范围图中与所述像素点P对应的像素点P′的R、G、B值。
  13. 根据权利要求12所述的方法,其特征在于,所述若所述像素点对应所述扩展动态范围区域,则对应的扩展系数αP≥1,包括:
    若所述像素点对应所述扩展动态范围区域,则将所述像素点P的R、G、B值转为灰度值Gray;
    根据色调映射函数F(x)获取所述灰度值Gray对应的色调映射函数值F(Gray);其中,所述色调映射函数F(x)为单调非递减函数,且F(x)≥1;
    将所述色调映射函数值F(Gray)作为所述像素点对应的扩展系数αP
  14. 根据权利要求11所述的方法,其特征在于,所述按照所述原图对应的阈值分割结果,对所述原图中的所述扩展动态范围区域进行动态范围扩展,从而生成所述原图对应的扩展动态范围图,包括:
    所述原图对应的阈值分割结果为经过降采样处理后的所述原图对应的阈值分割结果I′;
    对所述原图对应的阈值分割结果I′进行上采样处理,得到经过上采样处理后的所述原图对应的阈值分割结果I〞;按照所述经过上采样处理后的所述原图对应的阈值分割结果I〞,对所述原图中的所述扩展动态范围区域进行动态范围扩展,以得到所述扩展动态范围图。
  15. 一种电子设备,其特征在于,所述电子设备包括存储器和一个或多个处理器;所述存储器与所述处理器耦合;其中,所述存储器中存储有计算机程序代码,所述计算机程序代码包括计算机指令,当所述计算机指令被所述处理器执行时,使得所述电子设备执行如权利要求1-14中任一项所述的方法。
  16. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在计算机上运行时,使得所述计算机执行如权利要求1-14中任一项所述的方法。
  17. 一种扩展图像动态范围的方法,其特征在于,包括:
    获取原图信息,所述原图信息包括原图和亮度水平信息,所述亮度水平信息用于指示所述原图中像素点的亮度值;
    基于所述亮度水平信息和亮度阈值,获取所述原图对应的阈值分割结果;其中,所述原图对应的阈值分割结果包括所述原图中的各个像素点对应的区域类别标记;所述区域类别标记用于指示所述原图中的各个像素点对应的区域的类别;所述区域包括标准动态范围区域和扩展动态范围区域;所述标准动态范围区域中像素点的亮度值低于所述亮度阈值,所述扩展动态范围区域中像素点的亮度值高于或等于所述亮度阈值;
    将所述原图及所述原图对应的阈值分割结果进行关联保存,用于后续基于所述原图对应的阈值分割结果,基于所述原图中各个像素点的亮度值与RGB值进行色调映射,对所述原图中的所述扩展动态范围区域进行动态范围扩展,从而生成所述原图对应的扩展动态范围图,所述扩展动态范围图的动态范围大于所述原图的动态范围。
  18. 根据权利要求17所述的方法,其特征在于,所述亮度水平信息为在所述原图的同一场景下采集的曝光图,所述曝光图中的拍摄对象与所述原图中的拍摄对象相同;所述曝光图中各个像素点的亮度值指示所述原图中对应像素点的亮度值;所述基于所述亮度水平信息和亮度阈值,获取所述原图对应的阈值分割结果,包括:
    根据所述曝光图中各个像素点的亮度值和所述亮度阈值,获取所述原图对应的阈值分割结果。
  19. 根据权利要求18所述的方法,其特征在于,根据所述曝光图中各个像素点的亮度值和所述亮度阈值,获取所述原图对应的阈值分割结果,包括:
    根据所述曝光图中各个像素点的亮度值确定一个亮度阈值,并基于所述一个亮度阈值对所述曝光图进行单级阈值分割,得到所述原图对应的阈值分割结果;所述原图对应的阈值分割结果包括所述原图中的各个像素点对应的区域类别标记;所述区域类别标记用于指示所述原图中的各个像素点对应的区域的类别;所述区域包括一个标准动态范围区域和一个扩展动态范围区域。
  20. 根据权利要求18所述的方法,其特征在于,根据所述曝光图中各个像素点的亮度值和所述亮度阈值,获取所述原图对应的阈值分割结果,包括:
    根据所述曝光图中各个像素点的亮度值确定多个亮度阈值,并基于所述多个亮度阈值对所述曝光图进行多级阈值分割,得到所述原图对应的阈值分割结果;所述原图对应的阈值分割结果包括所述原图中的各个像素点对应的区域类别标记;所述区域类别标记用于指示所述原图中的各个像素点对应的区域的类别;所述区域包括一个标准动态范围区域和多个级别的扩展动态范围区域。
  21. 根据权利要求18-20中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述曝光图中各个像素点的亮度值,构建对应的亮度直方图;所述亮度直方图用于显示所述曝光图中亮度值分别为0,1,…,255时的像素点个数;所述亮度直方图包括x轴和y轴;所述x轴的数值表示亮度值,取值依次为0,1,…,255;所述y轴的数值表示像素点的个数;
    基于所述亮度直方图,获取所述曝光图中亮度值分别为0,1,…,k,…,255的像素点个数,记为y0,y1,…,yk,…,y255;其中,y0是所述曝光图中亮度值x为0的像素点个数,y1是所述曝光图中亮度值x为1的像素点个数,yk是所述曝光图中亮度值x为k的像素点个数,y255是所述曝光图中亮度值x为255的像素点个数;
    按亮度值x由大到小的顺序,将对应的像素点个数逐个累加,当累加至亮度值k对应像素点个数yk时累加得到的总像素点个数大于预设数量,则将亮度值k作为所述亮度阈值。
  22. 根据权利要求18-20中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述曝光图中各个像素点的亮度值,构建对应的亮度直方图;所述亮度直方图用于显示所述曝光图中亮度值分别为0,1,…,255时的像素点个数;
    基于所述亮度直方图,采用大津法确定所述亮度阈值。
  23. 根据权利要求18-20中任一项所述的方法,其特征在于,所述方法还包括:
    计算所述曝光图上所有像素点的亮度值的均值M和标准差STD,通过以下公式计算所述亮度阈值T:
    T=M+β·STD
    其中,β为标准差系数。
  24. 根据权利要求18或19所述的方法,其特征在于,所述原图对应的阈值分割结果包括所述原图中的各个像素点对应的区域类别标记,包括:
    所述阈值分割结果为数据序列;
    所述原图中的各个像素点对应的区域类别标记用所述数据序列中各个对应的元素的值表示;所述数据序列中的元素的值包括0和1;
    所述数据序列中值为0的元素在原图中对应的像素点对应所述标准动态范围区域;所述数据序列中值为1的元素在原图中对应的像素点对应所述扩展动态范围区域。
  25. 根据权利要求18或19所述的方法,其特征在于,所述原图对应的阈值分割结果包括所述原图中的各个像素点对应的区域类别标记,包括:
    所述阈值分割结果为二维矩阵;
    所述原图中的各个像素点对应的区域类别标记用所述二维矩阵中各个对应的元素的值表示;所述二维矩阵中的元素的值包括0和1;
    所述二维矩阵中值为0的元素在原图中对应的像素点对应所述标准动态范围区域;所述二维矩阵中值为1的元素在原图中对应的像素点对应所述扩展动态范围区域。
  26. 根据权利要求17所述的方法,其特征在于,在将所述原图及所述原图对应的阈值分割结果进行关联保存之前,所述方法还包括:
    设所述原图对应的阈值分割结果为I,对所述原图对应的阈值分割结果I进行降采样处理,得到经过降采样处理后的所述原图对应的阈值分割结果I′;所述将所述原图及所述原图对应的阈值分割结果进行关联保存,包括:
    将所述原图及经过降采样处理后的所述原图对应的阈值分割结果I′进行关联保存。
PCT/CN2023/088196 2022-07-14 2023-04-13 一种扩展图像动态范围的方法及电子设备 WO2024011976A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP23762126.3A EP4328852A1 (en) 2022-07-14 2023-04-13 Method for expanding dynamic range of image and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210827757.2A CN114897745B (zh) 2022-07-14 2022-07-14 一种扩展图像动态范围的方法及电子设备
CN202210827757.2 2022-07-14

Publications (1)

Publication Number Publication Date
WO2024011976A1 true WO2024011976A1 (zh) 2024-01-18

Family

ID=82730251

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/088196 WO2024011976A1 (zh) 2022-07-14 2023-04-13 一种扩展图像动态范围的方法及电子设备

Country Status (3)

Country Link
EP (1) EP4328852A1 (zh)
CN (1) CN114897745B (zh)
WO (1) WO2024011976A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114897745B (zh) * 2022-07-14 2022-12-20 荣耀终端有限公司 Method for expanding dynamic range of image and electronic device
CN115128570B (zh) * 2022-08-30 2022-11-25 北京海兰信数据科技股份有限公司 Radar image processing method, apparatus, and device
CN115760652B (zh) * 2023-01-06 2023-06-16 荣耀终端有限公司 Method for expanding dynamic range of image and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295194A (zh) * 2013-05-15 2013-09-11 中山大学 Tone mapping method with controllable brightness and detail preservation
WO2019183813A1 (zh) * 2018-03-27 2019-10-03 华为技术有限公司 Photographing method and device
WO2021036991A1 (zh) * 2019-08-30 2021-03-04 华为技术有限公司 High dynamic range video generation method and apparatus
CN114897745A (zh) * 2022-07-14 2022-08-12 荣耀终端有限公司 Method for expanding dynamic range of image and electronic device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014170886A1 (en) * 2013-04-17 2014-10-23 Digital Makeup Ltd System and method for online processing of video images in real time
CN106017694A (zh) * 2016-05-31 2016-10-12 成都德善能科技有限公司 Temperature measurement system based on an image sensor
CN106886386B (zh) * 2017-01-23 2019-06-04 苏州科达科技股份有限公司 Method for generating a high-dynamic image from a low-dynamic image
CN108305232B (zh) * 2018-03-01 2019-07-12 电子科技大学 Single-frame high dynamic range image generation method
CN115078319A (zh) * 2018-06-27 2022-09-20 因纽美瑞克斯公司 Light-sheet fluorescence microscopy imaging apparatus and detection method for cleared-droplet imaging
CN110599433B (zh) * 2019-07-30 2023-06-06 西安电子科技大学 Dual-exposure image fusion method based on dynamic scenes
CN112150399B (zh) * 2020-09-27 2023-03-07 安谋科技(中国)有限公司 Image enhancement method based on wide dynamic range and electronic device
CN113344810A (zh) * 2021-05-31 2021-09-03 新相微电子(上海)有限公司 Image enhancement method based on dynamic data distribution
CN113592727A (zh) * 2021-06-30 2021-11-02 国网吉林省电力有限公司延边供电公司 Infrared image enhancement method for power equipment based on the NSST domain
CN113360964A (zh) * 2021-08-09 2021-09-07 武汉理工大学 Robot positioning method guided by convergent binocular vision under high dynamic range
CN113691724B (zh) * 2021-08-24 2023-04-28 Oppo广东移动通信有限公司 HDR scene detection method and apparatus, terminal, and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Ying, Wang Fengwei, Liu Weihua, Ai Da, Li Yun, Yang Fanchao: "High Dynamic Range Imaging Algorithm based on Luminance Partition Fuzzy Fusion", Journal of Computer Applications, vol. 40, no. 1, 10 January 2020, pages 233-238, XP093098601, ISSN: 1001-9081, DOI: 10.11772/j.issn.1001-9081.2019061032 *

Also Published As

Publication number Publication date
CN114897745A (zh) 2022-08-12
EP4328852A1 (en) 2024-02-28
CN114897745B (zh) 2022-12-20

Similar Documents

Publication Publication Date Title
WO2024011976A1 (zh) Method for expanding dynamic range of image and electronic device
US9692959B2 (en) Image processing apparatus and method
WO2021036715A1 (zh) Image-text fusion method and apparatus, and electronic device
CN117063461A (zh) Image processing method and electronic device
CN113706414B (zh) Training method for video optimization model and electronic device
WO2021190348A1 (zh) Image processing method and electronic device
CN110930329A (zh) Starry-sky image processing method and apparatus
CN113596428A (zh) Method and apparatus for obtaining mapping curve parameters
CN114096994A (zh) Image alignment method and apparatus, electronic device, and storage medium
CN114463191B (zh) Image processing method and electronic device
CN110570370B (zh) Image information processing method and apparatus, storage medium, and electronic device
CN113538227B (zh) Image processing method based on semantic segmentation and related device
CN111767016B (zh) Display processing method and apparatus
WO2023011302A1 (zh) Photographing method and related apparatus
CN117132515A (zh) Image processing method and electronic device
CN115150542B (zh) Video anti-shake method and related device
WO2022115996A1 (zh) Image processing method and device
CN114172596A (zh) Channel noise detection method and related apparatus
CN115691370A (zh) Display control method and related apparatus
CN114793283A (zh) Image encoding method, image decoding method, terminal device, and readable storage medium
CN116453131B (zh) Document image correction method, electronic device, and storage medium
CN115760652B (zh) Method for expanding dynamic range of image and electronic device
CN117201930B (zh) Photographing method and electronic device
CN116205822B (zh) Image processing method, electronic device, and computer-readable storage medium
CN116723416B (zh) Image processing method and electronic device

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2023762126

Country of ref document: EP

Effective date: 20230907