WO2022000397A1 - Low-illumination image enhancement method, device and computer equipment - Google Patents

Low-illumination image enhancement method, device and computer equipment

Info

Publication number
WO2022000397A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
weight map
low
virtual exposure
exposure
Prior art date
Application number
PCT/CN2020/099841
Other languages
English (en)
French (fr)
Inventor
王文成
吴小进
高在瑞
Original Assignee
潍坊学院 (Weifang University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 潍坊学院 (Weifang University)
Priority to PCT/CN2020/099841 priority Critical patent/WO2022000397A1/zh
Publication of WO2022000397A1 publication Critical patent/WO2022000397A1/zh

Classifications

    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present application belongs to the technical field of image processing, and in particular relates to a low-illumination image enhancement method, device and computer equipment.
  • Digital image processing systems are widely used in industrial production, video surveillance, intelligent transportation, remote sensing monitoring and other fields, and play an important role in people's production and life and military fields.
  • However, under poor lighting conditions such as indoors, at night, or in dark weather, the light reflected from object surfaces is weak, and captured images suffer color distortion and heavy noise, seriously degrading image quality.
  • Low-illumination environments such as dim light or night not only impair human visual perception but also degrade the target-recognition accuracy of downstream machine systems, and can even paralyze those systems.
  • The traditional infrared fill-light scheme can obtain a clear image in a completely dark environment, but it forms only a monochrome grayscale image: color information is lost and noise is high. The detail that each frame captures for different regions of the scene depends on the exposure level of the acquisition device. With low exposure, details in highlight regions are visible but dark-region details are severely lost; conversely, with high exposure, dark-region details are visible but highlight information is lost to overexposure. Enhancing the details of low-illumination images under poor lighting and restoring the scene's color information as faithfully as possible has therefore become an urgent need in many fields.
  • In the related art, multi-exposure fusion technology is used for image enhancement: by fusing multiple images taken at different exposures, it can generate a new image rich in detail and information, giving users high-quality images. However, because this method requires multiple images of the same scene, it is limited by its application scenarios. To address this limitation, the present application provides a low-illumination image enhancement method, apparatus, and computer equipment.
  • In a first aspect, the present application provides a low-illumination image enhancement method, including:
  • acquiring a single low-illumination original image;
  • constructing a virtual exposure enhancement function, and processing the single low-illumination original image through the virtual exposure enhancement function to obtain a virtual exposure image sequence;
  • fusing the virtual exposure image sequence to obtain an enhanced image.
  • Further, fusing the virtual exposure image sequence to obtain an enhanced image includes:
  • generating a weight map for each image in the virtual exposure image sequence according to information measurement factors;
  • performing Laplacian pyramid decomposition on the weight maps to obtain multi-scale weight-map decomposition images;
  • performing Gaussian pyramid decomposition on each image in the virtual exposure image sequence to obtain multi-resolution virtual-exposure decomposition images;
  • fusing the multi-scale weight-map decomposition images with the multi-resolution virtual-exposure decomposition images of the corresponding scales to obtain the enhanced image.
  • the information measurement factor includes at least one of contrast, saturation, and saliency.
  • Further, the information measurement factors include contrast, saturation, and saliency, and generating the weight map of each image in the virtual exposure image sequence according to the information measurement factors includes:
  • calculating a contrast sub-weight map of each image according to its contrast;
  • calculating a saturation sub-weight map of each image according to its saturation;
  • calculating a saliency sub-weight map of each image according to its saliency;
  • normalizing the contrast, saturation, and saliency sub-weight maps to obtain the weight map corresponding to each image.
  • Further, calculating the contrast sub-weight map of each image according to its contrast includes:
  • converting each image into a grayscale image;
  • normalizing the grayscale pixel values to the interval [0, 1];
  • applying Laplacian filtering to the normalized images to obtain the contrast sub-weight map of each image.
  • Further, calculating the saturation sub-weight map of each image according to its saturation includes:
  • extracting the R, G, and B color components of each image;
  • computing the mean of the R, G, and B components for each pixel;
  • computing the color standard deviation of the single image from these means to determine the saturation coefficient;
  • obtaining the saturation sub-weight map of each image from the saturation coefficient.
  • Further, calculating the saliency sub-weight map of each image according to its saliency includes:
  • computing the global mean of each image's fusion source over the three channels of the Lab color space;
  • applying Gaussian blurring to the fusion source in the Lab color space;
  • obtaining the saliency sub-weight map corresponding to each image's fusion source from the global mean and the Gaussian-blurred fusion source.
  • Further, constructing the virtual exposure enhancement function includes:
  • setting control parameters;
  • obtaining an image grayscale transformation function;
  • constructing the virtual exposure enhancement function from the control parameters and the image grayscale transformation function.
  • Further, setting the control parameters includes:
  • obtaining the control parameter with the largest exposure degree;
  • decomposing the control parameter with the largest exposure degree into a sequence to obtain the control parameter of each image.
  • Further, fusing the multi-scale weight-map decomposition images with the multi-resolution virtual-exposure decomposition images of the corresponding scales to obtain the enhanced image includes:
  • performing a weighted summation of the multi-scale weight-map decomposition images and the corresponding-scale multi-resolution virtual-exposure decomposition images to obtain a new fused Laplacian pyramid;
  • reconstructing the new Laplacian pyramid to obtain the enhanced image.
  • In a second aspect, the present application provides a low-illumination image enhancement device, including:
  • an acquisition module, configured to acquire a single low-illumination original image;
  • a virtual exposure image sequence generation module, configured to construct a virtual exposure enhancement function and to process the single low-illumination original image through the virtual exposure enhancement function to obtain a virtual exposure image sequence;
  • a fusion module, configured to fuse the virtual exposure image sequence to obtain an enhanced image.
  • In a third aspect, the present application provides a computer device including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to execute the steps of any method of the first aspect.
  • In the embodiments of the present application, a single low-illumination original image is acquired, a virtual exposure enhancement function is constructed, the single image is processed through the function to obtain a virtual exposure image sequence, and the sequence is fused to obtain an enhanced image. No image calibration is needed, and there is no need to acquire multiple images or record each image's exposure time; the computation is simple and the application is flexible, extending to video surveillance, scene restoration, and other fields.
  • FIG. 1 is a flowchart of a low-illumination image enhancement method provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of a low-illumination image enhancement method provided by another embodiment of the present application.
  • FIG. 3 is a flowchart of a low-illumination image enhancement method provided by another embodiment of the present application.
  • FIG. 4 is a functional structural diagram of a low-illumination image enhancement apparatus provided by an embodiment of the present application.
  • FIG. 1 is a flowchart of a low-light image enhancement method provided by an embodiment of the present application. As shown in FIG. 1, the method includes:
  • S11: acquiring a single low-illumination original image;
  • S12: constructing a virtual exposure enhancement function, and processing the single low-illumination original image through the virtual exposure enhancement function to obtain a virtual exposure image sequence;
  • S13: fusing the virtual exposure image sequence to obtain an enhanced image.
  • The detail that each frame captures for different regions of the scene depends on the exposure level of the acquisition device.
  • The traditional solution is multi-exposure fusion, which fuses images of the same scene at different exposures into a new, detail-rich image. It overcomes the limitation that the dynamic range of ordinary digital cameras and display equipment is narrower than that of real scenes, giving users high-quality images. However, its application scenarios are restricted, and since a single image contains far less information than a multi-exposure sequence, the camera response curve is difficult to estimate from a single image in the way it can be from multiple images.
  • In this embodiment, a single low-illumination original image is converted into a virtual exposure image sequence by constructing a virtual exposure enhancement function, and multi-image fusion is then performed on the sequence to obtain an enhanced image. There is no need to perform image calibration or to acquire multiple images and record each image's exposure time; the computation is simple and the method applies flexibly to video surveillance, scene restoration, and other fields.
  • An embodiment of the present invention provides another low-light image enhancement method. As shown in the flowchart of FIG. 2, the method generates weight maps from information measurement factors, decomposes the weight maps and the virtual exposure images into pyramids, and fuses them.
  • In some embodiments, the information measurement factors include, but are not limited to, contrast, saturation, and saliency.
  • The weight map corresponding to each image is obtained by computing and then normalizing the contrast, saturation, and saliency sub-weight maps.
  • Since the virtual exposure image sequence is derived from the same image, the target content of the images is highly correlated, but each image has a different focus. A properly exposed image can present rich texture detail and color information. To make the fused image match the characteristics of the human visual system, among these virtual exposure images small weights should be assigned to the smooth and unsaturated regions formed by over- or under-exposure, while well-exposed, detail-rich regions are assigned larger weights.
  • Calculating the contrast sub-weight map of each image according to its contrast includes converting each image to a grayscale image, normalizing the pixel values to [0, 1], and applying Laplacian filtering to obtain the contrast sub-weight map.
  • Contrast reflects the amount of image detail: the greater the contrast, the better the detail rendition and the easier it is for the human eye to distinguish. The absolute value of the Laplacian-filtered response is taken as the contrast factor, and the weight coefficient is obtained from the image's edge variation. The specific expression and filter template are:
  • C = h * I, with h the standard 3×3 Laplacian template [0 1 0; 1 -4 1; 0 1 0], where C is the contrast, I is the image whose contrast is sought, and h is the Laplacian filter.
  • Since the contrast computed from this weight coefficient mainly distinguishes how sharply a point jumps relative to its surroundings, the absolute value of the filtered pixel at each position is taken as the final contrast parameter.
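The grayscale conversion, [0, 1] normalization, and absolute Laplacian filtering above can be sketched as follows. This is a minimal illustration using SciPy; the function name and the channel-mean grayscale conversion are assumptions, not specified by the patent.

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Laplacian template h, used for the contrast factor C = |h * I|.
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def contrast_subweight(rgb: np.ndarray) -> np.ndarray:
    """Absolute Laplacian response of the normalized grayscale image."""
    gray = rgb.astype(np.float64).mean(axis=2)  # grayscale via channel mean (assumption)
    gray = gray / 255.0                         # normalize to [0, 1]
    return np.abs(convolve(gray, LAPLACIAN, mode="nearest"))
```

Flat regions get near-zero weight while edges (gray-level jumps) receive large weight, matching the "jump degree relative to the surroundings" described above.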
  • Calculating the saturation sub-weight map of each image according to its saturation includes extracting the R, G, and B components, computing the per-pixel mean of the three components, determining the saturation coefficient from the per-image color standard deviation, and obtaining the saturation sub-weight map from that coefficient.
  • Saturation is an important indicator of how vivid an image is and represents the vividness of its colors; the higher the saturation, the more vivid the image appears. Saturation quantifies the RGB channels of each pixel and is obtained by computing the standard deviation of the three color channels: first extract the R, G, and B components, then compute the mean of the RGB components for each pixel, and finally compute the color standard deviation of the single image to determine the saturation coefficient S_ij,k.
  • The specific calculation is μ = (I_R + I_G + I_B) / 3 and S_ij,k = sqrt(((I_R − μ)² + (I_G − μ)² + (I_B − μ)²) / 3), where ij,k denotes the pixel at position (i, j) of the k-th image in the multi-exposure sequence, I_R, I_G, I_B are the pixel values of the R, G, B color channels, and μ is their mean.
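The saturation steps above (per-pixel RGB mean, then the color standard deviation S_ij,k) can be sketched as:

```python
import numpy as np

def saturation_subweight(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel standard deviation of the R, G, B channels:
    mu = (R + G + B) / 3, S = sqrt(mean of squared deviations from mu)."""
    rgb = rgb.astype(np.float64)
    mu = rgb.mean(axis=2, keepdims=True)            # mean of R, G, B per pixel
    return np.sqrt(((rgb - mu) ** 2).mean(axis=2))  # color standard deviation
```

Gray pixels (R = G = B) get zero saturation; a pure-color pixel gets the maximum value for its intensity.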
  • Calculating the saliency sub-weight map of each image according to its saliency includes computing the global mean of each image's fusion source over the three Lab channels, Gaussian-blurring the fusion source in the Lab color space, and obtaining the saliency sub-weight map from the global mean and the blurred fusion source.
  • Saliency features can accurately describe how important a pixel is relative to its neighborhood, so a fusion weight map constructed from saliency effectively highlights the important parts of each fusion source without introducing noise.
  • The construction of the saliency sub-weight map can be expressed as A_ij,k = ||I_u,k − I_g,k||, where A_ij,k is the saliency sub-weight map corresponding to fusion source I_k, I_u,k is the global mean of the fusion source over the three channels of the Lab color space, and I_g,k is the fusion source after Gaussian blurring in Lab space.
  • The saliency map highlights the salient regions of the image and enhances their contrast with adjacent regions, thereby improving the image's global contrast.
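The saliency construction above can be sketched as follows. The sketch assumes the input is already converted to the Lab color space and uses an illustrative Gaussian sigma, since the description specifies a filter cutoff frequency (ω_g = π/2.75) rather than a sigma:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_subweight(lab: np.ndarray, sigma: float = 2.75) -> np.ndarray:
    """A_ij,k = ||I_u,k - I_g,k||: per-pixel distance between the global
    channel means and the Gaussian-blurred image.  `lab` is an H x W x 3
    array assumed to be in the Lab color space; sigma is illustrative."""
    mean = lab.reshape(-1, 3).mean(axis=0)  # global mean of each channel
    blurred = np.stack(
        [gaussian_filter(lab[..., c], sigma) for c in range(3)], axis=-1
    )
    return np.linalg.norm(blurred - mean, axis=2)
```

A constant image has zero saliency everywhere; regions whose blurred color differs strongly from the global mean are marked salient.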
  • The final weight W is the joint product of the contrast, saturation, and saliency sub-weights, so all three constrain the final weight simultaneously. The three information measurement factors are multiplied together, and W is then normalized to ensure that a weight coefficient is applied to every pixel value.
  • The weight map is generated from the three information measurement factors, and the n virtually exposed images of the sequence are fused. The weight formula is W_ij,k = (C_ij,k)^(ω_c) · (S_ij,k)^(ω_s) · (A_ij,k)^(ω_e).
  • Here ij,k denotes the pixel at position (i, j) of the k-th image in the multi-exposure sequence; C_ij,k, S_ij,k, and A_ij,k are the contrast, saturation, and saliency of that pixel; and ω_c, ω_s, and ω_e control the degree of influence of the contrast measurement factor C, the saturation measurement factor S, and the saliency measurement factor A on the scalar weight map W. Since the weight map is equally important for every fusion source, some embodiments take ω_c = ω_s = ω_e = 1.
  • Because every pixel of every image carries a different weight, weights that change too quickly create "seams" in the fused image and easily cause discrete halo artifacts. To avoid this, the Laplacian pyramid is used to decompose the images, and image fusion is realized in a multi-resolution manner.
  • The N virtual exposure images are each decomposed into Gaussian pyramids, and the N weight maps are each decomposed into Laplacian pyramids, yielding images and weight maps at different resolutions.
  • G and L denote the Gaussian and Laplacian pyramid operations respectively; the l-th Laplacian pyramid level of image A is written L{A}_l, and the l-th Gaussian pyramid level of image B is written G{B}_l.
  • The specific implementation of S24 includes, but is not limited to, the following:
  • A weighted summation is performed on each layer from the weights of the weight-map pyramid and the Laplacian pyramid coefficients at the corresponding positions, giving the new fused Laplacian pyramid: L{R}_l^(ij) = Σ_{k=1}^{N} W̄_{ij,k}^{l} · L{I_k}_l^(ij).
  • Here N is the number of input images, I denotes the input images at different exposures, ij denotes the pixel (i, j), W̄ denotes the normalized weight, and l is the pyramid decomposition level (0 ≤ l ≤ M); for example, the highest level is 5.
  • Finally, the fused Laplacian pyramid is reconstructed into the enhanced image R = Σ_l L{R}_l ↑^d, where ↑^d is the up-sampling operator and d = 2^(l−1) is the sampling factor.
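A minimal pyramid-fusion sketch is given below. One hedge: the text above decomposes the images into Gaussian pyramids and the weight maps into Laplacian pyramids, while the classical exposure-fusion formulation does the opposite (Gaussian pyramids of the weights, Laplacian pyramids of the images); the sketch follows the classical convention, and its helper functions use simple blur-and-subsample pyramids rather than any particular library's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(gaussian_filter(pyr[-1], 1.0)[::2, ::2])  # blur + subsample
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for l in range(levels - 1):
        up = zoom(gp[l + 1], 2, order=1)[: gp[l].shape[0], : gp[l].shape[1]]
        lp.append(gp[l] - up)                 # band-pass detail at level l
    lp.append(gp[-1])                         # coarsest residual
    return lp

def fuse(images, weights, levels=3):
    """L{R}_l = sum_k G{W_k}_l * L{I_k}_l, then reconstruct R by upsampling."""
    fused = [np.zeros_like(lvl) for lvl in laplacian_pyramid(images[0], levels)]
    for img, w in zip(images, weights):
        lw = gaussian_pyramid(w, levels)      # smoothed weights per level
        li = laplacian_pyramid(img, levels)   # image details per level
        for l in range(levels):
            fused[l] = fused[l] + lw[l] * li[l]
    out = fused[-1]                           # reconstruct from coarse to fine
    for l in range(levels - 2, -1, -1):
        out = zoom(out, 2, order=1)[: fused[l].shape[0], : fused[l].shape[1]] + fused[l]
    return out
```

Smoothing the weights across scales is exactly what suppresses the "seams" and halos that per-pixel blending would produce.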
  • Traditional multi-exposure fusion algorithms usually apply no image transformation to the source images and do not consider the correlation between pixels; instead, they directly fuse the corresponding pixels of the source images to obtain a new image. Such an operation cannot represent the features of the source images well, and the loss of texture detail is severe.
  • The multi-resolution fusion approach effectively avoids the halo effect; increasing the number of layers improves the overall visual effect, highlights scene detail, enhances realism, and alleviates over-enhancement. Halos and color distortion in the enhancement result are effectively avoided, and the fidelity of visual information is improved adaptively.
  • The basic parameters in the default settings do not depend on external input to the system; the parameters to be transformed are computed automatically from the image's content features during algorithm execution, giving good adaptability, robustness, and a degree of universality.
  • The databases used include a self-built database (300 images in total) and the LDR, IEC, and PMEA databases. The scenes include indoor, outdoor, city, landscape, night, and cloudy conditions, covering cases where some regions of the image are low-illumination and the overall dynamic range is wide.
  • Each image is divided into regions from different angles, and the before/after contrast of enhancement is shown across different regions of the same image. Low-light regions are enhanced while highlight regions are suppressed; the image's colors are natural, details are clear, and targets hidden in low light become visible. The method provided by this embodiment can thus reduce the effect of poor illumination on image quality.
  • This embodiment is compared with the results of several mainstream algorithms in terms of both subjective visual evaluation and objective quantitative analysis. The comparison shows that the method clearly improves color and contrast, its visual effect is noticeably stronger than the other methods, image details are recovered clearly, and color is well maintained. The overall color and contrast of the enhanced image are close to a normally illuminated reference image, meeting human visual requirements.
  • An embodiment of the present invention provides another low-light image enhancement method, as shown in the flowchart of FIG. 3.
  • Setting the control parameters specifically includes obtaining the control parameter with the largest exposure degree and decomposing it into a sequence to obtain the control parameter of each image.
  • In the fusion model, I_i is the i-th image to be fused in the image sequence, I_e is the fused image, and F{·} is the fusion function.
  • The simplest processing method is to use a tone-mapping operator to perform grayscale enhancement directly on the original image. Each grayscale enhancement is equivalent to one virtual exposure: the gray range of a particular region is stretched and its brightness enhanced in a targeted manner, generating each I_i from the original image I_0.
  • The processed image is I_i = f_i(I_0), where f_i should satisfy monotonicity and boundedness.
  • Because this brightness transformation is not the result of a real exposure, f_i is called a virtual exposure enhancement function.
  • A control parameter k must be set to obtain different transformation functions and hence different exposure degrees. The input-output relation of the virtual exposure enhancer set by the image brightness control parameter is written y = f(x; k), where x is the input quantity, y is the output quantity, and k is the control coefficient.
  • Each pixel value of the i-th virtual image I_i can then be expressed through the i-th exposure control parameter k_i; substituting k_1, k_2, k_3, ..., k_N into the virtual exposure enhancement function yields N virtually exposed enhanced images.
  • The brightness of a generated image depends on its control parameter. The k value is estimated by taking the gray-level mean of the virtually exposed image as the maximum expected gray level; the estimate is then constrained by maximum and minimum thresholds, denoted k_H and k_L respectively, to give the final estimate.
  • The k value at which the enhanced image's gray-level mean reaches μ is the control parameter with the maximum exposure degree.
  • N is the number of virtual exposure sequence images to be obtained; decomposing the maximum-exposure control parameter yields the N image sequences.
  • In one embodiment the number of images is set to 5, i.e., 5 virtual exposure images with different exposure values are obtained.
  • Using the virtual exposure enhancement function to virtually expose the low-illumination image is equivalent to expanding and stretching the brightness of particular regions in a targeted manner: a low-exposure image better represents the highlight regions of the real scene, while a high-exposure image better represents its low-light regions.
  • The low-illumination original image also needs to be added to the image sequence to be fused, so that the enhanced image looks more natural overall.
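The exact transformation function f(x; k) is not reproduced in this excerpt (its formulas appear only as image placeholders), so the sketch below substitutes a simple monotonic, bounded gamma-style mapping x^(1/k) as a hypothetical stand-in, only to illustrate how a control-parameter sequence k_1, ..., k_N yields a virtual exposure sequence with the original image appended, as the text requires:

```python
import numpy as np

def virtual_exposure(x: np.ndarray, k: float) -> np.ndarray:
    """Hypothetical virtual exposure enhancer: monotonic and bounded on [0, 1].
    The patent's actual transformation function is NOT reproduced here."""
    return np.clip(x, 0.0, 1.0) ** (1.0 / k)

def virtual_sequence(img: np.ndarray, k_max: float, n: int = 5) -> list:
    """Decompose the maximum-exposure control parameter k_max into n values
    k_1..k_N and generate n virtually exposed images, plus the original."""
    ks = np.linspace(1.0, k_max, n)          # k_1 ... k_N (illustrative spacing)
    seq = [virtual_exposure(img, k) for k in ks]
    seq.append(img.copy())                   # add the original image to the sequence
    return seq
```

With k ≥ 1 the mapping only brightens, so larger k plays the role of a longer virtual exposure that reveals low-light regions.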
  • An embodiment of the present invention provides a low-illumination image enhancement device; as shown in the functional structure diagram of FIG. 4, the device includes:
  • an acquisition module 41, configured to acquire a single low-illumination original image;
  • a virtual exposure image sequence generation module 42, configured to construct a virtual exposure enhancement function and to process the single low-illumination original image through it to obtain a virtual exposure image sequence;
  • a fusion module 43, configured to fuse the virtual exposure image sequence to obtain an enhanced image.
  • the fusion module 43 includes:
  • a weight map generating unit configured to generate a weight map of each image in the virtual exposure image sequence according to the information measurement factor
  • the first decomposition unit is used to perform Laplacian pyramid decomposition on the weight map to obtain a multi-scale weight map decomposition image
  • the second decomposition unit is used to perform Gaussian pyramid decomposition on each image in the virtual exposure image sequence to obtain a multi-resolution virtual exposure map decomposition image;
  • the fusion unit is used for weighted summation of the multi-scale weight map decomposition image and the multi-resolution virtual exposure map decomposition image of the corresponding scale to obtain a new Laplacian pyramid after fusion;
  • the reconstruction unit is used to reconstruct the new Laplacian pyramid to obtain an enhanced image.
  • the information measurement factor includes at least one of contrast, saturation, and saliency.
  • the virtual exposure image sequence generation module 42 includes a virtual exposure enhancement function construction module and a processing module, where the processing module is configured to process a single low-illuminance original image through the virtual exposure enhancement function to obtain a virtual exposure image sequence.
  • the module for constructing a virtual exposure enhancement function includes a setting unit for controlling parameters and a construction unit for constructing a virtual exposure enhancement function according to the control parameters and the image grayscale transformation function.
  • In this embodiment, the acquisition module acquires a single low-illumination original image; the virtual exposure image sequence generation module constructs a virtual exposure enhancement function and processes the image through it to obtain a virtual exposure image sequence; and the fusion module fuses the sequence to obtain the enhanced image. This alleviates over-enhancement, effectively avoids halos and color distortion in the enhancement result, and improves the fidelity of visual information adaptively.
  • The parameters do not depend on external input to the system; the parameters to be transformed are computed automatically from the characteristics of the image content during execution, giving good adaptability, robustness, and a degree of universality, so the method can be applied to video surveillance, scene restoration, and other fields.
  • The present embodiment provides a computer device including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to execute the steps of the methods described in the foregoing embodiments.
  • The low-illumination image enhancement method, low-illumination image enhancement device, and computer equipment belong to one general inventive concept, and the contents of their respective embodiments are mutually applicable.
  • Any description of a process or method in the flowcharts or elsewhere herein may be understood to represent a module, segment, or portion of code comprising one or more executable instructions for implementing a specified logical function or step. The scope of the preferred embodiments of the present application includes alternative implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as will be understood by those skilled in the art to which the embodiments belong.
  • Each functional unit in each embodiment of the present application may be integrated into one processing module, each unit may exist physically alone, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or as software functional modules; if implemented as software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.

Abstract

A low-illumination image enhancement method, device and computer equipment. The low-illumination image enhancement method includes: acquiring a single low-illumination original image (S11); constructing a virtual exposure enhancement function, and processing the single low-illumination original image through the virtual exposure enhancement function to obtain a virtual exposure image sequence (S12); and fusing the virtual exposure image sequence to obtain an enhanced image (S13). The method requires no image calibration and no acquisition of multiple images with recorded per-image exposure times; its computation is simple, its application is flexible, and it can be applied to video surveillance, scene restoration, and other fields.

Description

Low-illumination image enhancement method, device and computer equipment

Technical Field

The present application belongs to the technical field of image processing, and in particular relates to a low-illumination image enhancement method, device, and computer equipment.

Background

Digital image processing systems are widely used in industrial production, video surveillance, intelligent transportation, remote sensing and other fields, and play an important role in daily life, in production, and in military applications. However, under poor lighting conditions such as indoors, at night, or in dark weather, the light reflected from object surfaces is weak, and captured images suffer color distortion and contain heavy noise, seriously degrading image quality. In particular, in fields requiring all-weather image analysis, such as video surveillance, intelligent transportation, and autonomous driving, low-illumination environments such as dim light or night not only impair human visual perception but also degrade the target-recognition accuracy of downstream machine systems, and can even paralyze those systems. The traditional infrared fill-light scheme can produce clear images in a completely dark environment, but yields only monochrome grayscale images: color information is lost and noise is high. The detail that each frame captures for different regions of the scene depends on the exposure level of the acquisition device: with low exposure, details in highlight regions are visible but dark-region details are severely lost; conversely, with high exposure, dark-region details are visible but highlight information is lost to overexposure. Enhancing the details of low-illumination images under poor lighting and restoring the scene's color information as faithfully as possible has therefore become an urgent need in many fields.
In the related art, multi-exposure fusion technology is applied for image enhancement. By fusing multiple images taken at different exposures, it can generate a new image rich in detail, giving users high-quality images. However, because the method requires multiple images of the same scene, it is limited by its application scenarios.

Summary

To overcome, at least to some extent, the problem in the related art that multi-exposure fusion image enhancement, although able to generate high-quality images by fusing multiple images at different exposures, requires multiple images of the same scene and is therefore limited by its application scenarios, the present application provides a low-illumination image enhancement method, device, and computer equipment.
In a first aspect, the present application provides a low-illumination image enhancement method, including:
acquiring a single low-illumination original image;
constructing a virtual exposure enhancement function, and processing the single low-illumination original image through the virtual exposure enhancement function to obtain a virtual exposure image sequence;
fusing the virtual exposure image sequence to obtain an enhanced image.
Further, fusing the virtual exposure image sequence to obtain the enhanced image includes:
generating a weight map for each image in the virtual exposure image sequence according to information measurement factors;
performing Laplacian pyramid decomposition on the weight maps to obtain multi-scale weight-map decomposition images;
performing Gaussian pyramid decomposition on each image in the virtual exposure image sequence to obtain multi-resolution virtual-exposure decomposition images;
fusing the multi-scale weight-map decomposition images with the multi-resolution virtual-exposure decomposition images of the corresponding scales to obtain the enhanced image.
Further, the information measurement factors include at least one of contrast, saturation, and saliency.
Further, the information measurement factors include contrast, saturation, and saliency, and generating the weight map of each image in the virtual exposure image sequence according to the information measurement factors includes:
calculating a contrast sub-weight map of each image according to the contrast of each image;
calculating a saturation sub-weight map of each image according to the saturation of each image;
calculating a saliency sub-weight map of each image according to the saliency of each image;
normalizing the contrast sub-weight map, the saturation sub-weight map, and the saliency sub-weight map to obtain the weight map corresponding to each image.
Further, calculating the contrast sub-weight map of each image according to the contrast of each image includes:
converting each image into a grayscale image;
normalizing the pixel values of the grayscale image to the interval [0, 1];
applying Laplacian filtering to the normalized image to obtain the contrast sub-weight map of each image.
Further, calculating the saturation sub-weight map of each image according to the saturation of each image includes:
extracting the R, G, and B color components of each image;
computing the mean of the R, G, and B components for each pixel in the image;
computing the color standard deviation of the single image from the RGB component means to determine the saturation coefficient;
obtaining the saturation sub-weight map of each image from the saturation coefficient.
Further, calculating the saliency sub-weight map of each image according to the saliency of each image includes:
computing the global mean of each image's fusion source over the three channels of the Lab color space;
applying Gaussian blurring to the fusion source in the Lab color space;
obtaining the saliency sub-weight map corresponding to each image's fusion source from the global mean and the Gaussian-blurred fusion source.
Further, constructing the virtual exposure enhancement function includes:
setting control parameters;
obtaining an image grayscale transformation function;
constructing the virtual exposure enhancement function from the control parameters and the image grayscale transformation function.
Further, setting the control parameters includes:
obtaining the control parameter with the largest exposure degree;
decomposing the control parameter with the largest exposure degree into a sequence to obtain the control parameter of each image.
Further, fusing the multi-scale weight-map decomposition images with the multi-resolution virtual-exposure decomposition images of the corresponding scales to obtain the enhanced image includes:
performing a weighted summation of the multi-scale weight-map decomposition images and the corresponding-scale multi-resolution virtual-exposure decomposition images to obtain a new fused Laplacian pyramid;
reconstructing the new Laplacian pyramid to obtain the enhanced image.
In a second aspect, the present application provides a low-illumination image enhancement device, including:
an acquisition module, configured to acquire a single low-illumination original image;
a virtual exposure image sequence generation module, configured to construct a virtual exposure enhancement function and to process the single low-illumination original image through the virtual exposure enhancement function to obtain a virtual exposure image sequence;
a fusion module, configured to fuse the virtual exposure image sequence to obtain an enhanced image.
In a third aspect, the present application provides a computer device including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to execute the steps of any method of the first aspect.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
The low-illumination image enhancement method, device, and computer equipment provided by the embodiments of the present invention acquire a single low-illumination original image, construct a virtual exposure enhancement function, process the single image through the function to obtain a virtual exposure image sequence, and fuse the sequence to obtain an enhanced image. No image calibration is needed, and there is no need to acquire multiple images or record each image's exposure time; the computation is simple, the application is flexible, and the method can be applied to video surveillance, scene restoration, and other fields.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the present application.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain its principles.
FIG. 1 is a flowchart of a low-illumination image enhancement method provided by an embodiment of the present application.
FIG. 2 is a flowchart of a low-illumination image enhancement method provided by another embodiment of the present application.
FIG. 3 is a flowchart of a low-illumination image enhancement method provided by another embodiment of the present application.
FIG. 4 is a functional structure diagram of a low-illumination image enhancement device provided by an embodiment of the present application.

Detailed Description

To make the objectives, technical solutions, and advantages of the present application clearer, the technical solutions are described in detail below. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other implementations obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort fall within the protection scope of the present application.
FIG. 1 is a flowchart of the low-illumination image enhancement method provided by an embodiment of the present application. As shown in FIG. 1, the method includes:
S11: acquiring a single low-illumination original image;
S12: constructing a virtual exposure enhancement function, and processing the single low-illumination original image through the virtual exposure enhancement function to obtain a virtual exposure image sequence;
S13: fusing the virtual exposure image sequence to obtain an enhanced image.
The detail that each frame captures for different regions of the scene depends on the exposure level of the acquisition device. With low exposure, details in highlight regions are visible but dark-region details are severely lost; conversely, with high exposure, dark-region details are visible but highlight information is lost to overexposure. The traditional solution is multi-exposure fusion, which fuses images of the same scene at different exposures into a new, detail-rich image; it is widely used to generate HDR (High-Dynamic Range) images, compensating for the fact that the dynamic range of ordinary digital cameras and displays is narrower than that of real scenes, and giving users high-quality images. However, because it requires multiple images of the same scene, its application scenarios are restricted. Moreover, since a single image contains far less information than a multi-exposure sequence, the camera response curve is difficult to estimate from a single image in the way it can be from multiple images.
In this embodiment, a single low-illumination original image is converted into a virtual exposure image sequence by constructing a virtual exposure enhancement function, and multi-image fusion is then performed on the sequence to obtain an enhanced image. There is no need to worry about calibrating the camera response curve or tracking each image's exposure time, which makes the method more practical.
In this embodiment, by acquiring a single low-illumination original image, constructing a virtual exposure enhancement function, processing the image through the function to obtain a virtual exposure image sequence, and fusing the sequence to obtain an enhanced image, no image calibration is needed and there is no need to acquire multiple images or record each image's exposure time; the computation is simple, the application is flexible, and the method can be applied to video surveillance, scene restoration, and other fields.
An embodiment of the present invention provides another low-illumination image enhancement method. As shown in the flowchart of FIG. 2, the method includes:
S21: computing a weight map for each image in the virtual exposure image sequence according to information measurement factors.
In some embodiments, the information measurement factors include, but are not limited to, contrast, saturation, and saliency.
Computing the weight map of each image in the virtual exposure image sequence according to the information measurement factors includes:
calculating a contrast sub-weight map of each image according to its contrast;
calculating a saturation sub-weight map of each image according to its saturation;
calculating a saliency sub-weight map of each image according to its saliency;
normalizing the contrast, saturation, and saliency sub-weight maps to obtain the weight map corresponding to each image.
Since the virtual exposure image sequence comes from the same image, the target content is highly correlated, but each image has a different focus. A properly exposed image can present rich texture detail and color information. To make the fused image match the characteristics of the human visual system, among these virtual exposure images small weights should be assigned to the smooth and unsaturated regions formed by over- or under-exposure, while well-exposed, detail-rich regions are assigned larger weights.
In some embodiments, calculating the contrast sub-weight map of each image according to its contrast includes:
converting each image into a grayscale image;
normalizing the grayscale pixel values to the interval [0, 1];
applying Laplacian filtering to the normalized image to obtain the contrast sub-weight map of each image.
Contrast reflects the amount of image detail: the greater the contrast, the better the detail rendition and the easier it is for the human eye to distinguish. The absolute value of the Laplacian-filtered response is taken as the contrast factor, and the weight coefficient is obtained from the image's edge variation. The specific expression and filter template are:

C = h * I    (10)

h = [0 1 0; 1 -4 1; 0 1 0]

where C denotes the contrast, I is the image whose contrast is sought, and h is the Laplacian filter.
Since the contrast computed from this weight coefficient mainly distinguishes how sharply a point jumps relative to its surroundings, the absolute value of the filtered pixel at each position is taken as the final contrast parameter.
一些实施例中,根据各幅图像的饱和度计算各幅图像的饱和度子权重图,包括:
将各幅图像的R、G和B三色分量提取出来;
计算图像中每个像素的RGB三色分量均值;
根据RGB三色分量均值计算单个图像的色彩标准差确定饱和度系数;
根据饱和度系数得到各幅图像的饱和度子权重图。
饱和度是反映图像生动程度的重要指标,代表色彩的鲜艳程度:饱和度越高,图像表现越形象生动。饱和度通过计算每个像素三个色度通道的标准差获得,其步骤为:首先将图像的R、G和B三色分量提取出来,然后计算图像中每个像素的RGB三色分量均值,最后计算色彩标准差确定饱和度系数S_{ij,k}。具体计算公式如下:
S_{ij,k} = sqrt( [ (I_R − μ)^2 + (I_G − μ)^2 + (I_B − μ)^2 ] / 3 )  (12)

μ = (I_R + I_G + I_B) / 3  (13)

式中:ij,k表示多曝光图像中第k个图像(i,j)处的像素点,I_R、I_G、I_B分别为R、G、B三个色彩通道的像素值,μ为三者的均值。
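上述三通道均值与标准差的计算可直接向量化实现,如下Python示例所示:

```python
import numpy as np

def saturation_weight(img_rgb):
    """饱和度子权重:每个像素RGB三通道相对其均值μ的标准差。"""
    I = np.asarray(img_rgb, dtype=np.float64)
    mu = I.mean(axis=2, keepdims=True)            # μ = (I_R + I_G + I_B) / 3
    return np.sqrt(((I - mu) ** 2).mean(axis=2))  # 三通道色彩标准差
```

灰色像素(三通道相等)的饱和度为0,纯色像素的饱和度最大,与文中描述一致。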
一些实施例中,根据各幅图像的显著性计算各幅图像的显著性子权重图,包括:
计算各幅图像的融合源在Lab色彩空间三通道的全局均值;
对融合源在Lab色彩空间进行高斯模糊处理;
根据全局均值和高斯模糊处理后融合源得到各幅图像融合源所对应的显著性子权重图。
显著性特征可以准确表述像素相对于其所在邻域的重要程度。因此,依据显著性构建融合权重图可以在不引入噪声的前提下有效凸显出各融合源中的重要部分。显著性子权重图的构建过程可以表示为:
A_{ij,k} = ||I_{μ,k} − I_{g,k}||  (14)

其中,A_{ij,k}是融合源I_k所对应的显著性子权重图,I_{μ,k}是融合源在Lab色彩空间三通道的全局均值,I_{g,k}是融合源在Lab色彩空间经高斯滤波模糊后得到的图像,滤波器的截止频率ω_g = π/2.75。显著图用于突出图像中的显著性区域,增强显著性区域与相邻区域的对比度,从而提高图像的全局对比度。
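式(14)的计算流程可用如下Python示例示意。输入假定已转换到Lab色彩空间(RGB→Lab转换可借助skimage.color.rgb2lab等工具,属假设);高斯模糊的sigma为示意取值,专利给出的是截止频率ω_g = π/2.75:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_weight(img_lab):
    """显著性子权重:三通道全局均值与高斯模糊图像之间的逐像素欧氏距离。"""
    I = np.asarray(img_lab, dtype=np.float64)
    I_mu = I.mean(axis=(0, 1), keepdims=True)        # 各通道全局均值 I_{μ,k}
    I_g = gaussian_filter(I, sigma=(1.0, 1.0, 0.0))  # 仅在空间维上做高斯模糊 I_{g,k}
    return np.linalg.norm(I_mu - I_g, axis=2)        # A_{ij,k} = ||I_{μ,k} - I_{g,k}||
```

与全局均值差异大的区域(显著区域)获得较大的权重。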
最终的权重图W由对比度子权重、饱和度子权重和显著性子权重三者相乘得到,三个权重共同约束最终权重;相乘之后再对W进行归一化,确保权重系数作用到每一个像素上。
基于3个信息测量因子生成权重图,用于对n幅虚拟曝光图像进行融合,权重图的具体计算公式如下:
W_{ij,k} = (C_{ij,k})^{ω_c} × (S_{ij,k})^{ω_s} × (A_{ij,k})^{ω_e}  (15)

W̄_{ij,k} = W_{ij,k} / Σ_{k'=1}^{N} W_{ij,k'}  (16)
其中,ij,k表示多曝光图像中第k个图像(i,j)处的像素点,C_{ij,k}、S_{ij,k}、A_{ij,k}分别为第k个图像(i,j)处像素点的对比度、饱和度和显著性;ω_c、ω_s、ω_e分别用于控制对比度测量因子C、饱和度测量因子S、显著性测量因子A对标量权重图W的影响程度。对于各个融合源来说,权重图同等重要,一些实施例中,取ω_c = ω_s = ω_e = 1,即三个测量因子对生成权重图的影响相同。
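三个测量因子相乘并在序列维度上归一化的过程可示意如下(eps仅用于数值稳定,属实现细节假设):

```python
import numpy as np

def fuse_weights(C, S, A, wc=1.0, ws=1.0, we=1.0, eps=1e-12):
    """将对比度C、饱和度S、显著性A逐像素相乘得到权重图W,
    并在图像序列维度上归一化。C、S、A形状均为(N, H, W),N为虚拟曝光图像数。"""
    W = (C + eps) ** wc * (S + eps) ** ws * (A + eps) ** we
    return W / W.sum(axis=0, keepdims=True)  # 每个像素处N幅图的权重和为1
```

归一化保证后续逐层加权求和时各像素的总权重恒为1。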
S22:对权重图进行拉普拉斯金字塔分解得到多尺度权重图分解图像;
由于每幅图中的每个像素的权重不一,权重变化过快会使融合后的图像产生“裂缝”,容易导致最后的增强结果产生离散的光晕效应。为了避免权重变化过快引起的光晕效应问题,使用拉普拉斯金字塔来分解图像,通过多分辨率的方式来实现图像融合。
S23:对虚拟曝光图像序列中各幅图像进行高斯金字塔分解得到多分辨率虚拟曝光图分解图像;
首先,将N幅虚拟曝光图像分别进行高斯金字塔分解,并对N幅权重图分别进行拉普拉斯金字塔分解,得到不同分辨率的图像和权重图。G、L分别表示高斯金字塔操作和拉普拉斯金字塔操作,记图像A的第l层拉普拉斯金字塔分解为L{A}_l,图像B的第l层高斯金字塔分解为G{B}_l。
S24:将多尺度权重图分解图像与对应尺度的多分辨率虚拟曝光图分解图像进行融合得到增强图像。
一些实施例中,S24的具体实现过程包括但不限于以下方法:
S241:将多尺度权重图分解图像与对应尺度的多分辨率虚拟曝光图分解图像进行加权求和,得到融合后的新拉普拉斯金字塔;
S242:将新拉普拉斯金字塔进行重构,得到增强图像。
然后,利用公式(18),在每一层上依据权重图金字塔的权重与对应位置的拉普拉斯金字塔系数进行加权求和,得到融合后的新拉普拉斯金字塔。
L{R}_l(ij) = Σ_{k=1}^{N} W̄_l(ij,k) × L{I_k}_l(ij)  (18)

其中,N表示输入图像个数,I表示输入的不同曝光图像,ij表示像素点(i,j),W̄表示归一化后的权重,l表示金字塔分解的层数(0≤l≤M),例如,最高层数为5。
最后,将拉普拉斯金字塔L{R}_l进行重构,得到融合后的图像R,重构过程如下所示:

R_l = L{R}_l + ↑_d(R_{l+1}),R_M = L{R}_M,R = R_0  (19)

其中,↑_d为上采样操作算子,d是采样因子,d = 2^{l−1}。
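多分辨率融合与重构的流程可用如下Python草图示意。需要说明:本文S22/S23将拉普拉斯分解用于权重图、高斯分解用于曝光图,而式(18)附近的文字又按"权重金字塔×拉普拉斯系数"描述;此处按经典多分辨率曝光融合的常见做法(图像取拉普拉斯金字塔、权重取高斯金字塔)给出示意实现,金字塔层数与上/下采样方式亦为示意性假设:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def _down(x):
    """下采样:先高斯平滑再隔行隔列抽取。"""
    return gaussian_filter(x, 1.0)[::2, ::2]

def _up(x, shape):
    """上采样回指定尺寸(双线性插值,示意)。"""
    return zoom(x, (shape[0] / x.shape[0], shape[1] / x.shape[1]), order=1)

def gauss_pyr(x, levels):
    pyr = [x]
    for _ in range(levels - 1):
        pyr.append(_down(pyr[-1]))
    return pyr

def lap_pyr(x, levels):
    g = gauss_pyr(x, levels)
    pyr = [g[l] - _up(g[l + 1], g[l].shape) for l in range(levels - 1)]
    pyr.append(g[-1])  # 顶层保留高斯系数
    return pyr

def pyramid_fuse(images, weights, levels=3):
    """灰度图像序列的多分辨率融合:逐层加权求和后自顶向下重构。"""
    gw = [gauss_pyr(w, levels) for w in weights]   # 权重:高斯金字塔
    li = [lap_pyr(im, levels) for im in images]    # 图像:拉普拉斯金字塔
    fused = []
    for l in range(levels):
        wsum = sum(gw[k][l] for k in range(len(images))) + 1e-12
        fused.append(sum(gw[k][l] * li[k][l] for k in range(len(images))) / wsum)
    r = fused[-1]
    for l in range(levels - 2, -1, -1):            # 自顶向下逐层重构
        r = fused[l] + _up(r, fused[l].shape)
    return r
```

当只输入一幅图像且权重恒为1时,融合结果应精确还原输入,可作为实现正确性的自检。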
传统的多曝光融合算法通常不对源图像进行任何图像变换,也不考虑像素之间的相关性,而是直接对源图像中的各对应像素点进行融合处理,得到一幅新的图像。但这样操作不能较好地表现源图像中的特征,纹理细节丢失也比较严重。
本实施例中,多分辨率的融合方式能够有效地避免光晕效应,层数的增加能够提升图像的整体视觉效果并凸显其中的场景细节,增强真实感,缓解过增强现象,有效地避免了增强结果中的光晕和颜色失真,自适应地提升了视觉信息保真度。并且,默认设置下基本参数不依赖于系统外界输入,所需变换的参数在算法实现过程中基于图像内容特征自动计算获取,具有较好的自适应性、鲁棒性和一定的普适性。
为了验证算法的有效性,采用的数据库包括自建数据库(总共300幅图像)、LDR数据库、IEC数据库、PMEA数据库,图像中场景包含室内、室外、城市、风景、夜晚、阴天等,共同点是图像中部分区域光照低、整体动态范围较宽等情况。为了更好地显示实验对比效果,每一幅图像将分别按照不同角度进行区域划分,通过同一幅图像不同区域展示图像增强前后的对比效果。经过本实施例提供的方法处理后,低光照区域得到了增强,而高光照区域得到了抑制,增强后图像颜色自然,细节清晰,隐含在低光照中的目标被显示出来,本实施例提供的方法能够改善光照对图像质量的影响。
为了考察图像增强前后对后续特征提取的影响,设计了边缘检测和图像匹配实验。实验结果表明,增强后的图像边缘信息更加丰富,许多在低照度下不能检测到的信息被挖掘出来,改善了图像特征提取的能力,图像细节更完整;增强前的图像匹配结果存在误差,而增强后的图像利用同一种方法即可准确地实现匹配。因此,本实施例提供的自适应虚拟多曝光融合方法无论是在视觉效果上还是在后续特征提取中都有明显的改进。
本实施例从主观视觉评价和客观定量分析两个角度与多种主流算法的处理结果进行比较。比较结果证明本实施例提供的方法无论是从色彩还是对比度方面都有了明显提升,视觉效果明显强于其他方法,图像细节恢复清晰,颜色保持良好。本实施例提供的方法得到的增强图像整体颜色、对比度等接近于正常光照的参考图像,符合人类视觉需求。
本发明实施例提供另一种低照度图像增强方法,如图3所示的流程图,该低照度图像增强方法包括:
S31:设置控制参数;
一些实施例中设置控制参数具体包括:
获取曝光程度最大的控制参数;
对曝光程度最大的控制参数进行序列分解得到每个图像的控制参数。
S32:获取图像灰度变换函数;
S33:根据控制参数和图像灰度变换函数构造虚拟曝光增强函数。
多曝光融合技术工作原理用公式表示为:
I_e = F{I_i}  (1)

其中,I_i为图像序列中第i个待融合的图像,I_e为融合后的图像,F{·}为融合函数。
对于仅有单幅低照度图像的情况,为了能够产生其他图像序列用于融合,最简单的处理方式是采用色调映射算子对原始图像直接进行灰度增强,每一次灰度增强相当于有针对性地对某一区域进行灰度范围扩展与亮度提升,使得I_i←I_0。假设原始图像表示为I_0,f_i(·)为图像灰度变换函数,则处理后的图像为:

I_i = f_i(I_0)  (2)
其中,f_i应当满足单调性和有界性:

f_i: [0,1]→[0,1],对任意x_1 ≤ x_2有f_i(x_1) ≤ f_i(x_2),且0 ≤ f_i(x) ≤ 1  (3)
由于该亮度变换并非真实曝光的结果,因此称为虚拟曝光增强函数。对于一个虚拟曝光增强函数,为了得到不同的亮度变换函数f_i,需要设置一个控制参数k,这样才能获得不同的变换函数,实现不同的曝光:

f_i = f(k_i)  (4)

结合式(4),则式(2)可以变换为:

I_i = f(I_0, k_i)  (5)
假设图像变换所需的参数值与图像场景的整体亮度相关并成非线性关系,则基于图像亮度控制参数所设置的虚拟曝光增强器输入输出关系用公式表示为:
f: y = x + k×x×(1−x)  (6)

其中,x表示输入量,y表示输出量,k表示控制系数。

用I_0表示原始低照度图像,每个亮度值的取值范围都在[0,1],则第i个虚拟图像I_i中的每个像素值都可以表示为:

I_i = I_0 + k_i×I_0×(1−I_0)  (7)

其中,k_i为第i个曝光度控制参数,分别为k_1, k_2, k_3, …, k_N,将其代入虚拟曝光增强函数,将得到N个虚拟曝光增强图像。
随着k的增大,变换曲线斜率越大,图像亮度增强程度越高。因此生成的图像亮度取决于控制参数,通过设置合适的控制参数k,就可以获得期望亮度的图像。
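式(6)的虚拟曝光增强器可以直接写成如下Python函数(为满足有界性,将结果裁剪到[0,1]是实现上的假设):

```python
import numpy as np

def virtual_exposure(x, k):
    """y = x + k·x·(1−x):x为[0,1]范围的亮度,k越大增强程度越高。"""
    x = np.asarray(x, dtype=np.float64)
    return np.clip(x + k * x * (1.0 - x), 0.0, 1.0)
```

端点0和1保持不变(单调性与有界性),中间亮度被抬升,且k越大均值提升越多。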
为了提高算法的自适应性,我们将虚拟曝光后图像的灰度平均值作为最大灰度期望值来计算k值。假设一幅虚拟曝光图像的灰度平均值为μ_k,期望的虚拟曝光图像灰度平均值为ξ,将图像灰度平均值μ_k最逼近于ξ时的k值作为使曝光程度最大的控制参数,即:

k̂ = argmin_k |μ_k − ξ|  (8)

此外,由于原始图像的信息内容及曝光程度各异,为了约束虚拟曝光增强的程度,根据经验对估计后的k̂取值进行约束,设置最大、最小阈值分别用k_H和k_L表示,最终估计得到的k̂值用公式表示为:

k̂ = k_L (k̂ < k_L);  k̂ (k_L ≤ k̂ ≤ k_H);  k_H (k̂ > k_H)
一些实施例中,将图像平均灰度期望值ξ设置为0.5(图像灰度值范围为[0,1]),且分别设置k_H = 12,k_L = 6。
经过公式(8)可以得到使增强后图像灰度平均值最逼近ξ的k值,作为曝光程度最大的控制参数。根据所需的虚拟曝光序列图像个数N,设置N个控制参数k_i调节虚拟曝光增强函数,便能得到N幅图像组成的序列。其计算公式如下:

k_i = (i×k)/N  (9)
例如,所设置的图像个数为5,即得到5幅不同曝光值的虚拟曝光图像序列。
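式(8)(9)对应的自适应参数估计与序列生成可按如下Python示例示意(网格搜索仅是求解式(8)的一种方式,专利并未限定具体求解方法;ξ与阈值按文中示例取值):

```python
import numpy as np

def estimate_k(I0, xi=0.5, k_lo=6.0, k_hi=12.0):
    """在候选k中取使虚拟曝光后灰度均值最逼近ξ的k,并用[k_L, k_H]约束。"""
    grid = np.linspace(0.0, 20.0, 201)  # 候选k的网格(示意)
    means = np.array([np.clip(I0 + k * I0 * (1 - I0), 0, 1).mean() for k in grid])
    k_hat = grid[int(np.argmin(np.abs(means - xi)))]
    return float(np.clip(k_hat, k_lo, k_hi))

def exposure_sequence(I0, N=5, xi=0.5):
    """k_i = i·k/N,生成N幅虚拟曝光图像,并把原图一并放入待融合序列。"""
    I0 = np.asarray(I0, dtype=np.float64)
    k = estimate_k(I0, xi)
    seq = [np.clip(I0 + (i * k / N) * I0 * (1 - I0), 0, 1) for i in range(1, N + 1)]
    return [I0] + seq
```

对一幅整体偏暗的输入,得到的序列亮度随k_i单调递增,序列首元素为原图本身。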
采用虚拟曝光增强函数对低照度图像进行虚拟曝光,相当于有针对性地对某一区域进行灰度范围扩展与亮度拉伸:低曝光图像能够较好地展示出真实场景中的高亮度区域,而高曝光图像能够较好地展示出真实场景中的低亮度区域。此外,通过观测不同环境下的原始低照度图像,发现曝光不足的区域虽然有部分信息被隐藏在黑暗中,但曝光良好的区域视觉效果很好,并不需要增强操作。因此,为了更好地利用原始图像的这部分信息,需要将低照度原始图像添加到待融合的图像序列中,使增强后的图像整体更自然。
本实施例中,通过对图像灰度的分析获取自适应控制参数,根据控制参数和图像灰度变换函数构造虚拟曝光增强函数;通过设置合适的控制参数,就可以获得多幅期望亮度的图像,便于后期通过图像融合生成信息丰富、清晰的图像。
本发明实施例提供一种低照度图像增强装置,如图4所示的功能结构图,该低照度图像增强装置包括:
获取模块41,用于获取单幅低照度原始图像;
虚拟曝光图像序列生成模块42,用于构造虚拟曝光增强函数,通过虚拟曝光增强函数对单幅低照度原始图像进行处理得到虚拟曝光图像序列;
融合模块43,用于对虚拟曝光图像序列进行融合处理得到增强图像。
一些实施例中,融合模块43包括:
权重图生成单元,用于根据信息测量因子生成虚拟曝光图像序列中各幅图像的权重图;
第一分解单元,用于对权重图进行拉普拉斯金字塔分解得到多尺度权重图分解图像;
第二分解单元,用于对虚拟曝光图像序列中各幅图像进行高斯金字塔分解得到多分辨率虚拟曝光图分解图像;
融合单元,用于将多尺度权重图分解图像与对应尺度的多分辨率虚拟曝光图分解图像进行加权求和,得到融合后的新拉普拉斯金字塔;
重构单元,用于将新拉普拉斯金字塔进行重构,得到增强图像。
一些实施例中,信息测量因子包括:对比度、饱和度和显著性中至少一种。
一些实施例中,虚拟曝光图像序列生成模块42包括构造虚拟曝光增强函数模块和处理模块,处理模块用于通过虚拟曝光增强函数对单幅低照度原始图像进行处理得到虚拟曝光图像序列。
构造虚拟曝光增强函数模块包括:设置单元,用于设置控制参数;构造单元,用于根据控制参数和图像灰度变换函数构造虚拟曝光增强函数。
本实施例中,通过获取模块获取单幅低照度原始图像,虚拟曝光图像序列生成模块构造虚拟曝光增强函数,通过虚拟曝光增强函数对单幅低照度原始图像进行处理得到虚拟曝光图像序列,融合模块对虚拟曝光图像序列进行融合处理得到增强图像,能够缓解过增强现象,有效地避免了增强结果中的光晕和颜色失真,自适应地提升了视觉信息保真度;并且,默认设置下基本参数不依赖于系统外界输入,所需变换的参数在算法实现过程中基于图像内容特征自动计算获取,具有较好的自适应性、鲁棒性和一定的普适性,可以应用到视频监控、场景恢复等领域。
本实施例提供一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,计算机程序被处理器执行时,使得处理器执行上述实施例中所述方法的步骤。
需要说明的是,上述低照度图像增强方法、低照度图像增强装置及计算机设备属于一个总的发明构思,低照度图像增强方法、低照度图像增强装置及计算机设备实施例中的内容可相互适用。
可以理解的是,上述各实施例中相同或相似部分可以相互参考,在一些实施例中未详细说明的内容可以参见其他实施例中相同或相似的内容。
需要说明的是,在本申请的描述中,术语“第一”、“第二”等仅用于描述目的,而不能理解为指示或暗示相对重要性。此外,在本申请的描述中,除非另有说明,“多个”的含义是指至少两个。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。
应当理解,本申请的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或它们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本申请各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施例或示例中以合适的方式结合。
尽管上面已经示出和描述了本申请的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施例进行变化、修改、替换和变型。
需要说明的是,本发明不局限于上述最佳实施方式,本领域技术人员在本发明的启示下都可得出其他各种形式的产品,但不论在其形状或结构上作任何变化,凡是具有与本申请相同或相近似的技术方案,均落在本发明的保护范围之内。

Claims (12)

  1. 一种低照度图像增强方法,其特征在于,包括:
    获取单幅低照度原始图像;
    构造虚拟曝光增强函数,通过所述虚拟曝光增强函数对所述单幅低照度原始图像进行处理得到虚拟曝光图像序列;
    对所述虚拟曝光图像序列进行融合处理得到增强图像。
  2. 根据权利要求1所述的低照度图像增强方法,其特征在于,所述对所述虚拟曝光图像序列进行融合处理得到增强图像,包括:
    根据信息测量因子生成所述虚拟曝光图像序列中各幅图像的权重图;
    对所述权重图进行拉普拉斯金字塔分解得到多尺度权重图分解图像;
    对所述虚拟曝光图像序列中各幅图像进行高斯金字塔分解得到多分辨率虚拟曝光图分解图像;
    将所述多尺度权重图分解图像与对应尺度的多分辨率虚拟曝光图分解图像进行融合得到增强图像。
  3. 根据权利要求2所述的低照度图像增强方法,其特征在于,所述信息测量因子包括:对比度、饱和度和显著性中至少一种。
  4. 根据权利要求3所述的低照度图像增强方法,其特征在于,所述信息测量因子包括:对比度、饱和度和显著性,所述根据信息测量因子生成所述虚拟曝光图像序列中各幅图像的权重图,包括:
    根据各幅图像的对比度计算各幅图像的对比度子权重图;
    根据各幅图像的饱和度计算各幅图像的饱和度子权重图;
    根据各幅图像的显著性计算各幅图像的显著性子权重图;
    对所述对比度子权重图、饱和度子权重图和显著性子权重图进行归一化处理得到各幅图像对应的权重图。
  5. 根据权利要求4所述的低照度图像增强方法,其特征在于,所述根据各幅图像的对比度计算各幅图像的对比度子权重图,包括:
    将所述各幅图像转为灰度图像;
    将所述灰度图像的像素值归一化至[0,1]区间;
    对归一化后图像进行拉普拉斯滤波得到各幅图像的对比度子权重图。
  6. 根据权利要求4所述的低照度图像增强方法,其特征在于,所述根据各幅图像的饱和度计算各幅图像的饱和度子权重图,包括:
    将所述各幅图像的R、G和B三色分量提取出来;
    计算图像中每个像素的RGB三色分量均值;
    根据RGB三色分量均值计算单个图像的色彩标准差确定饱和度系数;
    根据饱和度系数得到各幅图像的饱和度子权重图。
  7. 根据权利要求4所述的低照度图像增强方法,其特征在于,所述根据各幅图像的显著性计算各幅图像的显著性子权重图,包括:
    计算所述各幅图像的融合源在Lab色彩空间三通道的全局均值;
    对融合源在Lab色彩空间进行高斯模糊处理;
    根据全局均值和高斯模糊处理后融合源得到各幅图像融合源所对应的显著性子权重图。
  8. 根据权利要求1所述的低照度图像增强方法,其特征在于,所述构造虚拟曝光增强函数包括:
    设置控制参数;
    获取图像灰度变换函数;
    根据所述控制参数和图像灰度变换函数构造虚拟曝光增强函数。
  9. 根据权利要求8所述的低照度图像增强方法,其特征在于,所述设置控制参数包括:
    获取曝光程度最大的控制参数;
    对曝光程度最大的控制参数进行序列分解得到每个图像的控制参数。
  10. 根据权利要求2所述的低照度图像增强方法,其特征在于,所述将所述多尺度权重图分解图像与对应尺度的多分辨率虚拟曝光图分解图像进行融合得到增强图像,包括:
    将所述多尺度权重图分解图像与对应尺度的多分辨率虚拟曝光图分解图像进行加权求和,得到融合后的新拉普拉斯金字塔;
    将新拉普拉斯金字塔进行重构,得到所述增强图像。
  11. 一种低照度图像增强装置,其特征在于,包括:
    获取模块,用于获取单幅低照度原始图像;
    虚拟曝光图像序列生成模块,用于构造虚拟曝光增强函数,通过所述虚拟曝光增强函数对所述单幅低照度原始图像进行处理得到虚拟曝光图像序列;
    融合模块,用于对所述虚拟曝光图像序列进行融合处理得到增强图像。
  12. 一种计算机设备,其特征在于,包括存储器和处理器,所述存储器中存储有计算机程序,所述计算机程序被所述处理器执行时,使得所述处理器执行权利要求1至10中任一项权利要求所述方法的步骤。
PCT/CN2020/099841 2020-07-02 2020-07-02 低照度图像增强方法、装置及计算机设备 WO2022000397A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/099841 WO2022000397A1 (zh) 2020-07-02 2020-07-02 低照度图像增强方法、装置及计算机设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/099841 WO2022000397A1 (zh) 2020-07-02 2020-07-02 低照度图像增强方法、装置及计算机设备

Publications (1)

Publication Number Publication Date
WO2022000397A1 true WO2022000397A1 (zh) 2022-01-06

Family

ID=79317788

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/099841 WO2022000397A1 (zh) 2020-07-02 2020-07-02 低照度图像增强方法、装置及计算机设备

Country Status (1)

Country Link
WO (1) WO2022000397A1 (zh)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103034986A (zh) * 2012-11-29 2013-04-10 奇瑞汽车股份有限公司 一种基于曝光融合的夜视图像增强方法
CN109685727A (zh) * 2018-11-28 2019-04-26 深圳市华星光电半导体显示技术有限公司 图像处理方法
CN110852982A (zh) * 2019-11-19 2020-02-28 常州工学院 自适应的曝光量调整多尺度熵融合的水下图像增强方法

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHEN CHEN, HU SHI-QIANG,ZHANG JUN: "Application of exposure fusion to single image dehazing", JOURNAL OF COMPUTER APPLICATIONS, JISUANJI YINGYONG, CN, vol. 32, no. 1, 1 January 2012 (2012-01-01), CN , pages 241 - 244, XP055884440, ISSN: 1001-9081, DOI: 10.3724/SP.J.1087.2012.00241 *
CHEN MENG: "Research of Multiple Virtual Exposure Night Vision Image Contrast Enhancement Algorithm", INFORMATION SCIENCE AND TECHNOLOGY, CHINESE MASTER’S THESES FULL-TEXT DATABASE, 15 February 2016 (2016-02-15), XP055884443 *
JIN XIAOYUAN, XU WANGMING;WU SHIQIAN: "An Illumination-Adaptive Face Image Enhancement Method Using Virtual Exposure Fusion", JOURNAL OF WUHAN UNIVERSITY OF SCIENCE AND TECHNOLOGY, vol. 43, no. 1, 29 February 2020 (2020-02-29), pages 67 - 73, XP055884441, ISSN: 1674-3644, DOI: 10.3969/j.issn.1674-3644.2020.01.010 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581433A (zh) * 2022-03-22 2022-06-03 中国工程物理研究院流体物理研究所 一种获取金属球腔内表面形貌检测图像的方法及系统
CN114581433B (zh) * 2022-03-22 2023-09-19 中国工程物理研究院流体物理研究所 一种获取金属球腔内表面形貌检测图像的方法及系统
CN114638764A (zh) * 2022-03-25 2022-06-17 江苏元贞智能科技有限公司 基于人工智能的多曝光图像融合方法及系统
CN115456917A (zh) * 2022-11-11 2022-12-09 中国石油大学(华东) 有益于目标准确检测的图像增强方法、装置、设备及介质
CN116033278A (zh) * 2022-12-18 2023-04-28 重庆邮电大学 一种面向单色-彩色双相机的低照度图像预处理方法
CN116128916A (zh) * 2023-04-13 2023-05-16 中国科学院国家空间科学中心 一种基于空间能流对比度的红外弱小目标增强方法
CN117006947A (zh) * 2023-06-05 2023-11-07 西南交通大学 一种低光照图像增强的高层建筑结构位移测量方法及系统
CN117006947B (zh) * 2023-06-05 2024-03-29 西南交通大学 一种低光照图像增强的高层建筑结构位移测量方法及系统
CN116664452A (zh) * 2023-07-28 2023-08-29 吉林省星博医疗器械有限公司 一种多通道荧光图像多尺度增强方法和系统
CN116664452B (zh) * 2023-07-28 2023-09-29 吉林省星博医疗器械有限公司 一种多通道荧光图像多尺度增强方法和系统


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20943540

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20943540

Country of ref document: EP

Kind code of ref document: A1