WO2021068618A1 - Image fusion method and apparatus, computing device, and storage medium (图像融合方法、装置、计算处理设备和存储介质) - Google Patents

Image fusion method and apparatus, computing device, and storage medium - Download PDF

Info

Publication number
WO2021068618A1
WO2021068618A1 (PCT/CN2020/106295)
Authority
WO
WIPO (PCT)
Prior art keywords
area
exposure image
image
exposure
fusion weight
Prior art date
Application number
PCT/CN2020/106295
Other languages
English (en)
French (fr)
Inventor
王涛 (Wang Tao)
陈雪琴 (Chen Xueqin)
Original Assignee
北京迈格威科技有限公司 (Beijing Megvii Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京迈格威科技有限公司 (Beijing Megvii Technology Co., Ltd.)
Priority to US 17/762,532 (published as US20220383463A1)
Publication of WO2021068618A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70
    • G06T5/92
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/16 Image acquisition using multiple overlapping images; Image stitching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/10144 Varying exposure
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Definitions

  • This application relates to the field of image processing technology, and in particular to an image fusion method, device, computing processing equipment, and storage medium.
  • An image fusion method includes:
  • acquiring, based on the same target scene, multiple exposure images with different exposure levels;
  • acquiring a first exposure image fusion weight map corresponding to each exposure image, where the first exposure image fusion weight map includes the fusion weight corresponding to each pixel of the exposure image;
  • acquiring the region area of each over-exposed region in each exposure image;
  • for each exposure image, using the region area of each over-exposed region in the exposure image to smooth the first exposure image fusion weight map corresponding to the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image;
  • performing image fusion processing on the multiple exposure images according to each second exposure image fusion weight map, to obtain a fused image.
  • Acquiring the first exposure image fusion weight map corresponding to each exposure image includes: for each exposure image, obtaining the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and a preset pixel reference value.
  • Obtaining the first exposure image fusion weight map according to those differences includes: calculating the difference between the pixel value of each pixel and the preset pixel reference value, and obtaining the first fusion weight map from the ratio between each difference and the reference value, where the larger the difference corresponding to a pixel in the exposure image, the lower that pixel's fusion weight in the first exposure image fusion weight map.
  • Acquiring the region area of each over-exposed region in each exposure image includes: performing over-exposed region detection on each exposure image to obtain a corresponding over-exposed region mask map; segmenting the exposure image according to its mask map to obtain the corresponding over-exposed regions; and obtaining the region area of each over-exposed region in each exposure image.
  • Using the region area of each over-exposed region in the exposure image to smooth the first exposure image fusion weight map corresponding to the exposure image, to obtain the corresponding second fusion weight map, includes: smoothing the first fusion weight map according to a preset correspondence between over-exposed region area and smoothing coefficient, together with the region area of each over-exposed region in the exposure image; specifically, obtaining from the correspondence the smoothing coefficient for each region's area and filtering the first fusion weight map with those coefficients.
  • Before the step of performing image fusion processing on the multiple exposure images according to each second fusion weight map to obtain a fused image, the method may further include smoothing the second fusion weight map with a preset value, smaller than a preset threshold, as the filter radius, to obtain an updated second fusion weight map.
  • Performing image fusion processing on the multiple exposure images according to each second fusion weight map to obtain a fused image includes: performing a weighted summation on the multiple exposure images according to the fusion weight corresponding to each pixel in each second fusion weight map, to obtain the fused image.
  • An image fusion device, comprising:
  • an image acquisition module, used to acquire, based on the same target scene, multiple exposure images with different exposure levels;
  • a first weight acquisition module, configured to acquire a first exposure image fusion weight map corresponding to each exposure image, where the first fusion weight map includes the fusion weight corresponding to each pixel of the exposure image;
  • a region area acquisition module, used to acquire the region area of each over-exposed region in each exposure image;
  • a second weight acquisition module, configured to, for each exposure image, use the region area of each over-exposed region in the exposure image to smooth the first fusion weight map corresponding to the exposure image, obtaining the second fusion weight map corresponding to the exposure image;
  • an image fusion module, used to perform image fusion processing on the multiple exposure images according to each second fusion weight map, to obtain a fused image.
  • A computing device, including:
  • a memory in which computer-readable code is stored;
  • one or more processors; when the computer-readable code is executed by the one or more processors, the computing device performs the image fusion method described in any one of the foregoing.
  • A computer program including computer-readable code which, when run on a computing device, causes the computing device to perform the image fusion method described in any one of the above.
  • A computer-readable storage medium having the above computer program stored thereon, the computer program implementing the steps of any one of the above methods when executed by a processor.
  • The above image fusion method, device, computing device, and storage medium acquire, based on the same target scene, multiple exposure images with different exposure levels, and then acquire the first exposure image fusion weight map corresponding to each exposure image, which includes the fusion weight corresponding to each pixel of the exposure image. Furthermore, the region area of each over-exposed region in each exposure image is obtained and, for each exposure image, used to smooth the corresponding first fusion weight map into the second fusion weight map. Finally, image fusion processing is performed on the multiple exposure images according to each second fusion weight map, to obtain a fused image.
  • By smoothing the first fusion weight map with the region area of each over-exposed region, the characteristics of the different over-exposed regions of each exposure image can be taken into account during image fusion, the loss of detail in small over-exposed regions is avoided, and the obtained fused image is more realistic.
  • FIG. 1 is a schematic flowchart of an image fusion method in an embodiment
  • FIG. 2 is a schematic flowchart of an implementable manner of step S200 in an embodiment
  • FIG. 3 is a schematic flowchart of an implementable manner of step S300 in an embodiment
  • FIG. 4 is a schematic flowchart of an implementable manner of step S400 in an embodiment
  • FIG. 5 is a structural block diagram of an image fusion device in an embodiment;
  • FIG. 6 is an internal structure diagram of a computing device in an embodiment.
  • An image fusion method is provided, which includes the following steps:
  • Step S100: based on the same target scene, multiple exposure images with different exposure levels are acquired.
  • the target scene refers to a scene where images with different exposure levels are obtained.
  • Step S200 Obtain a first exposure image fusion weight map corresponding to each exposure image; wherein the first exposure image fusion weight map includes the fusion weight corresponding to each pixel of the exposure image.
  • Image fusion refers to processing the image data of the same target collected through multiple source channels with image processing and computer techniques, extracting the useful information in the respective channels to the greatest extent, and finally combining it into a high-quality image, so as to increase the utilization of image information, improve the accuracy and reliability of computer interpretation, and enhance the spatial and spectral resolution of the original image, which facilitates monitoring.
  • the first exposure image fusion weight map refers to a distribution map composed of fusion weight values corresponding to each pixel of the multiple exposure images when multiple exposure images are fused.
  • Step S300: the region area of each over-exposed region in each exposure image is obtained.
  • overexposure refers to the situation where the brightness of the acquired image is too high due to various reasons. Severe overexposure will cause the picture in the image to become whitish and a lot of image detail will be lost. Specific to this application, there may be one or more overexposed areas in each exposed image.
  • Specifically, a brightness value can be preset according to the actual quality requirements for the picture.
  • For example, the preset brightness value may be 240.
  • When the pixel values of a certain area in the exposure image are all greater than 240, that area is considered an over-exposed region. There may be multiple discontinuous over-exposed regions in the same exposure image.
  • Step S400: for each exposure image, use the region area of each over-exposed region in the exposure image to smooth the first exposure image fusion weight map corresponding to the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image.
  • Fusing exposure images of different exposure values directly according to the first fusion weight map obtained in step S200 may produce unnatural halos, which make the transitions in the fused image look very unnatural.
  • To avoid unnatural halos, the first fusion weight map could simply be smoothed over the whole image, with fusion then performed on the globally smoothed map; the resulting fused image can avoid unnatural halos to a certain extent, but it may also ignore the details of small over-exposed regions, or even ignore small regions entirely, resulting in the loss of detail in small over-exposed regions.
  • Step S500: perform image fusion processing on the multiple exposure images according to each second exposure image fusion weight map, to obtain a fused image.
  • Fusing the exposure images of different exposure values according to the second fusion weight maps effectively avoids the loss of detail in small over-exposed regions and retains the texture information of small over-exposed regions.
  • The above image fusion method acquires multiple exposure images with different exposure levels based on the same target scene, and then obtains a first exposure image fusion weight map corresponding to each exposure image, which includes the fusion weight corresponding to each pixel of the exposure image.
  • Further, the region area of each over-exposed region in each exposure image is obtained and, for each exposure image, used to smooth the corresponding first fusion weight map, giving the second exposure image fusion weight map corresponding to the exposure image.
  • By smoothing the first fusion weight map with the region area of each over-exposed region, the characteristics of the different over-exposed regions of each exposure image can be taken into account during image fusion, the loss of detail in small over-exposed regions is avoided, and the obtained fused image is more realistic.
  • Step S200 of obtaining the first exposure image fusion weight map corresponding to each exposure image includes:
  • obtaining the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value.
  • Each pixel of each exposure image corresponds to a pixel value (gray value); according to the difference between each pixel value and the preset pixel reference value, an exposure image fusion weight map can be obtained, and this map is determined as the first exposure image fusion weight map.
  • Step S210: calculate the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value.
  • Each exposure image corresponds to multiple pixels; calculating the difference between the pixel value of each pixel and the preset pixel reference value gives a set of differences. For example, for a 3*3 exposure image with pixel values (138, 148, 158; 148, 158, 168; 158, 168, 178) and a preset reference value of 128, the differences are (10, 20, 30; 20, 30, 40; 30, 40, 50).
  • A 3*3 exposure image is used here only for illustration. The images actually processed are generally very large, but the corresponding computation is the same and is not described in detail here.
  • Step S220: obtain the first exposure image fusion weight map according to the ratio between each difference and the preset pixel reference value, where the larger the difference corresponding to a pixel in the exposure image, the lower that pixel's fusion weight in the first exposure image fusion weight map.
  • After the differences are obtained, the first fusion weight map can be derived directly from the ratio between each pixel difference and the preset pixel reference value.
  • The purpose of taking the ratio of each difference to the preset pixel reference value is to normalize the obtained weights.
  • The larger the difference corresponding to a pixel in the exposure image, the larger the gap between the pixel's value and the preset reference value, and the greater its degree of distortion. Therefore, the pixel is given a lower fusion weight during fusion; in this way, the transitions between regions can be kept natural during image fusion.
  • Continuing the example, the ratios are (10/128, 20/128, 30/128; 20/128, 30/128, 40/128; 30/128, 40/128, 50/128); inverting with the value 1 gives the first exposure image fusion weight map (1-10/128, 1-20/128, 1-30/128; 1-20/128, 1-30/128, 1-40/128; 1-30/128, 1-40/128, 1-50/128).
  • Optionally, other weight calculation methods may be used to obtain the first fusion weight map according to the nature of the images actually processed and user requirements; no specific limitation is made here.
  • In the above embodiment, the first exposure image fusion weight map is obtained by calculating the difference between the pixel value of each pixel and the preset pixel reference value and taking the ratio between the difference and the reference value; the larger the difference corresponding to a pixel, the lower that pixel's fusion weight in the first fusion weight map.
  • Because the first fusion weight map is determined from the per-pixel ratios for each exposure image, it contains the characteristics of each exposure image and can maximize the useful information in each one.
  • Step S300 of obtaining the region area of each over-exposed region in each exposure image includes:
  • Step S310: perform over-exposed region detection on each exposure image to obtain an over-exposed region mask map corresponding to each exposure image.
  • Taking binary 0 and 1 as an example: if a detected pixel is an over-exposed point it is represented by 1, and if it is a non-over-exposed point it is represented by 0; the final detection result is used as the over-exposed region mask map.
  • For an existing 3*3 exposure image, a detection point whose brightness value is greater than a given preset threshold is considered an over-exposed point, and a point at or below the threshold is considered a non-over-exposed point.
  • When the exposure image is (over-exposed, over-exposed, over-exposed; over-exposed, over-exposed, non-over-exposed; over-exposed, non-over-exposed, non-over-exposed), the mask map of the over-exposed region can be expressed as (1, 1, 1; 1, 1, 0; 1, 0, 0).
  • Again, a 3*3 exposure image is used only for illustration; the images actually processed are generally very large, but the corresponding computation is the same and is not described in detail here.
  • Step S320: according to each over-exposed region mask map, the exposure image corresponding to the mask map is segmented to obtain the corresponding over-exposed regions.
  • From the mask map obtained in step S310, the upper-left corner of the mask is all "1", indicating that the upper-left of the corresponding exposure image is an over-exposed region; similarly, the lower-right corner of the mask is all "0", indicating that the lower-right of the corresponding exposure image is a non-over-exposed region. Segmenting the image areas whose mask value is "1" yields the corresponding over-exposed regions.
  • A pixel-neighborhood traversal method (the specific region segmentation algorithm is not limited here) can be used to segment the above mask map into the corresponding over-exposed regions. For example, segmenting the above 3*3 exposure image with pixel-neighborhood traversal yields one over-exposed region; of course, there may be multiple over-exposed regions in an exposure image.
  • Step S330 Obtain the area of each overexposed area in each exposed image.
  • After the over-exposed regions are obtained in step S320, the area of each over-exposed region is calculated, giving the region area of each over-exposed region in each exposure image.
  • In the above embodiment, over-exposed region detection is performed on each exposure image to obtain the corresponding mask map; then, according to each mask map, the corresponding exposure image is segmented to obtain the over-exposed regions; finally, the region area of each over-exposed region in each exposure image is obtained.
  • Calculating the area of each over-exposed region in each exposure image allows the subsequent fusion processing to be based on the different region areas, so that the obtained fused image can simultaneously take into account the characteristics of the different over-exposed regions of each image, avoid the loss of detail in small over-exposed regions, and retain the texture information of small over-exposed regions.
  • Step S400 of, for each exposure image, using the region area of each over-exposed region in the exposure image to smooth the first exposure image fusion weight map corresponding to the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image, includes:
  • smoothing the first fusion weight map according to a preset correspondence between over-exposed region area and smoothing coefficient, together with the region area of each over-exposed region in the exposure image.
  • The smoothing coefficient is a coefficient of the smoothing method; it determines the smoothing level and the speed of response to the difference between the predicted value and the actual result.
  • In this application, a smaller smoothing coefficient may be used when the region area is small, and a larger smoothing coefficient when the region area is large, so as to preserve the details of the image where the region area is small.
  • Optionally, the square root of the area of the current over-exposed region can also be used as the smoothing coefficient.
  • The smoothing filter can be implemented as a Gaussian blur, in which case the smoothing coefficient obtained above can be used as the radius of the Gaussian blur.
  • As shown in FIG. 4, a schematic flowchart of an implementable manner of step S400, smoothing the first fusion weight map corresponding to the exposure image according to the preset correspondence between over-exposed region area and smoothing coefficient and the region area of each over-exposed region in the exposure image, to obtain the second fusion weight map corresponding to the exposure image, includes:
  • Step S410: according to the correspondence, the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image is obtained.
  • The area value matching the size of each over-exposed region is looked up in the preset correspondence, and the smoothing coefficient corresponding to that region's area is obtained from the found area value and the correspondence.
  • In the same way, the smoothing coefficients corresponding to the areas of all over-exposed regions of each exposure image can be obtained.
  • Step S420: perform smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image according to the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image.
  • Smoothing filtering is performed on the first fusion weight map corresponding to the exposure image according to the smoothing coefficients obtained in step S410, giving the second fusion weight map.
  • For example, when the weight distribution in the first fusion weight map is (0.1, 0.05, 0.08; 0.1, 0.06, 0.9; 0.09, 0.1, 0.12), the weight 0.9 is clearly an outlier.
  • Different filtering methods will give different filtering results, but the results will generally fall within a certain range.
  • The distribution of the second fusion weight map after filtering may be (0.1, 0.05, 0.08; 0.1, 0.06, 0.1; 0.09, 0.1, 0.12).
  • When the weights contain less conspicuous outlying values, the above method can likewise be used to smooth the first fusion weight map into the second fusion weight map.
  • In the above embodiment, the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image is obtained according to the correspondence, and the first fusion weight map corresponding to the exposure image is smoothed with those coefficients to obtain the second fusion weight map corresponding to the exposure image.
  • This acquisition process can simultaneously take into account the characteristics of the different over-exposed regions of each image, avoid the loss of detail in small over-exposed regions, retain the texture information of small over-exposed regions, and obtain a more realistic fused image.
  • Before step S500 of performing image fusion processing on the multiple exposure images according to each second fusion weight map to obtain a fused image, the method further includes:
  • smoothing the second fusion weight map with a preset value, smaller than a preset threshold, as the filter radius, to obtain an updated second fusion weight map. Filtering the first fusion weight map according to the over-exposed region areas may cause a certain boundary effect; smoothing the entire resulting second fusion weight map with such a radius avoids the boundary effects that may arise in the above processing and makes the fused image obtained from the second fusion weight map more realistic.
  • The preset value smaller than the preset threshold can be set to 3*3 or 5*5 or another small value; using this value as the filter radius to smooth the second fusion weight map can eliminate possible boundary effects.
  • When the preset value is large, different regions may become excessively blurred; the preset value here therefore needs to be set to a value smaller than the preset threshold to avoid excessive blurring.
  • Step S500 of performing image fusion processing on the multiple exposure images according to each second fusion weight map to obtain a fused image includes:
  • performing a weighted summation on the multiple exposure images to obtain the fused image.
  • With the second fusion weight maps obtained above, which contain the overall characteristics of each image and the characteristic information of its different over-exposed regions, the exposure images are weighted and summed to obtain the fused image.
  • This operation can fully consider the characteristics of each image while taking into account the characteristics of its different over-exposed regions, avoid the loss of detail in small over-exposed regions, retain the texture information of small over-exposed regions, and obtain a more realistic fused image.
  • an image fusion device including: an image acquisition module 501, a first weight acquisition module 502, a region area acquisition module 503, a second weight acquisition module 504, and an image fusion module 505, of which:
  • the image acquisition module 501 is configured to acquire multiple exposure images with different exposure levels based on the same target scene;
  • the first weight obtaining module 502 is configured to obtain a first exposure image fusion weight map corresponding to each exposure image; wherein the first exposure image fusion weight map includes the fusion weight corresponding to each pixel of the exposure image;
  • the region area acquisition module 503 is used to obtain the region area of each over-exposed region in each exposure image;
  • the second weight acquisition module 504 is configured to, for each exposure image, use the region area of each over-exposed region in the exposure image to smooth the first fusion weight map corresponding to the exposure image, to obtain the second fusion weight map corresponding to the exposure image;
  • the image fusion module 505 is configured to perform image fusion processing on multiple exposure images according to the fusion weight map of each second exposure image to obtain a fusion image.
  • the first weight acquisition module 502 is further configured to, for each exposure image, obtain the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value.
  • the first weight acquisition module 502 is also used to calculate the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value, and to obtain the first fusion weight map according to the ratio between the difference and the reference value; the larger the difference corresponding to a pixel in the exposure image, the lower that pixel's fusion weight in the first fusion weight map.
  • the region area acquisition module 503 is also used to perform over-exposed region detection on each exposure image to obtain the corresponding over-exposed region mask map; to segment, according to each mask map, the exposure image corresponding to that mask map to obtain the corresponding over-exposed regions; and to obtain the region area of each over-exposed region in each exposure image.
  • the second weight acquisition module 504 is further configured to smooth the first fusion weight map according to the preset correspondence between over-exposed region area and smoothing coefficient and the region area of each over-exposed region in the exposure image, to obtain the second fusion weight map corresponding to the exposure image.
  • the second weight acquisition module 504 is further configured to obtain, according to the correspondence, the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image, and to smooth the first fusion weight map corresponding to the exposure image with those coefficients, to obtain the second fusion weight map corresponding to the exposure image.
  • the second weight acquisition module 504 is further configured to use the preset value as the filter radius to smoothly filter the second exposure image fusion weight map to obtain an updated second exposure image fusion weight map; wherein, The preset value is less than the preset threshold.
  • the image fusion module 505 is further configured to perform a weighted summation on multiple exposure images according to the fusion weight corresponding to each pixel in the fusion weight map of each second exposure image to obtain a fusion image.
  • Each module in the above-mentioned image fusion device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • the various component embodiments of the present application may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the computing processing device according to the embodiments of the present application.
  • This application can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present application may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.
  • a computing processing device may be a terminal, and its internal structure diagram may be as shown in FIG. 6.
  • the computing processing equipment includes a processor, a memory, a network interface, a display screen and an input device connected through a system bus.
  • the processor of the computing processing device is used to provide computing and control capabilities.
  • the memory of the computing processing device includes a non-volatile storage medium and an internal memory.
  • The non-volatile storage medium stores an operating system and computer program code; this program code can be read from, or written into, one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the computing processing device is used to communicate with an external terminal through a network connection.
  • the computer program is executed by the processor to realize an image fusion method.
  • The display screen of the computing device may be a liquid crystal display or an electronic ink display; the input apparatus of the computing device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the computing device, or an external keyboard, touchpad, or mouse.
  • FIG. 6 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computing device to which the solution of the present application is applied.
  • A specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • A computing device is provided, including a memory and a processor; a computer program including computer-readable code is stored in the memory, and the processor implements the following steps when executing the computer program:
  • acquiring, based on the same target scene, multiple exposure images with different exposure levels; acquiring a first exposure image fusion weight map corresponding to each exposure image, which includes the fusion weight corresponding to each pixel; acquiring the region area of each over-exposed region in each exposure image;
  • for each exposure image, using the region area of each over-exposed region in the exposure image to smooth the first fusion weight map corresponding to the exposure image, to obtain the second fusion weight map corresponding to the exposure image;
  • performing image fusion processing on the multiple exposure images according to each second fusion weight map, to obtain a fused image.
  • The processor further implements the following steps when executing the computer program: for each exposure image, obtaining the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value.
  • The processor further implements the following steps when executing the computer program: calculating the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value; obtaining the first fusion weight map according to the ratio between the difference and the reference value, where the larger the difference corresponding to a pixel in the exposure image, the lower that pixel's fusion weight in the first fusion weight map.
  • The processor further implements the following steps when executing the computer program: detecting the over-exposed regions of each exposure image to obtain the corresponding over-exposed region mask maps; segmenting, according to each mask map, the exposure image corresponding to that mask map to obtain the corresponding over-exposed regions; obtaining the region area of each over-exposed region in each exposure image.
  • The processor further implements the following steps when executing the computer program: smoothing the first fusion weight map corresponding to the exposure image according to the preset correspondence between over-exposed region area and smoothing coefficient and the region area of each over-exposed region in the exposure image, to obtain the second fusion weight map corresponding to the exposure image.
  • The processor further implements the following steps when executing the computer program: obtaining, according to the correspondence, the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image; smoothing the first fusion weight map corresponding to the exposure image with those coefficients, to obtain the second fusion weight map corresponding to the exposure image.
  • The processor further implements the following steps when executing the computer program: using a preset value as the filter radius to smooth the second fusion weight map, to obtain an updated second fusion weight map, where the preset value is smaller than the preset threshold.
  • The processor further implements the following steps when executing the computer program: performing a weighted summation on the multiple exposure images according to the fusion weight corresponding to each pixel in each second fusion weight map, to obtain the fused image.
  • A computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the following steps are implemented:
  • acquiring, based on the same target scene, multiple exposure images with different exposure levels; acquiring a first exposure image fusion weight map corresponding to each exposure image, which includes the fusion weight corresponding to each pixel; acquiring the region area of each over-exposed region in each exposure image;
  • for each exposure image, using the region area of each over-exposed region in the exposure image to smooth the first fusion weight map corresponding to the exposure image, to obtain the second fusion weight map corresponding to the exposure image;
  • performing image fusion processing on the multiple exposure images according to each second fusion weight map, to obtain a fused image.
  • When the computer program is executed by the processor, the following steps are also implemented: for each exposure image, obtaining the first fusion weight map according to the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value.
  • When the computer program is executed by the processor, the following steps are also implemented: calculating the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value; obtaining the first fusion weight map according to the ratio between the difference and the reference value, where the larger the difference corresponding to a pixel, the lower that pixel's fusion weight in the first fusion weight map.
  • When the computer program is executed by the processor, the following steps are also implemented: detecting the over-exposed regions of each exposure image to obtain the corresponding mask maps; segmenting each exposure image according to its mask map to obtain the corresponding over-exposed regions; obtaining the region area of each over-exposed region in each exposure image.
  • When the computer program is executed by the processor, the following steps are also implemented: smoothing the first fusion weight map corresponding to the exposure image according to the preset correspondence between over-exposed region area and smoothing coefficient and the region area of each over-exposed region in the exposure image, to obtain the second fusion weight map corresponding to the exposure image.
  • When the computer program is executed by the processor, the following steps are also implemented: obtaining, according to the correspondence, the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image; smoothing the first fusion weight map with those coefficients, to obtain the second fusion weight map corresponding to the exposure image.
  • When the computer program is executed by the processor, the following steps are also implemented: using a preset value as the filter radius to smooth the second fusion weight map, to obtain an updated second fusion weight map, where the preset value is smaller than the preset threshold.
  • When the computer program is executed by the processor, the following steps are also implemented: performing a weighted summation on the multiple exposure images according to the fusion weight corresponding to each pixel in each second fusion weight map, to obtain the fused image.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.

Abstract

This application relates to an image fusion method and apparatus, a computer device, and a storage medium. The method includes: acquiring, based on the same target scene, multiple exposure images with different exposure levels; acquiring a first exposure image fusion weight map corresponding to each exposure image, wherein the first exposure image fusion weight map includes the fusion weight corresponding to each pixel of the exposure image; acquiring the region area of each over-exposed region in each exposure image; for each exposure image, using the region area of each over-exposed region in the exposure image to perform smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image, to obtain a second exposure image fusion weight map corresponding to the exposure image; and performing image fusion processing on the multiple exposure images according to each second exposure image fusion weight map, to obtain a fused image. The method can thereby take the characteristics of different over-exposed regions into account, avoid the loss of detail in small over-exposed regions, and make the obtained fused image more realistic.

Description

Image fusion method and apparatus, computing device, and storage medium
This application claims priority to the Chinese patent application No. 201910967375.8, entitled "Image fusion method and apparatus, computer device, and storage medium", filed with the Chinese Patent Office on October 12, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing technology, and in particular to an image fusion method and apparatus, a computing device, and a storage medium.
Background
With the continuous development of image processing technology, fusing images of different exposure levels to obtain a high-quality image has become a research focus in the field of image processing. In conventional techniques, a fused image is usually obtained by directly merging multiple images of different exposure values according to certain rules.
However, images of different exposures contain completely different edge information and brightness variations, and fusing the images directly tends to lose detail in small over-exposed regions.
Summary
In view of this, it is necessary to provide an image fusion method and apparatus, a computing device, and a storage medium that address the above technical problem.
An image fusion method, the method comprising:
acquiring, based on the same target scene, multiple exposure images with different exposure levels;
acquiring a first exposure image fusion weight map corresponding to each of the exposure images, wherein the first exposure image fusion weight map includes the fusion weight corresponding to each pixel of the exposure image;
acquiring the region area of each over-exposed region in each of the exposure images;
for each of the exposure images, performing smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image by using the region area of each over-exposed region in the exposure image, to obtain a second exposure image fusion weight map corresponding to the exposure image;
performing image fusion processing on the multiple exposure images according to each of the second exposure image fusion weight maps, to obtain a fused image.
In one of the embodiments, acquiring the first exposure image fusion weight map corresponding to each of the exposure images includes:
for each of the exposure images, obtaining the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and a preset pixel reference value.
In one of the embodiments, obtaining the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value includes:
calculating the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value;
obtaining the first exposure image fusion weight map according to the ratio between the difference and the preset pixel reference value, wherein the larger the difference corresponding to a pixel in the exposure image, the lower the fusion weight of that pixel in the first exposure image fusion weight map.
In one of the embodiments, acquiring the region area of each over-exposed region in each of the exposure images includes:
performing over-exposed region detection on each of the exposure images to obtain an over-exposed region mask map corresponding to each exposure image;
performing region segmentation, according to each over-exposed region mask map, on the exposure image corresponding to that mask map, to obtain the corresponding over-exposed regions;
acquiring the region area of each over-exposed region in each of the exposure images.
In one of the embodiments, performing smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image by using the region area of each over-exposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image, includes:
performing smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image according to a preset correspondence between over-exposed region area and smoothing coefficient, together with the region area of each over-exposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image.
In one of the embodiments, performing smoothing filtering according to the preset correspondence between over-exposed region area and smoothing coefficient and the region area of each over-exposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image, includes:
obtaining, according to the correspondence, the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image;
performing smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image according to the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image.
In one of the embodiments, before performing image fusion processing on the multiple exposure images according to each of the second exposure image fusion weight maps to obtain a fused image, the method further includes:
performing smoothing filtering on the second exposure image fusion weight map with a preset value as the filter radius, to obtain an updated second exposure image fusion weight map, wherein the preset value is smaller than a preset threshold.
In one of the embodiments, performing image fusion processing on the multiple exposure images according to each of the second exposure image fusion weight maps to obtain a fused image includes:
performing a weighted summation on the multiple exposure images according to the fusion weight corresponding to each pixel in each of the second exposure image fusion weight maps, to obtain the fused image.
An image fusion apparatus, the apparatus comprising:
an image acquisition module, configured to acquire, based on the same target scene, multiple exposure images with different exposure levels;
a first weight acquisition module, configured to acquire a first exposure image fusion weight map corresponding to each of the exposure images, wherein the first exposure image fusion weight map includes the fusion weight corresponding to each pixel of the exposure image;
a region area acquisition module, configured to acquire the region area of each over-exposed region in each of the exposure images;
a second weight acquisition module, configured to, for each of the exposure images, perform smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image by using the region area of each over-exposed region in the exposure image, to obtain a second exposure image fusion weight map corresponding to the exposure image;
an image fusion module, configured to perform image fusion processing on the multiple exposure images according to each of the second exposure image fusion weight maps, to obtain a fused image.
A computing device, including:
a memory in which computer-readable code is stored;
one or more processors, wherein, when the computer-readable code is executed by the one or more processors, the computing device performs the image fusion method described in any one of the above.
A computer program, including computer-readable code which, when run on a computing device, causes the computing device to perform the image fusion method described in any one of the above.
A computer-readable storage medium on which the above computer program is stored, the computer program implementing the steps of any one of the above methods when executed by a processor.
In the above image fusion method and apparatus, computing device, and storage medium, multiple exposure images with different exposure levels are acquired based on the same target scene; then a first exposure image fusion weight map, which includes the fusion weight corresponding to each pixel of the exposure image, is acquired for each exposure image; further, the region area of each over-exposed region in each exposure image is obtained, and for each exposure image, the first exposure image fusion weight map corresponding to the exposure image is smoothed using the region area of each over-exposed region in the exposure image, giving the second exposure image fusion weight map corresponding to the exposure image; finally, image fusion processing is performed on the multiple exposure images according to each second exposure image fusion weight map, to obtain a fused image. By smoothing the first exposure image fusion weight map with the region area of each over-exposed region, the characteristics of the different over-exposed regions of each exposure image can be taken into account during image fusion, the loss of detail in small over-exposed regions is avoided, and the obtained fused image is more realistic.
The above description is only an overview of the technical solution of this application. In order to understand the technical means of this application more clearly so that it can be implemented according to the specification, and to make the above and other objects, features, and advantages of this application more apparent, specific embodiments of this application are set forth below.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an image fusion method in an embodiment;
FIG. 2 is a schematic flowchart of an implementable manner of step S200 in an embodiment;
FIG. 3 is a schematic flowchart of an implementable manner of step S300 in an embodiment;
FIG. 4 is a schematic flowchart of an implementable manner of step S400 in an embodiment;
FIG. 5 is a structural block diagram of an image fusion apparatus in an embodiment;
FIG. 6 is an internal structure diagram of a computing device in an embodiment.
Detailed Description of the Embodiments
To make the objects, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain this application and are not intended to limit it. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various relationships, but these relationships are not limited by the terms; the terms are only used to distinguish one relationship from another.
In one embodiment, as shown in FIG. 1, an image fusion method is provided, including the following steps:
Step S100: based on the same target scene, multiple exposure images with different exposure levels are acquired.
Here, the target scene refers to the scene from which the images with different exposure levels are obtained.
Specifically, for the same target scene, multiple exposure images with different exposure levels are captured under different exposure values.
Step S200: a first exposure image fusion weight map corresponding to each exposure image is acquired, wherein the first exposure image fusion weight map includes the fusion weight corresponding to each pixel of the exposure image.
Here, image fusion refers to processing the image data of the same target collected through multiple source channels with image processing and computer techniques, extracting the useful information in the respective channels to the greatest extent, and finally combining it into a high-quality image, so as to increase the utilization of image information, improve the accuracy and reliability of computer interpretation, and enhance the spatial and spectral resolution of the original images, which facilitates monitoring.
The first exposure image fusion weight map refers to the distribution map formed by the fusion weight values corresponding to the individual pixels of the multiple exposure images when those images are fused.
Step S300: the region area of each over-exposed region in each exposure image is acquired.
Here, over-exposure refers to the situation where the brightness of the acquired image is too high for various reasons. Severe over-exposure causes the picture in the image to become whitish, and a large amount of image detail is lost. In this application, there may be one or more over-exposed regions in each exposure image.
Specifically, a brightness value can be preset according to the actual quality requirements for the picture. For example, if the preset brightness value is 240, a region of the exposure image whose pixel values are all greater than 240 is considered an over-exposed region. There may be multiple discontinuous over-exposed regions in the same exposure image.
Step S400: for each exposure image, the first exposure image fusion weight map corresponding to the exposure image is smoothed using the region area of each over-exposed region in the exposure image, giving the second exposure image fusion weight map corresponding to the exposure image.
Specifically, fusing exposure images of different exposure values directly according to the first exposure image fusion weight maps obtained in step S200 may produce unnatural halos, making the transitions in the fused image look very unnatural. To avoid such halos, the first fusion weight map could simply be smoothed over the whole image, and fusion could then be performed with the globally smoothed map; the resulting fused image avoids unnatural halos to a certain extent, but at the same time it may ignore the detail of small over-exposed regions, or even ignore small regions entirely, so that detail in small over-exposed regions is lost. Therefore, since each exposure image may contain one or more over-exposed regions of different sizes, a subdividing operation based on the areas of the different over-exposed regions is needed before fusion. First, the area of at least one over-exposed region in each exposure image is obtained; then, smoothing the first fusion weight map corresponding to the exposure image with the region area of each over-exposed region yields the second exposure image fusion weight map.
Step S500: image fusion processing is performed on the multiple exposure images according to each second exposure image fusion weight map, to obtain a fused image.
Specifically, in this application, fusing the exposure images of different exposure values according to the second exposure image fusion weight maps obtained in step S400, which were processed using the region area of each over-exposed region, effectively avoids the loss of detail in small over-exposed regions and retains their texture information.
In the above image fusion method, multiple exposure images with different exposure levels are acquired based on the same target scene; a first exposure image fusion weight map including the fusion weight of each pixel is then acquired for each exposure image; further, the region area of each over-exposed region in each exposure image is obtained and, for each exposure image, used to smooth the corresponding first fusion weight map into the second exposure image fusion weight map; finally, image fusion processing is performed on the multiple exposure images according to each second fusion weight map, giving the fused image. By smoothing the first fusion weight map with the region area of each over-exposed region, the characteristics of the different over-exposed regions of each exposure image can be taken into account during fusion, the loss of detail in small over-exposed regions is avoided, and the obtained fused image is more realistic.
In one embodiment, as shown in FIG. 2, which is a schematic flowchart of an implementable manner of step S200, step S200 of acquiring the first exposure image fusion weight map corresponding to each exposure image includes:
for each exposure image, obtaining the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and a preset pixel reference value.
Specifically, each pixel of each exposure image corresponds to a pixel value (gray value). According to the difference between each pixel value and the preset pixel reference value, an exposure image fusion weight map can be obtained, and this map is determined as the first exposure image fusion weight map.
For each exposure image, the specific steps of acquiring the first exposure image fusion weight map are as follows:
Step S210: calculate the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value.
Specifically, each exposure image corresponds to multiple pixels; calculating the difference between the pixel value of each pixel and the preset pixel reference value gives a set of differences. As a simple example, consider a 3*3 exposure image with pixel values (138, 148, 158; 148, 158, 168; 158, 168, 178) and a preset pixel reference value of 128. The differences are (138-128, 148-128, 158-128; 148-128, 158-128, 168-128; 158-128, 168-128, 178-128) = (10, 20, 30; 20, 30, 40; 30, 40, 50). A 3*3 exposure image is used here only for illustration; the images actually processed are generally very large, but the computation is the same and is not described in further detail.
Step S220: obtain the first exposure image fusion weight map according to the ratio between each difference and the preset pixel reference value, where the larger the difference corresponding to a pixel in the exposure image, the lower that pixel's fusion weight in the first exposure image fusion weight map.
Specifically, after the differences are obtained in step S210, the first exposure image fusion weight map can be derived directly from the ratio of each difference to the preset pixel reference value. Taking the ratio of each difference to the reference value normalizes the resulting weights. The larger the difference corresponding to a pixel, the larger the gap between the pixel's value and the reference value, and the greater its degree of distortion; accordingly, the pixel is given a lower fusion weight during fusion, which helps keep the transitions between regions natural. For example, the ratios are (10, 20, 30; 20, 30, 40; 30, 40, 50)/128 = (10/128, 20/128, 30/128; 20/128, 30/128, 40/128; 30/128, 40/128, 50/128). Inverting with the value 1 gives the first exposure image fusion weight map (1-10/128, 1-20/128, 1-30/128; 1-20/128, 1-30/128, 1-40/128; 1-30/128, 1-40/128, 1-50/128). Optionally, other weight calculation methods may also be used according to the nature of the actual images and user requirements; no specific limitation is made here.
In the above embodiment, the first exposure image fusion weight map is obtained by calculating the difference between the pixel value of each pixel and the preset pixel reference value and taking the ratio of the difference to the reference value, with a larger per-pixel difference giving a lower fusion weight. Because the first fusion weight map is determined from these per-pixel ratios for each exposure image, it contains the characteristics of each exposure image and can maximize the useful information in each one.
In one embodiment, as shown in FIG. 3, which is a schematic flowchart of an implementable manner of step S300, step S300 of acquiring the region area of each over-exposed region in each exposure image includes:
Step S310: perform over-exposed region detection on each exposure image to obtain an over-exposed region mask map corresponding to each exposure image.
Specifically, taking binary 0 and 1 as an example: when detecting the over-exposed regions of an exposure image, a detected pixel is represented by 1 if it is an over-exposed point and by 0 if it is a non-over-exposed point, and the final detection result is used as the over-exposed region mask map. As a simple example, for a 3*3 exposure image, a detection point whose brightness value is greater than a given preset threshold is considered an over-exposed point, and a point at or below that threshold is considered a non-over-exposed point. When the actual exposure image is (over-exposed, over-exposed, over-exposed; over-exposed, over-exposed, non-over-exposed; over-exposed, non-over-exposed, non-over-exposed), the corresponding over-exposed region mask map can be expressed as (1, 1, 1; 1, 1, 0; 1, 0, 0). Again, a 3*3 exposure image is used only for illustration; the images actually processed are generally very large, but the computation is the same.
Step S320: according to each over-exposed region mask map, the exposure image corresponding to that mask map is segmented to obtain the corresponding over-exposed regions.
Specifically, from the mask map obtained in step S310, it can be seen that the upper-left corner of the mask is all "1", indicating that the upper-left of the corresponding exposure image is an over-exposed region; similarly, the lower-right corner of the mask is all "0", indicating that the lower-right of the corresponding exposure image is a non-over-exposed region. Segmenting the image regions whose mask value is "1" yields the corresponding over-exposed regions. A pixel-neighborhood traversal method can be used to segment the mask map into the corresponding over-exposed regions (the specific region segmentation algorithm is not limited here). For example, segmenting the above 3*3 exposure image with pixel-neighborhood traversal yields one over-exposed region; of course, there may also be multiple over-exposed regions in an exposure image.
Step S330: acquire the region area of each over-exposed region in each exposure image.
Specifically, after the over-exposed regions are obtained in step S320, the area of each over-exposed region is calculated, giving the region area of every over-exposed region in every exposure image.
In the above embodiment, over-exposed region detection is performed on each exposure image to obtain the corresponding mask map; each exposure image is then segmented according to its mask map to obtain the corresponding over-exposed regions; finally, the region area of each over-exposed region in each exposure image is acquired. Computing the area of every over-exposed region allows the subsequent fusion to be performed according to the different region areas, so that the obtained fused image can take into account the characteristics of the different over-exposed regions of each image, avoid the loss of detail in small over-exposed regions, and retain their texture information.
In one embodiment, as an implementable manner of step S400, step S400 of, for each exposure image, smoothing the first exposure image fusion weight map corresponding to the exposure image using the region area of each over-exposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image, includes:
performing smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image according to a preset correspondence between over-exposed region area and smoothing coefficient, together with the region area of each over-exposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image.
Here, the smoothing coefficient is a coefficient of the smoothing method; it determines the smoothing level and the speed of response to the difference between the predicted value and the actual result. The closer the smoothing coefficient is to 1, the faster the influence of actual values on the smoothed values declines; the closer it is to 0, the slower that influence declines. Given these properties, in this application a smaller smoothing coefficient can be used when the region area is small, and a larger smoothing coefficient when the region area is large, so as to preserve the detail of the image where the region area is small. Optionally, the square root of the area of the current over-exposed region can also be used as the smoothing coefficient.
Specifically, there is a correspondence between over-exposed region area and smoothing coefficient, and this correspondence can be preset in the processor according to actual needs. From the preset correspondence and the region area of each over-exposed region, a set of smoothing coefficients is obtained, and smoothing the first fusion weight map with the obtained coefficients yields the second fusion weight map. For example, the smoothing filter can be implemented with a Gaussian blur, in which case the smoothing coefficients obtained above can be used as the Gaussian blur radius. This is only one implementation of the smoothing filter; the specific method is not limited here.
In one embodiment, as shown in FIG. 4, which is a schematic flowchart of an implementable manner of step S400, performing smoothing filtering on the first fusion weight map according to the preset correspondence between over-exposed region area and smoothing coefficient and the region area of each over-exposed region in the exposure image, to obtain the second fusion weight map corresponding to the exposure image, includes:
Step S410: obtain, according to the correspondence, the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image.
Specifically, the area value matching the size of each over-exposed region is looked up in the preset correspondence, and the smoothing coefficient corresponding to that region's area is obtained from the found area value and the correspondence. In the same way, the smoothing coefficients corresponding to the areas of all over-exposed regions of every exposure image can be obtained.
Step S420: perform smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image according to the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image.
Specifically, smoothing the first fusion weight map corresponding to the exposure image with the coefficients obtained in step S410 gives the second fusion weight map. For example, when the weight distribution in the first fusion weight map is (0.1, 0.05, 0.08; 0.1, 0.06, 0.9; 0.09, 0.1, 0.12), the weight 0.9 is clearly an outlier. Different filtering methods will give different filtering results, but the results generally fall within a certain range; the second fusion weight map after filtering might be (0.1, 0.05, 0.08; 0.1, 0.06, 0.1; 0.09, 0.1, 0.12). This is an obvious example; when the weights contain less conspicuous outlying values, the same method can be used to smooth the first fusion weight map into the second fusion weight map.
In the above embodiment, the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image is obtained according to the correspondence, and the first fusion weight map corresponding to the exposure image is smoothed with those coefficients to obtain the second fusion weight map corresponding to the exposure image. This acquisition process takes into account the characteristics of the different over-exposed regions of each image, avoids the loss of detail in small over-exposed regions, retains the texture information of small over-exposed regions, and yields a more realistic fused image.
In one embodiment, before step S500 of performing image fusion processing on the multiple exposure images according to each second fusion weight map to obtain a fused image, the method further includes:
performing smoothing filtering on the second exposure image fusion weight map with a preset value as the filter radius, to obtain an updated second exposure image fusion weight map, wherein the preset value is smaller than a preset threshold.
Specifically, filtering the first fusion weight map according to the over-exposed region areas may introduce a certain boundary effect. Therefore, smoothing the entire resulting second fusion weight map with a value smaller than the preset threshold as the filter radius avoids the boundary effects that may arise in the above processing and makes the fused image obtained from the second fusion weight map more realistic. Here, the preset value smaller than the preset threshold can be set to 3*3 or 5*5 or another small value; smoothing the second fusion weight map with this value as the filter radius eliminates possible boundary effects. When the preset value is large, different regions may become excessively blurred; the preset value therefore needs to be set smaller than the preset threshold to avoid excessive blurring.
In one embodiment, step S500 of performing image fusion processing on the multiple exposure images according to each second exposure image fusion weight map to obtain a fused image includes:
performing a weighted summation on the multiple exposure images according to the fusion weight corresponding to each pixel in each second exposure image fusion weight map, to obtain the fused image.
Specifically, with the second fusion weight maps obtained by the above method, which contain both the overall characteristics of each image and the characteristics of its different over-exposed regions, the exposure images are weighted and summed to obtain the fused image. This operation fully considers the characteristics of each image while taking into account the characteristics of its different over-exposed regions, avoids the loss of detail in small over-exposed regions, retains the texture information of small over-exposed regions, and yields a more realistic fused image.
In one embodiment, as shown in FIG. 5, an image fusion apparatus is provided, including an image acquisition module 501, a first weight acquisition module 502, a region area acquisition module 503, a second weight acquisition module 504, and an image fusion module 505, wherein:
the image acquisition module 501 is configured to acquire, based on the same target scene, multiple exposure images with different exposure levels;
the first weight acquisition module 502 is configured to acquire a first exposure image fusion weight map corresponding to each exposure image, wherein the first exposure image fusion weight map includes the fusion weight corresponding to each pixel of the exposure image;
the region area acquisition module 503 is configured to acquire the region area of each over-exposed region in each exposure image;
the second weight acquisition module 504 is configured to, for each exposure image, perform smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image by using the region area of each over-exposed region in the exposure image, to obtain a second exposure image fusion weight map corresponding to the exposure image;
the image fusion module 505 is configured to perform image fusion processing on the multiple exposure images according to each second exposure image fusion weight map, to obtain a fused image.
In one of the embodiments, the first weight acquisition module 502 is further configured to, for each exposure image, obtain the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and a preset pixel reference value.
In one of the embodiments, the first weight acquisition module 502 is further configured to calculate the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value, and obtain the first exposure image fusion weight map according to the ratio between the difference and the preset pixel reference value, wherein the larger the difference corresponding to a pixel in the exposure image, the lower that pixel's fusion weight in the first exposure image fusion weight map.
In one of the embodiments, the region area acquisition module 503 is further configured to perform over-exposed region detection on each exposure image to obtain an over-exposed region mask map corresponding to each exposure image; perform region segmentation, according to each mask map, on the exposure image corresponding to that mask map, to obtain the corresponding over-exposed regions; and acquire the region area of each over-exposed region in each exposure image.
In one of the embodiments, the second weight acquisition module 504 is further configured to perform smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image according to the preset correspondence between over-exposed region area and smoothing coefficient and the region area of each over-exposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image.
In one of the embodiments, the second weight acquisition module 504 is further configured to obtain, according to the correspondence, the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image, and to perform smoothing filtering on the first fusion weight map corresponding to the exposure image according to those coefficients, to obtain the second fusion weight map corresponding to the exposure image.
In one of the embodiments, the second weight acquisition module 504 is further configured to perform smoothing filtering on the second exposure image fusion weight map with a preset value as the filter radius, to obtain an updated second exposure image fusion weight map, wherein the preset value is smaller than a preset threshold.
In one of the embodiments, the image fusion module 505 is further configured to perform a weighted summation on the multiple exposure images according to the fusion weight corresponding to each pixel in each second exposure image fusion weight map, to obtain the fused image.
For specific limitations of the image fusion apparatus, refer to the limitations of the image fusion method above, which are not repeated here. Each module of the above image fusion apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, a processor of a computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
The component embodiments of this application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the computing device according to the embodiments of this application. This application may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing this application may be stored on a computer-readable medium or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
In one embodiment, a computing device is provided; the computing device may be a terminal, and its internal structure diagram may be as shown in FIG. 6. The computing device includes a processor, a memory, a network interface, a display screen, and an input apparatus connected through a system bus. The processor of the computing device is configured to provide computing and control capabilities. The memory of the computing device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer program code; the program code can be read from, or written into, one or more computer program products, which include program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks. The internal memory provides an environment for the operation of the operating system and computer program in the non-volatile storage medium. The network interface of the computing device is used to communicate with external terminals through a network connection. The computer program, when executed by the processor, implements an image fusion method. The display screen of the computing device may be a liquid crystal display or an electronic ink display; the input apparatus of the computing device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the computing device, or an external keyboard, touchpad, mouse, or the like.
Those skilled in the art can understand that the structure shown in FIG. 6 is only a block diagram of part of the structure related to the solution of this application and does not limit the computing device to which the solution of this application is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computing device is provided, including a memory and a processor; a computer program including computer-readable code is stored in the memory, and the processor implements the following steps when executing the computer program:
acquiring, based on the same target scene, multiple exposure images with different exposure levels;
acquiring a first exposure image fusion weight map corresponding to each exposure image, wherein the first exposure image fusion weight map includes the fusion weight corresponding to each pixel of the exposure image;
acquiring the region area of each over-exposed region in each exposure image;
for each exposure image, performing smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image by using the region area of each over-exposed region in the exposure image, to obtain a second exposure image fusion weight map corresponding to the exposure image;
performing image fusion processing on the multiple exposure images according to each second exposure image fusion weight map, to obtain a fused image.
In one of the embodiments, the processor further implements the following steps when executing the computer program: for each exposure image, obtaining the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and a preset pixel reference value.
In one of the embodiments, the processor further implements the following steps when executing the computer program: calculating the difference between the pixel value of each pixel of the exposure image and the preset pixel reference value; obtaining the first exposure image fusion weight map according to the ratio between the difference and the preset pixel reference value, wherein the larger the difference corresponding to a pixel in the exposure image, the lower that pixel's fusion weight in the first exposure image fusion weight map.
In one of the embodiments, the processor further implements the following steps when executing the computer program: performing over-exposed region detection on each exposure image to obtain an over-exposed region mask map corresponding to each exposure image; performing region segmentation, according to each mask map, on the exposure image corresponding to that mask map, to obtain the corresponding over-exposed regions; acquiring the region area of each over-exposed region in each exposure image.
In one of the embodiments, the processor further implements the following steps when executing the computer program: performing smoothing filtering on the first exposure image fusion weight map corresponding to the exposure image according to the preset correspondence between over-exposed region area and smoothing coefficient and the region area of each over-exposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image.
In one of the embodiments, the processor further implements the following steps when executing the computer program: obtaining, according to the correspondence, the smoothing coefficient corresponding to the region area of each over-exposed region in the exposure image; performing smoothing filtering on the first fusion weight map corresponding to the exposure image according to those coefficients, to obtain the second fusion weight map corresponding to the exposure image.
In one of the embodiments, the processor further implements the following steps when executing the computer program: performing smoothing filtering on the second exposure image fusion weight map with a preset value as the filter radius, to obtain an updated second exposure image fusion weight map, wherein the preset value is smaller than a preset threshold.
In one of the embodiments, the processor further implements the following steps when executing the computer program: performing a weighted summation on the multiple exposure images according to the fusion weight corresponding to each pixel in each second exposure image fusion weight map, to obtain the fused image.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. The computer program, when executed by a processor, implements the following steps:
acquiring multiple exposure images with different exposure levels of the same target scene;
acquiring a first exposure image fusion weight map corresponding to each exposure image, where the first exposure image fusion weight map includes the fusion weight corresponding to each pixel of the exposure image;
acquiring the region area of each overexposed region in each exposure image;
for each exposure image, smoothing the first exposure image fusion weight map corresponding to the exposure image using the region area of each overexposed region in the exposure image, to obtain a second exposure image fusion weight map corresponding to the exposure image;
performing image fusion processing on the multiple exposure images according to each second exposure image fusion weight map, to obtain a fused image.
In one embodiment, the computer program, when executed by a processor, further implements the following steps: for each exposure image, obtaining the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and a preset pixel base value.
In one embodiment, the computer program, when executed by a processor, further implements the following steps: calculating the difference between the pixel value of each pixel of the exposure image and the preset pixel base value; and obtaining the first exposure image fusion weight map according to the ratio of the difference to the preset pixel base value, where the larger the difference corresponding to a pixel in the exposure image, the lower that pixel's fusion weight in the first exposure image fusion weight map.
In one embodiment, the computer program, when executed by a processor, further implements the following steps: performing overexposed region detection on each exposure image to obtain an overexposed region mask map corresponding to each exposure image; performing region segmentation on the exposure image corresponding to each overexposed region mask map according to the mask map, to obtain the corresponding overexposed regions; and acquiring the region area of each overexposed region in each exposure image.
In one embodiment, the computer program, when executed by a processor, further implements the following steps: smoothing the first exposure image fusion weight map corresponding to the exposure image according to a preset correspondence between overexposed region area and smoothing coefficient, together with the region area of each overexposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image.
In one embodiment, the computer program, when executed by a processor, further implements the following steps: obtaining, according to the correspondence, the smoothing coefficient corresponding to the region area of each overexposed region in the exposure image; and smoothing the first exposure image fusion weight map corresponding to the exposure image according to those smoothing coefficients, to obtain the second exposure image fusion weight map corresponding to the exposure image.
In one embodiment, the computer program, when executed by a processor, further implements the following steps: smoothing the second exposure image fusion weight map with a preset value as the filter radius, to obtain an updated second exposure image fusion weight map, where the preset value is less than a preset threshold.
In one embodiment, the computer program, when executed by a processor, further implements the following steps: performing a weighted summation of the multiple exposure images according to the fusion weight corresponding to each pixel in each second exposure image fusion weight map, to obtain the fused image.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Reference herein to "one embodiment", "an embodiment", or "one or more embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Note also that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments have been described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (12)

  1. An image fusion method, characterized in that the method comprises:
    acquiring multiple exposure images with different exposure levels of the same target scene;
    acquiring a first exposure image fusion weight map corresponding to each of the exposure images, wherein the first exposure image fusion weight map comprises the fusion weight corresponding to each pixel of the exposure image;
    acquiring the region area of each overexposed region in each of the exposure images;
    for each of the exposure images, smoothing the first exposure image fusion weight map corresponding to the exposure image using the region area of each overexposed region in the exposure image, to obtain a second exposure image fusion weight map corresponding to the exposure image;
    performing image fusion processing on the multiple exposure images according to each of the second exposure image fusion weight maps, to obtain a fused image.
  2. The method according to claim 1, characterized in that the acquiring a first exposure image fusion weight map corresponding to each of the exposure images comprises:
    for each of the exposure images, obtaining the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and a preset pixel base value.
  3. The method according to claim 2, characterized in that the obtaining the first exposure image fusion weight map according to the difference between the pixel value of each pixel of the exposure image and the preset pixel base value comprises:
    calculating the difference between the pixel value of each pixel of the exposure image and the preset pixel base value;
    obtaining the first exposure image fusion weight map according to the ratio of the difference to the preset pixel base value, wherein the larger the difference corresponding to a pixel in the exposure image, the lower the fusion weight of the pixel in the first exposure image fusion weight map.
  4. The method according to claim 1, characterized in that the acquiring the region area of each overexposed region in each of the exposure images comprises:
    performing overexposed region detection on each of the exposure images to obtain an overexposed region mask map corresponding to each of the exposure images;
    performing region segmentation on the exposure image corresponding to each of the overexposed region mask maps according to the mask map, to obtain the corresponding overexposed regions;
    acquiring the region area of each overexposed region in each of the exposure images.
  5. The method according to claim 1, characterized in that the smoothing the first exposure image fusion weight map corresponding to the exposure image using the region area of each overexposed region in the exposure image, to obtain a second exposure image fusion weight map corresponding to the exposure image, comprises:
    smoothing the first exposure image fusion weight map corresponding to the exposure image according to a preset correspondence between overexposed region area and smoothing coefficient and the region area of each overexposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image.
  6. The method according to claim 5, characterized in that the smoothing the first exposure image fusion weight map corresponding to the exposure image according to a preset correspondence between overexposed region area and smoothing coefficient and the region area of each overexposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image, comprises:
    obtaining, according to the correspondence, the smoothing coefficient corresponding to the region area of each overexposed region in the exposure image;
    smoothing the first exposure image fusion weight map corresponding to the exposure image according to the smoothing coefficient corresponding to the region area of each overexposed region in the exposure image, to obtain the second exposure image fusion weight map corresponding to the exposure image.
  7. The method according to claim 1, characterized in that, before the performing image fusion processing on the multiple exposure images according to each of the second exposure image fusion weight maps to obtain a fused image, the method further comprises:
    smoothing the second exposure image fusion weight map with a preset value as the filter radius, to obtain an updated second exposure image fusion weight map, wherein the preset value is less than a preset threshold.
  8. The method according to claim 1, characterized in that the performing image fusion processing on the multiple exposure images according to each of the second exposure image fusion weight maps to obtain a fused image comprises:
    performing a weighted summation of the multiple exposure images according to the fusion weight corresponding to each pixel in each of the second exposure image fusion weight maps, to obtain the fused image.
  9. An image fusion apparatus, characterized in that the apparatus comprises:
    an image acquisition module, configured to acquire multiple exposure images with different exposure levels of the same target scene;
    a first weight acquisition module, configured to acquire a first exposure image fusion weight map corresponding to each of the exposure images, wherein the first exposure image fusion weight map comprises the fusion weight corresponding to each pixel of the exposure image;
    a region area acquisition module, configured to acquire the region area of each overexposed region in each of the exposure images;
    a second weight acquisition module, configured to, for each of the exposure images, smooth the first exposure image fusion weight map corresponding to the exposure image using the region area of each overexposed region in the exposure image, to obtain a second exposure image fusion weight map corresponding to the exposure image;
    an image fusion module, configured to perform image fusion processing on the multiple exposure images according to each of the second exposure image fusion weight maps, to obtain a fused image.
  10. A computing processing device, characterized by comprising:
    a memory storing computer-readable code;
    one or more processors, wherein when the computer-readable code is executed by the one or more processors, the computing processing device performs the image fusion method according to any one of claims 1-8.
  11. A computer program, comprising computer-readable code that, when run on a computing processing device, causes the computing processing device to perform the image fusion method according to any one of claims 1-8.
  12. A computer-readable medium storing the computer program according to claim 11.
PCT/CN2020/106295 2019-10-12 2020-07-31 Method and device for image fusion, computing processing device, and storage medium WO2021068618A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/762,532 US20220383463A1 (en) 2019-10-12 2020-07-31 Method and device for image fusion, computing processing device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910967375.8 2019-10-12
CN201910967375.8A CN110717878B (zh) 2019-10-12 2019-10-12 Image fusion method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021068618A1 true WO2021068618A1 (zh) 2021-04-15

Family

ID=69212556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/106295 WO2021068618A1 (zh) 2019-10-12 2020-07-31 Method and device for image fusion, computing processing device, and storage medium

Country Status (3)

Country Link
US (1) US20220383463A1 (zh)
CN (1) CN110717878B (zh)
WO (1) WO2021068618A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592777A (zh) * 2021-06-30 2021-11-02 北京旷视科技有限公司 Image fusion method and apparatus for dual-camera photographing, and electronic system
CN113891012A (zh) * 2021-09-17 2022-01-04 北京极豪科技有限公司 Image processing method, apparatus, device, and storage medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717878B (zh) 2019-10-12 2022-04-15 北京迈格威科技有限公司 Image fusion method and apparatus, computer device, and storage medium
CN111311532B (zh) * 2020-03-26 2022-11-11 深圳市商汤科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112823374A (zh) * 2020-03-30 2021-05-18 深圳市大疆创新科技有限公司 Infrared image processing method, apparatus, device, and storage medium
CN111641806A (zh) * 2020-05-11 2020-09-08 浙江大华技术股份有限公司 Halo suppression method, device, computer device, and readable storage medium
CN111882550A (zh) * 2020-07-31 2020-11-03 上海眼控科技股份有限公司 Hail detection method and apparatus, computer device, and readable storage medium
CN113674193A (zh) * 2021-09-03 2021-11-19 上海肇观电子科技有限公司 Image fusion method, electronic device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110194850A1 (en) * 2010-02-11 2011-08-11 Samsung Electronics Co., Ltd. Wide dynamic range hardware apparatus and photographing apparatus including the same
US20130258175A1 (en) * 2012-04-02 2013-10-03 Canon Kabushiki Kaisha Image sensing apparatus, exposure control method and recording medium
JP2016115953A (ja) * 2014-12-10 2016-06-23 Hanwha Techwin Co., Ltd. Image processing apparatus and image processing method
CN110035239A (zh) * 2019-05-21 2019-07-19 北京理工大学 Multi-integration-time infrared image fusion method based on grayscale-gradient optimization
CN110087003A (zh) * 2019-04-30 2019-08-02 深圳市华星光电技术有限公司 Multi-exposure image fusion method
CN110189285A (zh) * 2019-05-28 2019-08-30 北京迈格威科技有限公司 Multi-frame image fusion method and apparatus
CN110717878A (zh) * 2019-10-12 2020-01-21 北京迈格威科技有限公司 Image fusion method and apparatus, computer device, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101633893B1 (ko) * 2010-01-15 2016-06-28 삼성전자주식회사 Image synthesis apparatus and method for synthesizing multi-exposure images
CN103247036B (zh) * 2012-02-10 2016-05-18 株式会社理光 Multi-exposure image fusion method and device
CN102970549B (zh) * 2012-09-20 2015-03-18 华为技术有限公司 Image processing method and device
CN104077759A (zh) * 2014-02-28 2014-10-01 西安电子科技大学 Multi-exposure image fusion method based on color perception and global quality factor
CN106534677B (zh) * 2016-10-27 2019-12-17 成都西纬科技有限公司 Image overexposure optimization method and device
CN107220956A (zh) * 2017-04-18 2017-09-29 天津大学 HDR image fusion method based on multiple LDR images with different exposure levels
CN108364275B (zh) * 2018-03-02 2022-04-12 成都西纬科技有限公司 Image fusion method and apparatus, electronic device, and medium


Also Published As

Publication number Publication date
CN110717878A (zh) 2020-01-21
CN110717878B (zh) 2022-04-15
US20220383463A1 (en) 2022-12-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20873796; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20873796; Country of ref document: EP; Kind code of ref document: A1)