CN113674193A - Image fusion method, electronic device and storage medium - Google Patents

Image fusion method, electronic device and storage medium

Info

Publication number
CN113674193A
Authority
CN
China
Prior art keywords
image
pixel value
pixel
weight
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111031156.2A
Other languages
Chinese (zh)
Inventor
陈果
�田润
周骥
冯歆鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NextVPU Shanghai Co Ltd
Original Assignee
NextVPU Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NextVPU Shanghai Co Ltd filed Critical NextVPU Shanghai Co Ltd
Priority to CN202111031156.2A priority Critical patent/CN113674193A/en
Publication of CN113674193A publication Critical patent/CN113674193A/en
Priority to PCT/CN2022/114592 priority patent/WO2023030139A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

Provided are an image fusion method, an electronic device, and a storage medium. The image fusion method comprises the following steps: acquiring a first image and a second image captured for the same scene, wherein the exposure time of the first image is longer than that of the second image, and the first image and the second image respectively comprise a plurality of first pixel values and a plurality of second pixel values corresponding to a plurality of pixel positions; determining, from the first image, a first weight of the first pixel value and a second weight of the second pixel value for each pixel position; and respectively fusing the first pixel value and the second pixel value of each pixel position according to the corresponding first weight and second weight to obtain a target image. The image fusion method provided by the present disclosure enables high-quality and efficient image fusion.

Description

Image fusion method, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image fusion method, an electronic device, and a storage medium.
Background
The Dynamic Range (DR) of the luminance of a real scene is typically larger than the dynamic range of a camera. When a camera captures an image, a single exposure can record only part of the dynamic range of the real scene, so an overexposed area or an underexposed area often appears in the captured image, detail information of the scene is lost, and the visual effect of the image is poor.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The present disclosure provides an image fusion method, an electronic device, and a storage medium to achieve high-quality and efficient image fusion.
According to an aspect of the present disclosure, an image fusion method is provided. The method comprises the following steps: acquiring a first image and a second image captured for the same scene, wherein the exposure time of the first image is longer than that of the second image, and the first image and the second image respectively comprise a plurality of first pixel values and a plurality of second pixel values corresponding to a plurality of pixel positions; determining, from the first image, a first weight of the first pixel value and a second weight of the second pixel value for each pixel position; and respectively fusing the first pixel value and the second pixel value of each pixel position according to the corresponding first weight and second weight to obtain a target image.
According to another aspect of the present disclosure, an electronic device is provided. The electronic device includes: a processor; and a memory storing a program comprising instructions which, when executed by the processor, cause the processor to perform the method according to the above.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium storing a program is provided. The program comprises instructions which, when executed by a processor of the electronic device, cause the electronic device to perform the method according to the above.
According to another aspect of the present disclosure, a computer program product is provided. The computer program product comprises a computer program which, when executed by a processor, implements the above-described method.
According to an embodiment of the present disclosure, the first image and the second image are a long-exposure image and a short-exposure image, respectively, captured for the same scene, and include a plurality of first pixel values and a plurality of second pixel values corresponding to a plurality of pixel positions, respectively. The contribution rates of the long-exposure image and the short-exposure image at each pixel position to image fusion are determined from the long-exposure image; that is, the first weight of the first pixel value and the second weight of the second pixel value at each pixel position are determined. Since the first weight and the second weight differ across pixel positions, adaptive fusion of the pixel values at different pixel positions is realized, the pixel values of the fused target image are smoother, and the image quality is higher.
The brightness of the long-exposure image is generally higher, and the long-exposure image is more suitable for the visual requirement of human eyes. The contribution rates of the long exposure image and the short exposure image to image fusion are determined according to the long exposure image, so that the target image obtained by fusion can have a good visual effect.
Moreover, the image fusion scheme of the embodiment of the disclosure has small calculation amount, and can realize efficient and real-time image fusion.
These and other aspects of the disclosure will be apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 is a flowchart illustrating an image fusion method according to an exemplary embodiment;
FIG. 2 is a diagram showing a comparison of a fused target image and an image fused using a related art according to an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating an image fusion process according to an exemplary embodiment; and
FIG. 4 is a block diagram illustrating an example of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
The dynamic range of the brightness of a real scene is usually larger than that of a camera, so that an overexposed area or an underexposed area often appears in a single image acquired by the camera through one exposure. The brightness of the overexposed area is too high, and the brightness of the underexposed area is too low, so that the image details of the overexposed area and the underexposed area cannot be distinguished, and the visual effect of the image is poor.
To improve the visual effect of the image, multiple images of the same real scene, each containing a different dynamic range in the real scene, may be acquired by multiple exposures. Then, the multiple images are fused to obtain a High Dynamic Range (HDR) image.
In the related art, the commonly used image fusion method mainly includes the following two methods:
one is an image fusion method based on the inverse camera response process. The core of the method is to calculate a response curve of a camera during exposure (the horizontal axis of the response curve is generally the brightness value of a real scene, and the vertical axis is the pixel value of an image), and inversely transform the images under different exposures to the real brightness value through the response curve for synthesis. Although the method conforms to the physical principle of the camera and can effectively restore the details lost in the imaging process of the camera, the method needs to calibrate the response curve of the camera in advance, the response curves of different cameras and different sensors are different, if the camera or the sensor is replaced, the response curve needs to be corrected or recalibrated, the operation is inconvenient, and the calculation efficiency is low. Moreover, the HDR image obtained in this way usually has artifacts remaining, and the local contrast is relatively poor.
The other method is to directly fuse images under different exposures to generate an HDR image. The method does not consider the physical parameters of the camera, and does not need to calibrate the response curve of the camera. However, when the exposure time difference of the multiple images to be fused is too large, the overexposed region of the long-exposure image may correspond to the underexposed region of the short-exposure image, resulting in uneven pixel values of the fused image, which is easy to generate artifacts or false boundaries, and poor image quality.
In view of the problems in the related art, the present disclosure provides an image fusion method, an electronic device, and a storage medium to achieve high-quality and high-efficiency image fusion. Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
FIG. 1 shows a flow diagram of an image fusion method 100 according to an embodiment of the present disclosure.
The method 100 may be performed in an electronic device, i.e., the subject of the method 100 is the electronic device. The electronic device may be a stationary computer device such as a desktop computer, a server computer, etc., or a mobile computer device such as a cell phone, a tablet computer, a smart wearable device (e.g., a smart watch, a smart headset, etc.), etc. In some embodiments, the electronic device may also be a camera with computing capabilities. In other embodiments, the electronic device may be an assistive reading device.
As shown in fig. 1, the method 100 includes:
step S110, acquiring a first image and a second image captured for the same scene, wherein the exposure time of the first image is longer than that of the second image, and the first image and the second image respectively comprise a plurality of first pixel values and a plurality of second pixel values corresponding to a plurality of pixel positions;
step S120, determining a first weight of a first pixel value and a second weight of a second pixel value of each pixel position according to the first image; and
step S130, respectively fusing the first pixel value and the second pixel value of each pixel position according to the corresponding first weight and second weight to obtain a target image.
According to an embodiment of the present disclosure, the first image and the second image are a long-exposure image and a short-exposure image, respectively, captured for the same scene, and include a plurality of first pixel values and a plurality of second pixel values corresponding to a plurality of pixel positions, respectively. The contribution rates of the long-exposure image and the short-exposure image at each pixel position to image fusion are determined from the long-exposure image; that is, the first weight of the first pixel value and the second weight of the second pixel value at each pixel position are determined. Since the first weight and the second weight differ across pixel positions, adaptive fusion of the pixel values at different pixel positions is realized, the pixel values of the fused target image are smoother, and the image quality is higher. The brightness of the long-exposure image is generally higher, which better suits the visual requirements of human eyes. Determining the contribution rates of the long-exposure image and the short-exposure image to image fusion from the long-exposure image therefore allows the fused target image to present a good visual effect. Moreover, the image fusion method of the embodiment of the present disclosure has a small amount of calculation and can realize efficient, real-time image fusion.
The various steps of method 100 are described in detail below.
In step S110, a first image and a second image captured for the same scene are acquired, an exposure time of the first image is longer than an exposure time of the second image, and the first image and the second image respectively include a plurality of first pixel values and a plurality of second pixel values corresponding to a plurality of pixel positions.
According to some embodiments, the first image and the second image may be captured by the same or different cameras for the same scene and then transferred to an electronic device for performing the method 100. Accordingly, the electronic device performs step S110 to acquire a first image and a second image.
The first image and the second image captured for the same scene may be two different images taken at a short time interval with the camera at the same position. It will be appreciated that the content captured in the first image and the second image is substantially the same since the position of the camera has not changed. The first image and the second image may be taken separately by employing different exposure parameters.
In an embodiment of the present disclosure, the exposure time of the first image is greater than the exposure time of the second image. That is, the first image is a long exposure image and the second image is a short exposure image. Accordingly, the first image appears visually brighter than the second image.
According to some embodiments, multiple images with different exposure times can be taken by the same camera for the same scene, and then two images are selected, wherein the image with the longer exposure time is used as the first image, and the image with the shorter exposure time is used as the second image. Further, in order to achieve better fusion effect of the first image and the second image and to obtain higher quality of the fused target image, according to some embodiments, the first image and the second image may be selected from the plurality of images according to response curves of cameras that capture the plurality of images.
In an embodiment of the disclosure, the first image includes a plurality of first pixel values respectively corresponding to a plurality of pixel positions, and the second image includes a plurality of second pixel values respectively corresponding to a plurality of pixel positions.
The pixel position may be represented by two-dimensional coordinates (x, y), for example. The pixel values (including the first pixel value, the second pixel value) may be, for example, a gray value of the pixel, or channel values of different image channels (e.g., R, G, B values of R, G, B channels, or Y, U, V values of Y, U, V channels, etc.).
Each pixel position corresponds to a first pixel value and a second pixel value in the first image. For example, the pixel position (x, y) corresponds to a first pixel value with coordinates (x, y) in the first image and a second pixel value with coordinates (x, y) in the second image. It should be noted that, in the embodiment of the present disclosure, the sizes (i.e., the number of pixels included in the width direction and the number of pixels included in the length direction) of the first image and the second image are the same. In some embodiments, if the sizes of the original captured images corresponding to the first image and the second image are different, the original captured images may be scaled to obtain the first image and the second image having the same sizes.
In step S120, a first weight of the first pixel value and a second weight of the second pixel value for each pixel position are determined based on the first image.
The first image is a long exposure image with higher brightness. The human eye has a better visual perception of a brighter long-exposure image than a dark short-exposure image (second image). The first weight of the first pixel value and the second weight of the second pixel value of each pixel position are determined according to the first image, namely the contribution rate of the long-exposure image and the short-exposure image to image fusion is determined by the long-exposure image, so that the target image obtained by fusion can meet the visual requirement of human eyes, and a good visual effect is presented.
In the embodiment of the present disclosure, image fusion is performed in units of pixels. That is, for each pixel position, the first pixel value and the second pixel value of the pixel position are fused to obtain a fused pixel value. And then combining the fused pixel values of the multiple pixel positions to obtain a fused image. According to some embodiments, the fused image may be directly taken as the target image. Alternatively, according to other embodiments, the fused image may be further processed (e.g., filtered as described below) to obtain the target image.
The first weight and the second weight are used for respectively representing the contribution of the first pixel value and the second pixel value to the fusion. For example, if the first weight is greater than the second weight, the first pixel value will contribute more to the fusion, and the fused pixel value will be derived by referring more to the first pixel value.
According to some embodiments, for any first pixel value, the larger the first pixel value, the smaller the first weight of the first pixel value.
The larger the first pixel value, the closer the corresponding pixel position is to the overexposed region. By setting the first weight to be inversely proportional to the first pixel value, the larger the first pixel value is, the smaller the first weight is, that is, the smaller the contribution of the first pixel value to the fusion is, so that the pixel value of the overexposure area in the long-exposure image can be suppressed during image fusion, the target image obtained by fusion is smoother, the occurrence of a fault or a false boundary is avoided, and the target image is ensured to have a good visual effect.
According to some embodiments, a sum of a first weight of a first pixel value and a second weight of a second pixel value corresponding to the same pixel position is a predetermined value (e.g., 1). Accordingly, the contribution of the first pixel value and the second pixel value to the fusion can be balanced. The second weight will be larger when the first weight is smaller. For example, when the first pixel value is located in the overexposed area of the first image, the first weight may be set to a smaller value, and accordingly, the second weight (i.e. 1 minus the first weight) is a larger value, so as to suppress the pixel value in the long-exposure image when the images are fused, and consider the pixel value in the short-exposure image more, so that the fused target image is smoother, avoids the occurrence of faults or false boundaries, and presents good visual effect.
Specifically, the first weight and the second weight may be determined in various ways.
According to some embodiments, for any pixel location, the first weight for that pixel location may be determined based on the first pixel value for that pixel location and the smallest pixel value in the first image (i.e., the smallest value of the plurality of first pixel values comprised by the first image). And, a second weight for the pixel location may be further determined based on the determined first weight.
According to some embodiments, the first weight for a pixel location is determined based on a relative pixel value range between a first pixel value for the pixel location and a minimum pixel value of the first image and a preset pixel value range determined based on the minimum pixel value.
The relative pixel value range may be, for example, the difference of the first pixel value and the minimum pixel value. For another example, the relative pixel value range may also be determined using a ratio of the first pixel value to the minimum pixel value.
The preset pixel value range may be determined based on the minimum pixel value and a first constant; for example, it may be the difference between a first constant that is greater than the minimum pixel value and the minimum pixel value. Similarly, the preset pixel value range may also be determined using the ratio of the first constant to the minimum pixel value. One skilled in the art may use various mathematical tools to determine the relative pixel value range and the preset pixel value range without departing from the principles of the present disclosure.
The first weight and the second weight may be determined according to the following equations (1) and (2):
w1(x,y)=1-((ILE(x,y)-min(ILE))/(α-min(ILE)))^β (1)
w2(x,y)=1-w1(x,y) (2)
In the formula, w1(x, y) is the first weight of the pixel position with coordinates (x, y), w2(x, y) is the second weight of the pixel position with coordinates (x, y), ILE is the first image, ILE(x, y) is the first pixel value of the pixel position with coordinates (x, y), min(ILE) is the minimum pixel value of the first image, α and β are preset constants greater than zero, min(ILE)<α≤max(ILE), and max(ILE) is the maximum pixel value of the first image (i.e., the maximum value of the plurality of first pixel values comprised by the first image). β is typically an integer greater than or equal to 1 and may be, for example, 1 or 2.
In the above formula (1) and formula (2), ILE(x,y)-min(ILE) is the relative pixel value range between the first pixel value and the minimum pixel value, and α-min(ILE) is the preset pixel value range determined based on the minimum pixel value.
In addition, in the case where α<max(ILE), the first weight w1(x, y) calculated based on the above formula (1) may be negative (when ILE(x, y)>α). When the first weight w1(x, y) is negative, the corresponding second weight w2(x, y) based on the above formula (2) is greater than 1. In other embodiments, on the basis of formula (1) and formula (2), when ILE(x, y)>α the first weight w1(x, y) may take the value 0 and the second weight w2(x, y) may take the value 1, that is, the first weight and the second weight are determined according to the following formula (3) and formula (4):
w1(x,y)=1-((ILE(x,y)-min(ILE))/(α-min(ILE)))^β if ILE(x,y)≤α, and w1(x,y)=0 if ILE(x,y)>α (3)
w2(x,y)=1-w1(x,y) (4)
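For illustration only, the following Python sketch shows one way the weights of formulas (1) to (4) could be computed with NumPy for a single-channel long-exposure image; the function name, the default β, and the explicit clamping are assumptions of this sketch rather than part of the disclosure.

```python
import numpy as np

def long_exposure_weights(i_le: np.ndarray, alpha: float, beta: float = 2.0):
    """Per-pixel weights derived from the long-exposure image (formulas (1)-(4) style).

    i_le  : first (long-exposure) image as a float array.
    alpha : preset constant with min(i_le) < alpha <= max(i_le).
    beta  : preset exponent (e.g., 1 or 2).
    """
    i_min = i_le.min()
    # Relative pixel value range divided by the preset pixel value range.
    rel = (i_le - i_min) / (alpha - i_min)
    w1 = 1.0 - rel ** beta       # first weight decreases as the first pixel value grows
    w1 = np.clip(w1, 0.0, 1.0)   # clamping as in formulas (3)-(4): w1 = 0 where i_le > alpha
    w2 = 1.0 - w1                # the two weights of one pixel position sum to 1
    return w1, w2
```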
according to further embodiments, for any pixel location, the first weight for that pixel location may be determined based on the first pixel value for that pixel location and the maximum pixel value in the first image. And, a second weight for the pixel location may be further determined based on the determined first weight.
According to some embodiments, the first weight for a pixel position is determined based on a relative pixel value range between a maximum pixel value of the first image and the first pixel value for the pixel position and a preset pixel value range determined based on the maximum pixel value.
The relative pixel value range may be, for example, the difference between the maximum pixel value and the first pixel value. For another example, the relative pixel value range may also be determined using the ratio of the maximum pixel value to the first pixel value. The preset pixel value range may be determined based on the maximum pixel value and a second constant; for example, it may be the difference between the maximum pixel value and a second constant that is smaller than the maximum pixel value. Similarly, the preset pixel value range may also be determined using the ratio of the maximum pixel value to the second constant. One skilled in the art may use various mathematical tools to determine the relative pixel value range and the preset pixel value range without departing from the principles of the present disclosure.
Accordingly, the first weight and the second weight may be determined according to the following equations (5) and (6):
w1(x,y)=((max(ILE)-ILE(x,y))/(max(ILE)-α))^β (5)
w2(x,y)=1-w1(x,y) (6)
In the formula, w1(x, y) is the first weight of the pixel position with coordinates (x, y), w2(x, y) is the second weight of the pixel position with coordinates (x, y), ILE is the first image, ILE(x, y) is the first pixel value of the pixel position with coordinates (x, y), max(ILE) is the maximum pixel value of the first image, α and β are preset constants greater than zero, min(ILE)≤α<max(ILE), and min(ILE) is the minimum pixel value of the first image. β is typically an integer greater than or equal to 1 and may be, for example, 1 or 2.
In the above formula (5) and formula (6), max(ILE)-ILE(x,y) is the relative pixel value range between the maximum pixel value and the first pixel value, and max(ILE)-α is the preset pixel value range determined based on the maximum pixel value.
Note that, at α > min (I)LE) In the case of (3), the first weight w calculated based on the above equation (5)1(x, y) may be greater than 1 (when ILE(x,y)<At α). At a first weight w1(x, y) is greater than 1, and the corresponding second weight w is based on the above equation (6)2(x, y) is less than 0. In other embodiments, the current I can be set on the basis of the formula (5) and the formula (6)LE(x,y)<Alpha, the first weight w1The value of (x, y) is 1, and the second weight w2The value of (x, y) is 0, that is, the first weight and the second weight are determined according to the following equations (7) and (8):
w1(x,y)=((max(ILE)-ILE(x,y))/(max(ILE)-α))^β if ILE(x,y)≥α, and w1(x,y)=1 if ILE(x,y)<α (7)
w2(x,y)=1-w1(x,y) (8)
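Similarly, a short sketch of the maximum-pixel-value variant of formulas (5) to (8), under the same assumptions as the previous sketch (illustrative names and defaults only):

```python
import numpy as np

def long_exposure_weights_max(i_le: np.ndarray, alpha: float, beta: float = 2.0):
    """Weights based on the maximum pixel value of the first image (formulas (5)-(8) style)."""
    i_max = i_le.max()
    rel = (i_max - i_le) / (i_max - alpha)   # relative range over preset range
    w1 = np.clip(rel ** beta, 0.0, 1.0)      # clamping as in formulas (7)-(8): w1 = 1 where i_le < alpha
    w2 = 1.0 - w1
    return w1, w2
```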
in step S130, the first pixel value and the second pixel value of each pixel position are respectively fused according to the corresponding first weight and the second weight to obtain the target image.
According to some embodiments, step S130 further comprises:
step S132, performing brightness compensation on the second pixel value according to the exposure parameters of the first image and the second image;
step S134, according to the first weight and the second weight, fusing the first pixel value and the compensated second pixel value to obtain a fused image of the first image and the second image; and
step S136, determining a target image based on the fused image.
With respect to step S132, according to some embodiments, a brightness compensation coefficient may be determined according to the exposure parameters of the first image and the second image, and brightness compensation may be performed on the second pixel value according to the brightness compensation coefficient. For example, the second pixel value may be multiplied by the brightness compensation coefficient to obtain the compensated second pixel value.
According to some embodiments, the exposure parameter comprises an exposure time and an exposure gain, and the brightness compensation factor is a quotient of a product of the exposure time and the exposure gain of the first image and a product of the exposure time and the exposure gain of the second image. That is, the luminance compensation coefficient is calculated according to the following equation (9):
ratio=(t1×g1)/(t2×g2) (9)
In the formula, ratio is the brightness compensation coefficient, t1 and g1 are respectively the exposure time and the exposure gain of the first image, and t2 and g2 are respectively the exposure time and the exposure gain of the second image.
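As a minimal illustration of formula (9), the brightness compensation coefficient can be computed directly from the exposure parameters; the function and parameter names below are illustrative.

```python
def brightness_compensation_coefficient(t1: float, g1: float, t2: float, g2: float) -> float:
    """ratio = (t1 * g1) / (t2 * g2), following formula (9)."""
    return (t1 * g1) / (t2 * g2)

# Example: long exposure 40 ms at gain 1.0, short exposure 5 ms at gain 1.0 -> ratio = 8.0
ratio = brightness_compensation_coefficient(0.040, 1.0, 0.005, 1.0)
```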
With respect to step S134, according to some embodiments, the first pixel value and the compensated second pixel value may be weighted and summed according to the first weight and the second weight to achieve fusion of the first pixel value and the second pixel value.
That is, the first pixel value and the compensated second pixel value may be fused according to the following equation (10-1):
IFuse(x,y)=w1(x,y)×ILE(x,y)+w2(x,y)×(ratio×ISE(x,y)) (10-1)
In the formula, IFuse(x, y), ILE(x, y), and ISE(x, y) are respectively the fused pixel value, the first pixel value, and the second pixel value of the pixel position with coordinates (x, y), w1(x, y) and w2(x, y) are respectively the first weight and the second weight of the pixel position with coordinates (x, y), and ratio is the brightness compensation coefficient.
According to further embodiments, the first pixel value and the compensated second pixel value may be fused by: respectively carrying out logarithmic transformation on the first pixel value and the compensated second pixel value to obtain a first logarithmic pixel value and a second logarithmic pixel value; and performing weighted summation on the first logarithmic pixel value and the second logarithmic pixel value according to the first weight and the second weight. That is, the first pixel value and the compensated second pixel value may be fused according to the following equation (10-2):
IFuse(x,y)=w1(x,y)×log(ILE(x,y))+w2(x,y)×log(ratio×ISE(x,y)) (10-2)
In the formula, IFuse(x, y), ILE(x, y), and ISE(x, y) are respectively the fused pixel value, the first pixel value, and the second pixel value of the pixel position with coordinates (x, y), log(ILE(x, y)) and log(ratio×ISE(x, y)) are respectively the first logarithmic pixel value and the second logarithmic pixel value, w1(x, y) and w2(x, y) are respectively the first weight and the second weight of the pixel position with coordinates (x, y), and ratio is the brightness compensation coefficient.
Based on the above-described formula (10-1) or (10-2), a fused pixel value for each pixel position can be obtained. And combining the fused pixel values of the pixel positions in the same image to obtain a fused image of the first image and the second image.
According to the embodiment of the disclosure, the image fusion can be completed by performing simple mathematical operations (such as addition, subtraction, multiplication and division) on the pixel values of the first image and the second image. The calculation amount is small, the calculation efficiency is high, and the real-time image fusion can be realized.
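By way of illustration, a minimal NumPy sketch of the pixel-wise fusion of formulas (10-1) and (10-2); the function name, the dtype casts, and the small epsilon guarding the logarithm are assumptions of this sketch and are not specified in the disclosure.

```python
import numpy as np

def fuse_pixels(i_le, i_se, w1, w2, ratio, logarithmic=False):
    """Weighted fusion of the long- and short-exposure images.

    Implements formula (10-1) when logarithmic is False and formula (10-2) when True.
    All arrays are assumed to share the same shape.
    """
    i_le = i_le.astype(np.float64)
    i_se_comp = ratio * i_se.astype(np.float64)   # brightness-compensated second image
    if logarithmic:
        eps = 1e-6                                # guard against log(0); an implementation choice
        return w1 * np.log(i_le + eps) + w2 * np.log(i_se_comp + eps)
    return w1 * i_le + w2 * i_se_comp
```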
After the fused image of the first image and the second image is obtained through step S134, step S136 may be performed to determine the target image based on the fused image.
In step S136, according to some embodiments, the fused image may be directly taken as the target image.
Fig. 2 shows a comparison of a fused image (i.e., a target image) fused according to an embodiment of the present disclosure and an image fused according to a camera response curve.
As shown in fig. 2, the first image 210 is a long exposure image, and a highlighted overexposed area exists in the middle of the image. The second image 220 is a short exposure image that is dark overall, and the image details are difficult to resolve. According to the image fusion method of the embodiment of the present disclosure, the first image 210 and the second image 220 are fused to obtain a fused image (target image) 230. According to an image fusion method based on a camera response curve in the related art, the first image 210 and the second image 220 are fused to obtain an image 240. As shown in fig. 2, an obvious pseudo boundary 242 exists in the middle of the image 240 obtained by fusion based on the camera response curve, so the image quality is not high and the visual effect is not good enough. In the fused image (target image) 230 obtained by fusion based on the embodiment of the present disclosure, the position corresponding to the pseudo boundary 242 is smoother, no obvious pseudo boundary exists, and the image quality and the visual effect are significantly better than those of the image 240.
According to other embodiments, a plurality of fusion pixel values included in the fusion image may be further normalized to be within a [0, 1] interval, and the normalized image is taken as a target image, so as to avoid data overflow or loss caused by too large or too small fusion pixel values, and further influence on image quality. According to some embodiments, each fused pixel value may be normalized to be within the [0, 1] interval by dividing by the maximum value of the plurality of fused pixel values.
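A one-line sketch of the normalization described above, assuming the fused pixel values are non-negative and stored in a NumPy array named i_fuse (an illustrative name):

```python
i_fuse_norm = i_fuse / i_fuse.max()  # divide every fused pixel value by the maximum, mapping them into [0, 1]
```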
According to further embodiments, the fused image (or the normalized fused image) may be processed to obtain the target image by: filtering the fused image to obtain a texture feature map, an edge feature map and an illumination feature map of the target image; and obtaining a target image based on the texture feature map, the edge feature map and the illumination feature map.
According to some embodiments, the texture feature map of the target image may be obtained by: filtering the fused image to decompose the fused image into a first base feature map (B1) and a first detail feature map (D1); and performing edge-preserving compression on the first detail feature map (D1) to obtain the texture feature map (FD1).
In the above embodiment, a multi-scale edge-preserving decomposition (MSEPD) algorithm may be employed to filter the fused image using an edge-preserving filter operator, which decomposes the image into a base feature map and a detail feature map. The edge-preserving filter operator may be any edge-preserving filter operator, such as a bilateral filter operator or a guided filter operator. With this multi-scale edge-preserving decomposition, features of different scales in the image can be extracted without down-sampling the image. The first base feature map (B1) and the first detail feature map (D1) can be obtained using the following formulas (11) and (12):
B1=MSEPD(IFuse,r1) (11)
D1=IFuse-B1 (12)
In the formula, IFuse is the fused image (or the normalized fused image), r1 is the side length of the filter window (correspondingly, the size of the filter window is r1×r1), MSEPD(IFuse, r1) denotes filtering the fused image IFuse with the MSEPD algorithm using a filter window of size r1×r1, and B1 and D1 are respectively the first base feature map and the first detail feature map.
The first detail feature map D1 obtained according to the above embodiment contains a large amount of texture detail information and noise. To ensure that the finally obtained target image presents good texture detail, the first detail feature map D1 needs only a small amount of compression or retention, that is, edge-preserving compression is performed on the first detail feature map D1 to obtain the texture feature map FD1.
According to some embodiments, a Sigmoid function may be employed to perform edge-preserving compression on the first detail feature map D1 to obtain the texture feature map FD1. That is, edge-preserving compression is performed on the first detail feature map D1 using the following formula (13):
FD1=Sigmoid(D1,δ1)=1/(1+exp(-δ1×D1)) (13)
In the formula, D1 and FD1 are respectively the first detail feature map and the texture feature map, exp denotes an exponential function with the natural constant e as the base, and δ1 is a preset constant.
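By way of illustration, the sketch below uses an OpenCV bilateral filter as the edge-preserving filter operator and implements formulas (11) to (13); the sigma parameters, the float32 cast, and the assumption that the image is normalized to [0, 1] are choices of this sketch, since the patent does not prescribe a particular operator.

```python
import cv2
import numpy as np

def msepd(img: np.ndarray, r: int) -> np.ndarray:
    """One edge-preserving filtering pass with an r x r window (bilateral filter stand-in)."""
    # Positional args: src, d (window diameter), sigmaColor, sigmaSpace.
    return cv2.bilateralFilter(img.astype(np.float32), r, 0.1, float(r))

def texture_feature_map(i_fuse: np.ndarray, r1: int, delta1: float):
    b1 = msepd(i_fuse, r1)                      # first base feature map, formula (11)
    d1 = i_fuse - b1                            # first detail feature map, formula (12)
    fd1 = 1.0 / (1.0 + np.exp(-delta1 * d1))    # Sigmoid edge-preserving compression, formula (13)
    return b1, fd1
```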
According to some embodiments, the edge feature map of the target image may be obtained by: filtering the first base feature map (B1) to decompose the first base feature map (B1) into a second base feature map (B2) and a second detail feature map (D2); and performing edge-preserving compression on the second detail feature map (D2) to obtain the edge feature map (FD2).
In the above embodiment, the MSEPD algorithm may be adopted to filter the first basic feature map, and the filtering process may be expressed as the following formulas (14), (15):
B2=MSEPD(B1,r2) (14)
D2=B1-B2 (15)
In the formula, B1 is the first base feature map, r2 is the side length of the filter window (correspondingly, the size of the filter window is r2×r2), MSEPD(B1, r2) denotes filtering the first base feature map B1 with the MSEPD algorithm using a filter window of size r2×r2, and B2 and D2 are respectively the second base feature map and the second detail feature map.
The second detail feature map D2 obtained according to the above embodiment contains the edge contour information of objects and needs to be subjected to edge-preserving compression to obtain the edge feature map FD2.
According to some embodiments, a Sigmoid function may be employed to perform edge-preserving compression on the second detail feature map D2 to obtain the edge feature map FD2. That is, edge-preserving compression is performed on the second detail feature map D2 using the following formula (16):
FD2=Sigmoid(D2,δ2)=1/(1+exp(-δ2×D2)) (16)
In the formula, D2 and FD2 are respectively the second detail feature map and the edge feature map, exp denotes an exponential function with the natural constant e as the base, and δ2 is a preset constant. Understandably, the value of δ2 may be the same as or different from that of δ1 in formula (13).
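Continuing the previous sketch, the second decomposition level of formulas (14) to (16) can be written as follows; msepd() refers to the helper defined above and all names remain illustrative.

```python
import numpy as np

def edge_feature_map(b1: np.ndarray, r2: int, delta2: float):
    """Second decomposition level, formulas (14)-(16)."""
    b2 = msepd(b1, r2)                          # second base feature map, formula (14)
    d2 = b1 - b2                                # second detail feature map, formula (15)
    fd2 = 1.0 / (1.0 + np.exp(-delta2 * d2))    # Sigmoid edge-preserving compression, formula (16)
    return b2, fd2
```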
According to some embodiments, the illumination feature map of the target image may be obtained by: filtering the second base feature map (B2) to separate a third base feature map (B3) from the second base feature map (B2); and enhancing the third base feature map (B3) to obtain the illumination feature map (GB).
In the above embodiment, the MSEPD algorithm may be adopted to filter the second basic feature map, and the filtering process may be expressed as the following formula (17):
B3=MSEPD(B2,r3) (17)
In the formula, B2 is the second base feature map, r3 is the side length of the filter window (correspondingly, the size of the filter window is r3×r3), MSEPD(B2, r3) denotes filtering the second base feature map B2 with the MSEPD algorithm using a filter window of size r3×r3, and B3 is the third base feature map.
The third base feature map B3 obtained according to the above embodiment contains illumination information, which can be enhanced to obtain the illumination feature map GB, thereby improving the brightness and contrast of the target image.
According to some embodiments, Gamma transformation may be applied to the third base feature map B3 to obtain the illumination feature map GB. That is, the third base feature map B3 is enhanced using the following formula (18):
GB=Gamma(B3)=(B3)^γ (18)
In the formula, B3 and GB are respectively the third base feature map and the illumination feature map, and γ is a preset exponent in the Gamma transformation.
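The third level and the Gamma enhancement of formulas (17) and (18) follow the same pattern; this sketch again reuses the msepd() helper from the earlier sketch, and γ is an illustrative parameter.

```python
import numpy as np

def illumination_feature_map(b2: np.ndarray, r3: int, gamma: float):
    """Third decomposition level plus Gamma enhancement, formulas (17)-(18)."""
    b3 = msepd(b2, r3)         # third base feature map, formula (17)
    gb = np.power(b3, gamma)   # illumination feature map via Gamma transformation, formula (18)
    return gb
```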
After the texture feature map, the edge feature map, and the illumination feature map are obtained by filtering, they may be subjected to weighted summation to obtain the target image. That is, the target image can be obtained according to the following formula (19):
IObj=w1×FD1+w2×FD2+w3×GB (19)
in the formula, w1、w2、w3Respectively, texture feature map FD1Edge feature map FD2And the weight of the illumination characteristic diagram GB, and the values of the three can be set by the technical personnel in the field according to the actual situation.
According to the embodiment of the disclosure, the texture feature map, the edge feature map and the illumination feature map are obtained by filtering the fusion image, and the target image is obtained based on the texture feature map, the edge feature map and the illumination feature map, so that the brightness and the contrast of the target image can be improved, the quality of the target image is improved, and the target image has a good visual effect.
In addition, the embodiment of the present disclosure adopts the MSEPD algorithm to realize filtering, no down-sampling or up-sampling operations are performed in the filtering process, and the size of the filter window (i.e., the values of r1, r2, and r3) can be flexibly set according to the available computation budget, so the calculation efficiency is high and efficient, real-time image fusion and display can be realized.
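A minimal sketch of the weighted combination of formula (19); the default weights shown here are illustrative placeholders, not values taken from the disclosure.

```python
def combine_feature_maps(fd1, fd2, gb, w1=0.5, w2=0.3, w3=0.2):
    """Weighted sum of formula (19): IObj = w1*FD1 + w2*FD2 + w3*GB."""
    return w1 * fd1 + w2 * fd2 + w3 * gb
```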
Fig. 3 shows a schematic diagram of an image fusion process 300 according to an embodiment of the present disclosure.
As shown in fig. 3, in step S350, the long-exposure image 310 and the short-exposure image 312 are fused to obtain a fused image 314. The fused image 314 can be obtained by, for example, the aforementioned steps S110, S120, S132, and S134.
In step S352, the fused image 314 is filtered by using the MSEPD algorithm to decompose the fused image 314 into the first base feature map 316 and the first detail feature map 318.
In step S354, the first detail feature map 318 is subjected to edge preserving compression by using a Sigmoid function, so as to obtain the texture feature map 320.
In step S356, the first base feature map 316 is filtered using the MSEPD algorithm to decompose the first base feature map 316 into the second base feature map 322 and the second detail feature map 324.
In step S358, the second detail feature map 324 is subjected to edge preserving compression by using Sigmoid function, so as to obtain the edge feature map 326. It is to be understood that the Sigmoid function employed in step S358 may be different from the Sigmoid function employed in step S354.
In step S360, the second base feature map 322 is filtered using the MSEPD algorithm to separate the third base feature map 328 from the second base feature map 322.
In step S362, Gamma transformation is performed on the third basic feature map 328 to obtain the illumination feature map 330.
In step S364, the texture feature map 320, the edge feature map 326, and the illumination feature map 330 are weighted and summed to obtain the target image 332. The target image 332 is the result image of the image fusion.
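For completeness, a hypothetical end-to-end driver that mirrors the process 300 of FIG. 3 by chaining the sketches above; every function name, window size, δ value, and γ value shown here is an illustrative assumption rather than part of the disclosure.

```python
def fuse_and_enhance(i_le, i_se, alpha, ratio, r=(5, 9, 17), deltas=(1.0, 1.0), gamma=0.8):
    """End-to-end sketch of the image fusion process, reusing the helpers defined earlier."""
    w1, w2 = long_exposure_weights(i_le, alpha)              # step S120
    i_fuse = fuse_pixels(i_le, i_se, w1, w2, ratio)          # steps S130 / S350
    i_fuse = i_fuse / i_fuse.max()                           # normalize fused pixel values to [0, 1]
    b1, fd1 = texture_feature_map(i_fuse, r[0], deltas[0])   # steps S352-S354
    b2, fd2 = edge_feature_map(b1, r[1], deltas[1])          # steps S356-S358
    gb = illumination_feature_map(b2, r[2], gamma)           # steps S360-S362
    return combine_feature_maps(fd1, fd2, gb)                # step S364
```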
According to the embodiment of the disclosure, an image fusion device is also provided. The image fusion device may include an image acquisition unit, a weight determination unit, and a fusion unit, wherein the image acquisition unit may be configured to acquire a first image and a second image captured for the same scene, an exposure time of the first image being greater than an exposure time of the second image, the first image and the second image including a plurality of first pixel values and a plurality of second pixel values corresponding to a plurality of pixel positions, respectively. The weight determination unit may be configured to determine a first weight of the first pixel value and a second weight of the second pixel value for each pixel position based on the first image. The fusion unit may be configured to fuse the first pixel value and the second pixel value of each pixel position, respectively, based on the respective first weight and second weight, to obtain the target image.
Here, the operations of the above units of the image fusion apparatus are similar to the operations of steps S110 to S130 described above, respectively, and are not repeated.
According to another aspect of the present disclosure, there is also provided an electronic device including: a processor; and a memory storing a program comprising instructions which, when executed by the processor, cause the processor to perform the image fusion method described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing a program, the program comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the image fusion method described above.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the image fusion method described above.
Referring to fig. 4, an electronic device 400, which is an example of a hardware device (electronic device) that can be applied to aspects of the present disclosure, will now be described. The electronic device 400 may be any machine configured to perform processing and/or computing, and may be, but is not limited to, a workstation, a server, a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a robot, a smart phone, an on-board computer, or any combination thereof. The image fusion method 100 described above may be implemented in whole or at least in part by an electronic device 400 or similar device or system.
Electronic device 400 may include components connected to bus 402 (possibly via one or more interfaces) or in communication with bus 402. For example, electronic device 400 may include a bus 402, one or more processors 404, one or more input devices 406, and one or more output devices 408. The one or more processors 404 may be any type of processor and may include, but are not limited to, one or more general purpose processors and/or one or more special purpose processors (e.g., special processing chips). Input device 406 may be any type of device capable of inputting information to electronic device 400 and may include, but is not limited to, a mouse, a keyboard, a touch screen, a microphone, and/or a remote control. Output device 408 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The electronic device 400 may also include a non-transitory storage device 410, which may be any storage device that is non-transitory and capable of storing data, including but not limited to a magnetic disk drive, an optical storage device, solid state memory, a floppy disk, a flexible disk, a hard disk, a magnetic tape, or any other magnetic medium, an optical disk or any other optical medium, a ROM (read only memory), a RAM (random access memory), a cache memory, and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions, and/or code. The non-transitory storage device 410 may be removable from the interface. The non-transitory storage device 410 may have data/programs (including instructions)/code for implementing the above-described methods and steps. The electronic device 400 may also include a communication device 412. The communication device 412 may be any type of device or system that enables communication with external devices and/or with a network, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication device, and/or a chipset, such as Bluetooth™ devices, 802.11 devices, Wi-Fi devices, Wi-Max devices, cellular communication devices, and/or the like.
Electronic device 400 may also include a working memory 414, which may be any type of working memory that can store programs (including instructions) and/or data useful for the operation of processor 404, and which may include, but is not limited to, random access memory and/or read only memory devices.
Software elements (programs) may be located in the working memory 414 including, but not limited to, an operating system 416, one or more application programs 418, drivers, and/or other data and code. Instructions for performing the above-described methods and steps may be included in one or more applications 418, and the above-described image fusion method 100 may be implemented by instructions of one or more applications 418 being read and executed by the processor 404. More specifically, in the image fusion method 100 described above, the steps S110-S130 may be implemented, for example, by the processor 404 executing the application 418 with the instructions of the steps S110-S130. Further, other steps in the image fusion method 100 described above may be implemented, for example, by the processor 404 executing an application 418 having instructions to perform the respective steps. Executable code or source code of instructions of the software elements (programs) may be stored in a non-transitory computer-readable storage medium, such as storage device 410 described above, and may be stored in working memory 414 (possibly compiled and/or installed) upon execution. Executable code or source code for the instructions of the software elements (programs) may also be downloaded from a remote location.
It will also be appreciated that various modifications may be made in accordance with specific requirements. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. For example, some or all of the disclosed methods and apparatus may be implemented by programming hardware (e.g., programmable logic circuitry including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) in an assembly language or hardware programming language such as VERILOG, VHDL, or C++, using logic and algorithms according to the present disclosure.
It should also be understood that the foregoing method may be implemented in a server-client mode. For example, a client may receive data input by a user and send the data to a server. The client may also receive data input by the user, perform part of the processing in the foregoing method, and transmit the data obtained by the processing to the server. The server may receive data from the client and perform the aforementioned method or another part of the aforementioned method and return the results of the execution to the client. The client may receive the results of the execution of the method from the server and may present them to the user, for example, through an output device.
It should also be understood that the components of electronic device 400 may be distributed across a network. For example, some processes may be performed using one processor while other processes may be performed by another processor that is remote from the one processor. Other components of the electronic device 400 may also be similarly distributed. As such, electronic device 400 may be interpreted as a distributed computing system that performs processing at multiple locations.
Some exemplary aspects of the disclosure are described below.
Aspect 1. an image fusion method, comprising:
acquiring a first image and a second image captured for the same scene, wherein the exposure time of the first image is longer than that of the second image, and the first image and the second image respectively comprise a plurality of first pixel values and a plurality of second pixel values corresponding to a plurality of pixel positions;
determining, based on the first image, a first weight of a first pixel value and a second weight of a second pixel value for each pixel location; and
and respectively fusing the first pixel value and the second pixel value of each pixel position based on the corresponding first weight and the second weight to obtain the target image.
Aspect 2 the method of aspect 1, wherein for any first pixel value, the larger the first pixel value, the smaller the first weight of the first pixel value.
Aspect 3 the method of aspect 1, wherein the sum of the first weight of the first pixel value and the second weight of the second pixel value corresponding to the same pixel location is 1.
Aspect 4. the method of any of aspects 1-3, wherein for any pixel location, the first weight for that pixel location is determined based on the first pixel value for that pixel location and the smallest pixel value in the first image.
Aspect 5 the method of aspect 4, wherein the first weight for the pixel location is determined based on a relative pixel value range between the first pixel value for the pixel location and the minimum pixel value and a preset pixel value range determined based on the minimum pixel value.
Aspect 6 the method of any of aspects 1-5, wherein the first and second weights are determined according to the following equations:
w1(x,y)=1-((ILE(x,y)-min(ILE))/(α-min(ILE)))^β
w2(x,y)=1-w1(x,y)
wherein w1(x, y) is a first weight of a pixel position having coordinates (x, y), w2(x, y) is a second weight of the pixel position having coordinates (x, y), ILE is the first image, ILE(x, y) is a first pixel value of the pixel position having coordinates (x, y), min(ILE) is the minimum pixel value in the first image, α and β are preset constants greater than zero, min(ILE)<α≤max(ILE), and max(ILE) is the maximum pixel value in the first image.
Aspect 7. The method of any of aspects 1-3, wherein, for any pixel location, the first weight for that pixel location is determined based on the first pixel value for that pixel location and the largest pixel value in the first image.
Aspect 8. The method of aspect 7, wherein the first weight for the pixel location is determined based on the relative pixel value range between the maximum pixel value and the first pixel value for the pixel location and a preset pixel value range determined based on the maximum pixel value.
Aspect 9. The method of any of aspects 1-3 and 7-8, wherein the first and second weights are determined according to the following formulas:
[Formulas for w1(x, y) and w2(x, y), provided as images in the original publication.]
wherein w1(x, y) is the first weight of the pixel location with coordinates (x, y), w2(x, y) is the second weight of the pixel location with coordinates (x, y), I_LE is the first image, I_LE(x, y) is the first pixel value of the pixel location with coordinates (x, y), max(I_LE) is the maximum pixel value in the first image, α and β are preset constants greater than zero, min(I_LE) ≤ α < max(I_LE), and min(I_LE) is the minimum pixel value in the first image.
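Likewise, the formulas of Aspect 9 are published as images. The sketch below mirrors Aspects 7-8 under the same caveat: the first weight grows with the pixel's distance below max(I_LE), measured against the preset range max(I_LE) − α. The names and the clipping are assumptions of this sketch.

```python
import numpy as np

def compute_weights_max(long_exp: np.ndarray, alpha: float, beta: float):
    """Illustrative max-based weighting; not the literal formula of Aspect 9.

    Requires min(long_exp) <= alpha < max(long_exp) and beta > 0.
    """
    i_max = float(long_exp.max())
    # Distance of each first pixel value below the maximum, relative to the
    # preset pixel value range (max - alpha).
    ratio = (i_max - long_exp) / (i_max - alpha)
    w1 = np.clip(beta * ratio, 0.0, 1.0)  # larger pixel value -> smaller weight
    w2 = 1.0 - w1
    return w1, w2
```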
Aspect 10. The method of any of aspects 1-9, wherein fusing the first pixel value and the second pixel value for each pixel location, respectively, based on the respective first weight and second weight to obtain the target image comprises:
performing brightness compensation on the second pixel value based on exposure parameters of the first image and the second image;
fusing the first pixel value and the compensated second pixel value based on the first weight and the second weight to obtain a fused image of the first image and the second image; and
determining the target image based on the fused image.
Aspect 11. The method of aspect 10, wherein performing brightness compensation on the second pixel value based on the exposure parameters of the first image and the second image comprises:
determining a brightness compensation coefficient based on the exposure parameters of the first image and the second image; and
performing brightness compensation on the second pixel value according to the brightness compensation coefficient.
Aspect 12. The method of aspect 11, wherein the exposure parameters include exposure time and exposure gain, and
wherein the brightness compensation coefficient is the quotient obtained by dividing the product of the exposure time and exposure gain of the first image by the product of the exposure time and exposure gain of the second image.
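A minimal sketch of the brightness compensation of Aspects 11-12: the coefficient is the quotient of the two exposure-time-times-gain products, applied multiplicatively to the second image. The function and parameter names are illustrative, not from the disclosure.

```python
import numpy as np

def compensate_brightness(short_exp: np.ndarray,
                          t_long: float, gain_long: float,
                          t_short: float, gain_short: float) -> np.ndarray:
    """Scale the short-exposure pixel values to the long-exposure brightness level."""
    # Aspect 12: coefficient = (exposure time x gain of first image) /
    #                          (exposure time x gain of second image).
    coeff = (t_long * gain_long) / (t_short * gain_short)
    return short_exp * coeff
```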
Aspect 13. The method of any of aspects 10-12, wherein fusing the first pixel value and the compensated second pixel value based on the first weight and the second weight comprises:
performing logarithmic transformation on the first pixel value and the compensated second pixel value, respectively, to obtain a first logarithmic pixel value and a second logarithmic pixel value; and
performing a weighted summation of the first and second logarithmic pixel values based on the first and second weights.
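A minimal sketch of the logarithmic-domain fusion of Aspect 13. The small epsilon added before taking the logarithm is an assumption of this sketch to avoid log(0); the disclosure does not specify how zero pixel values are handled.

```python
import numpy as np

def fuse_log_domain(long_exp: np.ndarray, short_compensated: np.ndarray,
                    w1: np.ndarray, w2: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Weighted sum of the two exposures in the logarithmic domain."""
    log_first = np.log(long_exp + eps)            # first logarithmic pixel values
    log_second = np.log(short_compensated + eps)  # second logarithmic pixel values
    return w1 * log_first + w2 * log_second
```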
Aspect 14. The method of any of aspects 10-13, wherein the fused image includes a plurality of fused pixel values corresponding to the plurality of pixel locations, and
the method further comprises: normalizing the plurality of fused pixel values to within the [0, 1] interval.
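The normalization of Aspect 14 can be written as a one-liner; the guard against a constant image is an assumption of this sketch.

```python
import numpy as np

def normalize01(fused: np.ndarray) -> np.ndarray:
    """Map the fused pixel values into the [0, 1] interval (Aspect 14)."""
    lo, hi = float(fused.min()), float(fused.max())
    return (fused - lo) / max(hi - lo, 1e-6)
```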
Aspect 15. The method of any of aspects 10-14, wherein determining the target image based on the fused image comprises:
filtering the fused image to obtain a texture feature map, an edge feature map and an illumination feature map of the target image; and
obtaining the target image based on the texture feature map, the edge feature map and the illumination feature map.
Aspect 16. The method of aspect 15, wherein filtering the fused image to obtain the texture feature map of the target image comprises:
filtering the fused image to decompose the fused image into a first basic feature map and a first detail feature map; and
performing edge-preserving compression on the first detail feature map to obtain the texture feature map.
Aspect 17. The method of aspect 16, wherein filtering the fused image to obtain the edge feature map of the target image comprises:
filtering the first basic feature map to decompose the first basic feature map into a second basic feature map and a second detail feature map; and
performing edge-preserving compression on the second detail feature map to obtain the edge feature map.
Aspect 18. The method of aspect 17, wherein filtering the fused image to obtain the illumination feature map of the target image comprises:
filtering the second basic feature map to decompose a third basic feature map from the second basic feature map; and
enhancing the third basic feature map to obtain the illumination feature map.
Aspect 19. The method of any of aspects 15-18, wherein obtaining the target image based on the texture feature map, the edge feature map and the illumination feature map comprises:
performing weighted summation of the texture feature map, the edge feature map and the illumination feature map to obtain the target image.
Aspect 20. The method of any of aspects 15-19, wherein the filtering is implemented using an edge-preserving filter operator.
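For illustration, the sketch below follows the structure of Aspects 15-20 with a bilateral filter standing in for the edge-preserving filter operator: successive filtering passes split the fused image into texture, edge and illumination layers, the two detail layers are compressed, the base layer is enhanced, and the three feature maps are recombined by weighted summation. The filter choice, the scalar compression and enhancement, the fact that the residual of the third pass is unused, and every numeric parameter are assumptions of this sketch, not values from this disclosure.

```python
import cv2
import numpy as np

def decompose_and_recombine(fused: np.ndarray,
                            w_texture: float = 1.0,
                            w_edge: float = 1.0,
                            w_illum: float = 1.0,
                            compress: float = 0.6,
                            enhance: float = 1.2) -> np.ndarray:
    """Multi-level edge-preserving decomposition and weighted recombination."""
    img = fused.astype(np.float32)  # assumed already normalized to [0, 1]

    # First filtering pass: first basic feature map + first detail (texture) map.
    base1 = cv2.bilateralFilter(img, 9, 0.1, 5.0)    # d, sigmaColor, sigmaSpace
    detail1 = img - base1
    texture_map = compress * detail1                 # compress the texture layer

    # Second filtering pass: second basic feature map + second detail (edge) map.
    base2 = cv2.bilateralFilter(base1, 9, 0.2, 15.0)
    detail2 = base1 - base2
    edge_map = compress * detail2                    # compress the edge layer

    # Third filtering pass: the third basic feature map is taken as illumination.
    base3 = cv2.bilateralFilter(base2, 9, 0.3, 30.0)
    illumination_map = enhance * base3               # enhance the base layer

    # Weighted summation of the three feature maps (Aspect 19).
    return w_texture * texture_map + w_edge * edge_map + w_illum * illumination_map
```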
Aspect 21. An electronic device, comprising:
a processor; and
a memory storing a program comprising instructions that, when executed by the processor, cause the processor to perform the method of any of aspects 1-20.
Aspect 22. A non-transitory computer readable storage medium storing a program, the program comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the method of any of aspects 1-20.
Aspect 23. A computer program product comprising a computer program, wherein the computer program implements the method according to any of aspects 1-20 when executed by a processor.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or may be replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (10)

1. An image fusion method, comprising:
acquiring a first image and a second image captured of the same scene, wherein the exposure time of the first image is longer than that of the second image, and the first image and the second image respectively comprise a plurality of first pixel values and a plurality of second pixel values corresponding to a plurality of pixel positions;
determining, based on the first image, a first weight of a first pixel value and a second weight of a second pixel value for each pixel location; and
fusing, for each pixel position, the first pixel value and the second pixel value based on the corresponding first weight and second weight, respectively, to obtain a target image.
2. The method of claim 1, wherein for any first pixel value, the larger the first pixel value, the smaller the first weight of the first pixel value.
3. The method of claim 1, wherein a sum of a first weight of a first pixel value and a second weight of a second pixel value corresponding to a same pixel location is 1.
4. The method of any of claims 1-3, wherein, for any pixel location, the first weight for that pixel location is determined based on the first pixel value for that pixel location and the smallest pixel value in the first image.
5. The method of claim 4, wherein the first weight for the pixel location is determined based on a relative pixel value range between the first pixel value for the pixel location and the minimum pixel value and a preset pixel value range determined based on the minimum pixel value.
6. The method according to any of claims 1-5, wherein the first and second weights are determined according to the following formulas:
[Formulas for w1(x, y) and w2(x, y), provided as images in the original publication.]
wherein w1(x, y) is the first weight of the pixel location with coordinates (x, y), w2(x, y) is the second weight of the pixel location with coordinates (x, y), I_LE is the first image, I_LE(x, y) is the first pixel value of the pixel location with coordinates (x, y), min(I_LE) is the minimum pixel value in the first image, α and β are preset constants greater than zero, min(I_LE) < α ≤ max(I_LE), and max(I_LE) is the maximum pixel value in the first image.
7. The method of any of claims 1-3, wherein, for any pixel location, the first weight for that pixel location is determined based on the first pixel value for that pixel location and the largest pixel value in the first image.
8. An electronic device, comprising:
a processor; and
a memory storing a program comprising instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1-7.
9. A non-transitory computer readable storage medium storing a program, the program comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the method of any of claims 1-7.
10. A computer program product comprising a computer program, wherein the computer program implements the method according to any of claims 1-7 when executed by a processor.
CN202111031156.2A 2021-09-03 2021-09-03 Image fusion method, electronic device and storage medium Pending CN113674193A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111031156.2A CN113674193A (en) 2021-09-03 2021-09-03 Image fusion method, electronic device and storage medium
PCT/CN2022/114592 WO2023030139A1 (en) 2021-09-03 2022-08-24 Image fusion method, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111031156.2A CN113674193A (en) 2021-09-03 2021-09-03 Image fusion method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN113674193A (en) 2021-11-19

Family

ID=78548197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111031156.2A Pending CN113674193A (en) 2021-09-03 2021-09-03 Image fusion method, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN113674193A (en)
WO (1) WO2023030139A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023030139A1 (en) * 2021-09-03 2023-03-09 上海肇观电子科技有限公司 Image fusion method, electronic device, and storage medium
CN116051449A (en) * 2022-08-11 2023-05-02 荣耀终端有限公司 Image noise estimation method and device
CN116152132A (en) * 2023-04-19 2023-05-23 山东仕达思医疗科技有限公司 Depth of field superposition method, device and equipment for microscope image
CN117257204A (en) * 2023-09-19 2023-12-22 深圳海业医疗科技有限公司 Endoscope control assembly control method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10255888B2 (en) * 2012-12-05 2019-04-09 Texas Instruments Incorporated Merging multiple exposures to generate a high dynamic range image
US10425599B2 (en) * 2017-02-01 2019-09-24 Omnivision Technologies, Inc. Exposure selector for high-dynamic range imaging and associated method
CN108259774B (en) * 2018-01-31 2021-04-16 珠海市杰理科技股份有限公司 Image synthesis method, system and equipment
JP7130777B2 (en) * 2018-06-07 2022-09-05 ドルビー ラボラトリーズ ライセンシング コーポレイション HDR image generation from a single-shot HDR color image sensor
CN110717878B (en) * 2019-10-12 2022-04-15 北京迈格威科技有限公司 Image fusion method and device, computer equipment and storage medium
CN113674193A (en) * 2021-09-03 2021-11-19 上海肇观电子科技有限公司 Image fusion method, electronic device and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023030139A1 (en) * 2021-09-03 2023-03-09 上海肇观电子科技有限公司 Image fusion method, electronic device, and storage medium
CN116051449A (en) * 2022-08-11 2023-05-02 荣耀终端有限公司 Image noise estimation method and device
CN116051449B (en) * 2022-08-11 2023-10-24 荣耀终端有限公司 Image noise estimation method and device
CN116152132A (en) * 2023-04-19 2023-05-23 山东仕达思医疗科技有限公司 Depth of field superposition method, device and equipment for microscope image
CN116152132B (en) * 2023-04-19 2023-08-04 山东仕达思医疗科技有限公司 Depth of field superposition method, device and equipment for microscope image
CN117257204A (en) * 2023-09-19 2023-12-22 深圳海业医疗科技有限公司 Endoscope control assembly control method and system

Also Published As

Publication number Publication date
WO2023030139A1 (en) 2023-03-09

Similar Documents

Publication Publication Date Title
JP7152540B2 (en) Filtering method and system
CN108335279B (en) Image fusion and HDR imaging
CN113674193A (en) Image fusion method, electronic device and storage medium
CN108694705B (en) Multi-frame image registration and fusion denoising method
US10410327B2 (en) Shallow depth of field rendering
CN110766639B (en) Image enhancement method and device, mobile equipment and computer readable storage medium
WO2016139260A9 (en) Method and system for real-time noise removal and image enhancement of high-dynamic range images
KR102045538B1 (en) Method for multi exposure image fusion based on patch and apparatus for the same
CN112602088B (en) Method, system and computer readable medium for improving quality of low light images
CN111028170B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
KR102561714B1 (en) Image data processing from composite images
KR20150035315A (en) Method for generating a High Dynamic Range image, device thereof, and system thereof
CN110958363B (en) Image processing method and device, computer readable medium and electronic device
CN107147851B (en) Photo processing method and device, computer readable storage medium and electronic equipment
US20210327026A1 (en) Methods and apparatus for blending unknown pixels in overlapping images
CN112822413B (en) Shooting preview method, shooting preview device, terminal and computer readable storage medium
EP4218228A1 (en) Saliency based capture or image processing
Zheng et al. Windowing decomposition convolutional neural network for image enhancement
CN113793257A (en) Image processing method and device, electronic equipment and computer readable storage medium
JP2024037722A (en) Content-based image processing
JP6514504B2 (en) IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM
CN115578273A (en) Image multi-frame fusion method and device, electronic equipment and storage medium
JP5863236B2 (en) Image processing apparatus and image processing method
CN113469908B (en) Image noise reduction method, device, terminal and storage medium
CN116563190B (en) Image processing method, device, computer equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination