WO2019196539A1 - Image fusion method and apparatus - Google Patents

Image fusion method and apparatus

Info

Publication number
WO2019196539A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
brightness
fusion
detail
pixel
Prior art date
Application number
PCT/CN2019/073090
Other languages
English (en)
French (fr)
Inventor
黄中宁
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2019196539A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image fusion method and an apparatus therefor. The method includes: converting a visible light image collected by an image acquisition device into a red-green-blue (RGB) image; converting a near-infrared image collected by the image acquisition device into a first luminance image; converting the RGB image into a second luminance image; performing fusion weight calculation according to the first luminance image to obtain a fusion weight map of the near-infrared image; performing luminance fusion on the first luminance image and the second luminance image according to the fusion weight map to obtain a luminance-fused image; and performing RGB fusion according to the second luminance image, the luminance-fused image, and the RGB image to obtain a fused RGB image.

Description

Image fusion method and apparatus
CROSS-REFERENCE TO RELATED APPLICATION
This patent application claims priority to Chinese Patent Application No. 201810320154.7, entitled "Image Fusion Method and Apparatus" and filed on April 11, 2018, the entire content of which is incorporated herein by reference.
TECHNICAL FIELD
The present application relates to image processing technologies, and in particular to an image fusion method and an apparatus therefor.
BACKGROUND
In video surveillance, when the ambient brightness is low (for example, at night), a visible fill light of high energy density is often required to obtain good monitoring image quality. Such a fill light is harsh to the human eye and tends to create brief visual blind spots for passing pedestrians and vehicle drivers, causing serious light pollution and even traffic accidents.
The human eye perceives infrared light weakly or not at all, while the imaging system (including the lens and the sensor) of a surveillance device still images near-infrared light well. Adopting infrared fill light for imaging can therefore solve the light-pollution problem. However, infrared images have no color and poor layering.
SUMMARY
In view of this, the present application provides an image fusion method and an apparatus therefor.
Specifically, the present application is implemented through the following technical solutions:
According to a first aspect of the embodiments of the present application, an image fusion method is provided, which is applied to an image acquisition device. The method includes: converting a visible light image collected by the image acquisition device into a red-green-blue (RGB) image; converting a near-infrared image collected by the image acquisition device into a first luminance image; converting the RGB image into a second luminance image; performing fusion weight calculation according to the first luminance image to obtain a fusion weight map of the near-infrared image; performing luminance fusion on the first luminance image and the second luminance image according to the fusion weight map to obtain a luminance-fused image; and performing RGB fusion according to the second luminance image, the luminance-fused image, and the RGB image to obtain a fused RGB image as an output image of the image acquisition device.
Optionally, before the luminance fusion is performed on the first luminance image and the second luminance image according to the fusion weight map, the method further includes: performing detail calculation on the first luminance image to obtain a first detail image; and performing detail calculation on the second luminance image to obtain a second detail image. In this case, performing luminance fusion on the first luminance image and the second luminance image according to the fusion weight map includes: performing luminance fusion on the first luminance image and the second luminance image according to the fusion weight map, the first detail image, and the second detail image.
Optionally, performing detail calculation on the first luminance image to obtain the first detail image includes: performing mean filtering on the first luminance image to obtain a first mean image; computing the difference between the first luminance image and the first mean image to obtain a first difference image; and performing a clipping operation on the first difference image to obtain the first detail image.
Optionally, performing detail calculation on the second luminance image to obtain the second detail image includes: performing mean filtering on the second luminance image to obtain a second mean image; computing the difference between the second luminance image and the second mean image to obtain a second difference image; and performing a clipping operation on the second difference image to obtain the second detail image.
Optionally, the clipping operation on the difference image is implemented by the following formula:
Detail_p = CLIP(Diff_p * str / 128, deNirMin, deNirMax),
where Detail_p is the luminance detail of pixel P in the detail image, Diff_p is the value of pixel P in the difference image, str is a detail intensity control parameter, [deNirMin, deNirMax] is the clipping interval, and CLIP() denotes the clipping calculation.
Optionally, performing luminance fusion on the first luminance image and the second luminance image according to the fusion weight map, the first detail image, and the second detail image includes: for a pixel at any position, determining the fused luminance Y of the pixel by the following formula:
Y = CLIP((Y_nir * wt + Y_LL * (256 - wt)) / 256 + Detail_nir + Detail_LL, 0, 255),
where Y_nir is the luminance value of the pixel at that position in the first luminance image, wt is the fusion weight of the pixel at that position in the fusion weight map, Y_LL is the luminance value of the pixel at that position in the second luminance image, Detail_nir is the luminance detail of the pixel at that position in the first detail image, Detail_LL is the luminance detail of the pixel at that position in the second detail image, and CLIP() denotes the clipping calculation.
Optionally, performing the fusion weight calculation according to the first luminance image includes: for any pixel in the first luminance image, querying a preset brightness mapping model according to the luminance value of the pixel to determine the fusion weight corresponding to that luminance value, where the brightness mapping model records the correspondence between luminance values and fusion weights.
Optionally, performing luminance fusion on the first luminance image and the second luminance image according to the fusion weight map includes: for a pixel at any position, determining the fused luminance Y of the pixel by the following formula:
Y = CLIP((Y_nir * wt + Y_LL * (256 - wt)) / 256, 0, 255),
where Y_nir is the luminance value of the pixel at that position in the first luminance image, wt is the fusion weight of the pixel at that position in the fusion weight map, Y_LL is the luminance value of the pixel at that position in the second luminance image, and CLIP() denotes the clipping calculation.
Optionally, performing RGB fusion according to the second luminance image, the luminance-fused image, and the RGB image to obtain the fused RGB image includes: for a pixel at any position, determining the R, G, and B channel values V_out of the pixel by the following formulas:
V_out = CLIP(V_in * Y / Y_LL, 0, 255) when Y_LL > 0,
V_out = CLIP(Y, 0, 255) when Y_LL = 0,
where V_in is the R, G, or B channel value of the pixel at that position in the RGB image, Y_LL is the luminance value of the pixel at that position in the second luminance image, Y is the luminance value of the pixel at that position in the luminance-fused image, and CLIP() denotes the clipping calculation.
According to a second aspect of the embodiments of the present application, an image fusion apparatus is provided, which is applied to an image acquisition device. The apparatus includes: a first visible light processing unit configured to convert a visible light image collected by the image acquisition device into a red-green-blue (RGB) image; a first infrared processing unit configured to convert a near-infrared image collected by the image acquisition device into a first luminance image; a second visible light processing unit configured to convert the RGB image into a second luminance image; a second infrared processing unit configured to perform fusion weight calculation according to the first luminance image to obtain a fusion weight map of the near-infrared image; a luminance fusion unit configured to perform luminance fusion on the first luminance image and the second luminance image according to the fusion weight map to obtain a luminance-fused image; and an RGB fusion unit configured to perform RGB fusion according to the second luminance image, the luminance-fused image, and the RGB image to obtain a fused RGB image as an output image of the image acquisition device.
Optionally, the second visible light processing unit is further configured to perform detail calculation on the second luminance image to obtain a second detail image; the second infrared processing unit is further configured to perform detail calculation on the first luminance image to obtain a first detail image; and the luminance fusion unit is specifically configured to perform luminance fusion on the first luminance image and the second luminance image according to the fusion weight map, the first detail image, and the second detail image.
Optionally, the second visible light processing unit is specifically configured to: perform mean filtering on the second luminance image to obtain a second mean image; compute the difference between the second luminance image and the second mean image to obtain a second difference image; and perform a clipping operation on the second difference image to obtain the second detail image.
Optionally, the second infrared processing unit is specifically configured to: perform mean filtering on the first luminance image to obtain a first mean image; compute the difference between the first luminance image and the first mean image to obtain a first difference image; and perform a clipping operation on the first difference image to obtain the first detail image.
Optionally, the clipping operation on the difference image is implemented by the following formula:
Detail_p = CLIP(Diff_p * str / 128, deNirMin, deNirMax),
where Detail_p is the luminance detail of pixel P in the detail image, Diff_p is the value of pixel P in the difference image, str is a detail intensity control parameter, [deNirMin, deNirMax] is the clipping interval, and CLIP() denotes the clipping calculation.
Optionally, the luminance fusion unit is specifically configured to determine, for a pixel at any position, the fused luminance Y of the pixel by the following formula:
Y = CLIP((Y_nir * wt + Y_LL * (256 - wt)) / 256 + Detail_nir + Detail_LL, 0, 255),
where Y_nir is the luminance value of the pixel at that position in the first luminance image, wt is the fusion weight of the pixel at that position in the fusion weight map, Y_LL is the luminance value of the pixel at that position in the second luminance image, Detail_nir is the luminance detail of the pixel at that position in the first detail image, Detail_LL is the luminance detail of the pixel at that position in the second detail image, and CLIP() denotes the clipping calculation.
Optionally, the second infrared processing unit is specifically configured to: for any pixel in the first luminance image, query a preset brightness mapping model according to the luminance value of the pixel to determine the fusion weight corresponding to that luminance value, where the brightness mapping model records the correspondence between luminance values and fusion weights.
Optionally, the luminance fusion unit is specifically configured to determine, for a pixel at any position, the fused luminance Y of the pixel by the following formula:
Y = CLIP((Y_nir * wt + Y_LL * (256 - wt)) / 256, 0, 255),
where Y_nir is the luminance value of the pixel at that position in the first luminance image, wt is the fusion weight of the pixel at that position in the fusion weight map, Y_LL is the luminance value of the pixel at that position in the second luminance image, and CLIP() denotes the clipping calculation.
Optionally, the RGB fusion unit is specifically configured to determine, for a pixel at any position, the R, G, and B channel values V_out of the pixel respectively by the following formulas:
V_out = CLIP(V_in * Y / Y_LL, 0, 255) when Y_LL > 0,
V_out = CLIP(Y, 0, 255) when Y_LL = 0,
where V_in is the R, G, or B channel value of the pixel at that position in the RGB image, Y_LL is the luminance value of the pixel at that position in the second luminance image, Y is the luminance value of the pixel at that position in the luminance-fused image, and CLIP() denotes the clipping calculation.
According to a third aspect of the embodiments of the present application, an image fusion apparatus is provided, including a processor and a machine-readable storage medium, the machine-readable storage medium storing machine-executable instructions executable by the processor, where the machine-executable instructions cause the processor to implement the image fusion method described in the first aspect of the embodiments of the present application.
According to a fourth aspect of the embodiments of the present application, a machine-readable storage medium is provided, storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to: convert a visible light image collected by an image acquisition device into a red-green-blue (RGB) image; convert a near-infrared image collected by the image acquisition device into a first luminance image; convert the RGB image into a second luminance image; perform fusion weight calculation according to the first luminance image to obtain a fusion weight map of the near-infrared image; perform luminance fusion on the first luminance image and the second luminance image according to the fusion weight map to obtain a luminance-fused image; and perform RGB fusion according to the second luminance image, the luminance-fused image, and the RGB image to obtain a fused RGB image as an output image of the image acquisition device.
In the image fusion method of the embodiments of the present application, the visible light image collected by the image acquisition device is converted into an RGB image, the near-infrared image collected by the image acquisition device is converted into a first luminance image, and the RGB image is converted into a second luminance image. Fusion weight calculation is performed according to the first luminance image to obtain a fusion weight map of the near-infrared image; luminance fusion is then performed on the first luminance image and the second luminance image according to the fusion weight map to obtain a luminance-fused image; and RGB fusion is performed according to the second luminance image, the luminance-fused image, and the RGB image to obtain a fused RGB image as the output image of the image acquisition device. Fusion of the visible light image and the near-infrared image is thus realized: the brightness of the image in a low-illumination environment is improved while the color information of the image is preserved, improving the quality of the fused image.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of an image fusion method according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a brightness mapping model according to an exemplary embodiment of the present application;
FIG. 3 is a flowchart of an image fusion method according to another exemplary embodiment of the present application;
FIG. 4 is a schematic flowchart of detail calculation according to an exemplary embodiment of the present application;
FIG. 5 is a schematic flowchart of fusion weight map calculation according to an exemplary embodiment of the present application;
FIG. 6 is a flowchart of an image fusion method according to yet another exemplary embodiment of the present application;
FIG. 7 is a schematic structural diagram of an image fusion apparatus according to an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of the hardware structure of an image fusion apparatus according to an exemplary embodiment of the present application.
DETAILED DESCRIPTION
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
The terms used in the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "said", and "the" used in the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
To enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, and to make the above objectives, features, and advantages of the embodiments of the present application clearer and easier to understand, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Referring to FIG. 1, a schematic flowchart of an image fusion method provided by an embodiment of the present application, the image fusion method may be applied to an image acquisition device, such as a surveillance camera in a video surveillance scenario. As shown in FIG. 1, the image fusion method may include the following steps.
Step S100: convert the visible light image collected by the image acquisition device into an RGB image.
Step S110: convert the near-infrared image collected by the image acquisition device into a first luminance image.
In the embodiments of the present application, when the ambient brightness of the area where the image acquisition device is located is low, the image acquisition device can collect the visible light image and the near-infrared image simultaneously under infrared fill light, and improve image quality by fusing the visible light image and the near-infrared image. Although the image acquisition device could also collect infrared images for subsequent fusion processing, near-infrared images carry more detail than infrared images, so the fused image can retain more detail.
In the embodiments of the present application, after the image acquisition device collects the visible light image and the near-infrared image simultaneously, on the one hand the visible light image can be converted into an RGB (red, green, blue) image, and on the other hand the near-infrared image can be converted into an infrared luminance image (also called an infrared Y-channel image, referred to herein as the first luminance image). Note that the near-infrared image in the embodiments of the present application could be replaced with an infrared image; the near-infrared image is preferred because it presents more image detail than an infrared image.
For example, for the visible light image, the image acquisition device can recover the colors of the visible light image through AWB (automatic white balance) correction, perform DENOISE (noise reduction) processing, then interpolate the visible light image into an initial RGB image through DEMOSAIC (demosaicing) processing, and perform GAMMA correction on the initial RGB image to enhance image brightness, obtaining the RGB image described in step S100 for use in subsequent steps. The present disclosure does not limit the order of the above AWB correction, DENOISE processing, DEMOSAIC processing, and GAMMA correction. For the near-infrared image, the image acquisition device can interpolate the near-infrared image into its corresponding RGB image through DEMOSAIC processing, perform GAMMA correction on that RGB image to enhance image brightness, then convert that RGB image into a luminance image, and align the near-infrared image with the corresponding visible light image pixel by pixel through Y-channel registration. The specific implementation can be found in the related descriptions of techniques well known in the art and is not detailed here.
Note that there is no necessary temporal relationship between step S100 and step S110: the operation in step S100 may be performed first and the operation in step S110 afterwards; the operation in step S110 may be performed first and the operation in step S100 afterwards; or the operations in steps S100 and S110 may be performed concurrently.
Step S120: convert the RGB image into a second luminance image.
In the embodiments of the present application, after the image acquisition device converts the visible light image into the RGB image in step S100, the RGB image can be converted into a visible light luminance image (also called a visible light Y-channel image, referred to herein as the second luminance image).
For example, the image acquisition device can convert the RGB image into the second luminance image by the following formula (1):
y_p = (R_p * 77 + G_p * 150 + B_p * 29) / 256    (1),
where R_p, G_p, and B_p are the R, G, and B channel values at pixel P in the RGB image, y_p is the luminance channel value at pixel P in the second luminance image, pixel P is any pixel in the RGB image, and pixel P in the RGB image has the same position as pixel P in the second luminance image.
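For illustration, formula (1) can be sketched in NumPy as follows; the channel order of the array and the use of integer floor division for the "/256" step are assumptions of this sketch rather than requirements of the present application.

```python
import numpy as np

def rgb_to_luma(rgb):
    """Formula (1): y_p = (R_p*77 + G_p*150 + B_p*29) / 256 for an HxWx3 uint8 image."""
    r = rgb[..., 0].astype(np.uint32)
    g = rgb[..., 1].astype(np.uint32)
    b = rgb[..., 2].astype(np.uint32)
    # 77 + 150 + 29 = 256, so the result always fits back into 8 bits
    return ((r * 77 + g * 150 + b * 29) // 256).astype(np.uint8)
```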
Step S130: perform fusion weight calculation according to the first luminance image to obtain a fusion weight map of the near-infrared image.
In the embodiments of the present application, after the near-infrared image is converted into the first luminance image, fusion weight calculation can further be performed according to the first luminance image to determine the weight of the luminance value of each pixel of the near-infrared image when the visible light image and the near-infrared image are luminance-fused, thereby obtaining the fusion weight map of the near-infrared image. The fusion weight map of the near-infrared image records the weight of the luminance value of each pixel in the near-infrared image when the visible light image and the near-infrared image are luminance-fused.
For pixels in the first luminance image whose luminance values are low (for example, below a preset threshold, which may be set according to the actual scene), the weight of the pixel luminance values of the near-infrared image can be increased to raise the brightness of the fused image. For pixels in the first luminance image whose luminance values are high (for example, above a preset threshold, which may be set according to the actual scene), the weight of the pixel luminance values of the near-infrared image can be decreased, so that the fused image retains more of the detail in the visible light image.
In an embodiment of the present application, performing the fusion weight calculation according to the first luminance image may include: for any pixel in the first luminance image, querying a preset brightness mapping model according to the luminance value of the pixel to determine the fusion weight corresponding to that luminance value.
In this embodiment, to determine the fusion weights of the near-infrared image, a brightness mapping model can be preset; the brightness mapping model can record the correspondence between luminance values and fusion weights.
Accordingly, after converting the near-infrared image into the first luminance image, the image acquisition device can query the brightness mapping model according to the luminance value of each pixel to obtain the fusion weight of each pixel.
For example, referring to FIG. 2, a schematic diagram of a brightness mapping model provided by an embodiment of the present application, the brightness mapping model is controlled by three parameters: min_wt, min_limit, and max_limit. The abscissa of the model is the luminance value, and the ordinate is the fusion weight.
For any pixel in the first luminance image, when its luminance value is less than min_limit, its fusion weight is 255; when its luminance value is greater than max_limit, its fusion weight is min_wt; when its luminance value lies in [min_limit, max_limit], its fusion weight gradually decreases as the luminance value increases, and the specific mapping between luminance value and fusion weight can be determined according to the actual brightness mapping model.
The values of the parameters min_wt, min_limit, and max_limit may be empirical values, for example, 180, 200, and 250, respectively.
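For illustration, the brightness mapping model of FIG. 2 can be materialized as a 256-entry lookup table and applied per pixel. The linear ramp between min_limit and max_limit is an assumption of this sketch; the model description only requires that the weight decrease gradually over that interval.

```python
import numpy as np

def fusion_weight_lut(min_wt=180, min_limit=200, max_limit=250):
    """Lookup table for the brightness mapping model of FIG. 2: weight 255
    below min_limit, min_wt above max_limit, and an (assumed linear)
    decreasing ramp in between."""
    lut = np.empty(256, dtype=np.float32)
    lut[:min_limit] = 255.0
    lut[max_limit:] = float(min_wt)
    pos = np.arange(min_limit, max_limit, dtype=np.float32)
    lut[min_limit:max_limit] = 255.0 + (min_wt - 255.0) * (pos - min_limit) / (max_limit - min_limit)
    return lut.astype(np.uint8)

def fusion_weight_map(y_nir, lut):
    """Per-pixel table lookup on the first (near-infrared) luminance image."""
    return lut[y_nir]
```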
Note that, in this embodiment, to optimize the image fusion effect, after the fusion weight map of the near-infrared image is determined based on the brightness mapping model, mean filtering may also be performed on the fusion weight map; its specific implementation is not detailed here.
Note that there is no necessary temporal relationship between step S120 and step S130: the operation in step S120 may be performed first and the operation in step S130 afterwards; the operation in step S130 may be performed first and the operation in step S120 afterwards; or the operations in steps S120 and S130 may be performed concurrently.
Step S140: perform luminance fusion on the first luminance image and the second luminance image according to the fusion weight map to obtain a luminance-fused image.
In the embodiments of the present application, after the fusion weight map of the near-infrared image is obtained, luminance fusion processing can be performed on the first luminance image obtained in step S110 and the second luminance image obtained in step S120 according to the fusion weight map to obtain the luminance-fused image.
For a pixel at any position, the fused luminance Y of the pixel is determined by the following formula (2):
Y = CLIP((Y_nir * wt + Y_LL * (256 - wt)) / 256, 0, 255)    (2),
where Y_nir is the luminance value of the pixel at that position in the first luminance image, wt is the fusion weight of the pixel at that position in the fusion weight map, Y_LL is the luminance value of the pixel at that position in the second luminance image, and CLIP() is the clipping calculation, i.e., the input value is clipped to the range [0, 255].
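For illustration, formula (2) maps directly onto whole-array NumPy operations; the integer arithmetic below is an assumption of the sketch.

```python
import numpy as np

def fuse_luma(y_nir, y_ll, wt):
    """Formula (2): Y = CLIP((Y_nir*wt + Y_LL*(256-wt))/256, 0, 255)."""
    wt = wt.astype(np.int32)
    y = (y_nir.astype(np.int32) * wt + y_ll.astype(np.int32) * (256 - wt)) // 256
    return np.clip(y, 0, 255).astype(np.uint8)
```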
Step S150: perform RGB fusion according to the second luminance image, the luminance-fused image, and the RGB image to obtain the fused RGB image as the output image of the image acquisition device.
In the embodiments of the present application, after the image acquisition device obtains the luminance-fused image, RGB fusion processing can be performed according to the luminance-fused image, the second luminance image obtained in step S120, and the RGB image obtained in step S100 to obtain the fused RGB image.
In one embodiment of the present application, performing RGB fusion according to the second luminance image, the luminance-fused image, and the RGB image to obtain the fused RGB image may include:
for a pixel at any position, determining the R, G, and B channel values V_out of the pixel respectively by the following formulas (3)-(4):
V_out = CLIP(V_in * Y / Y_LL, 0, 255) when Y_LL > 0    (3),
V_out = CLIP(Y, 0, 255) when Y_LL = 0    (4).
Here, V_in may be the R, G, or B channel value of the pixel at that position in the RGB image (the RGB image before fusion), Y_LL is the luminance value of the pixel at that position in the second luminance image, Y is the luminance value of the pixel at that position in the luminance-fused image, and CLIP() is the clipping calculation.
When Y_LL > 0: if V_in is the R channel value of the pixel at that position in the RGB image, then V_out is the R channel value of the pixel at that position in the fused RGB image; if V_in is the G channel value of the pixel at that position in the RGB image, then V_out is the G channel value of the pixel at that position in the fused RGB image; and if V_in is the B channel value of the pixel at that position in the RGB image, then V_out is the B channel value of the pixel at that position in the fused RGB image.
When Y_LL = 0, the R, G, and B channel values of the same pixel in the fused RGB image are equal.
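For illustration, formulas (3) and (4) can be applied to the whole image at once, as in the sketch below; the floating-point ratio and the guard against division by zero are implementation choices of the sketch.

```python
import numpy as np

def fuse_rgb(rgb_in, y_ll, y_fused):
    """Formulas (3)-(4): scale each channel by Y/Y_LL where Y_LL > 0,
    otherwise fall back to the fused luminance itself."""
    y = y_fused.astype(np.float32)
    yl = y_ll.astype(np.float32)
    scale = (y / np.maximum(yl, 1.0))[..., None]       # ratio of formula (3)
    out = np.where(yl[..., None] > 0,
                   rgb_in.astype(np.float32) * scale,  # Y_LL > 0: formula (3)
                   y[..., None])                       # Y_LL = 0: formula (4)
    return np.clip(out, 0, 255).astype(np.uint8)
```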
It should be recognized that the RGB fusion manner described in this embodiment is merely a specific example of implementing RGB fusion in the embodiments of the present application and does not limit the protection scope of the present application; adjustments and variations to the RGB fusion manner of the embodiments of the present application made by those skilled in the art without creative effort, on the basis of the embodiments of the present application, shall fall within the protection scope of the present application.
It can be seen that, in the method flow shown in FIG. 1, when fusing the visible light image and the near-infrared image, ideal luminance information of the scene is acquired from the near-infrared image and color information is acquired from the visible light image; on this basis, the visible light image and the near-infrared image are fused in both luminance and color, which improves the brightness of the image in a low-illumination environment (i.e., a scene with low ambient brightness) while preserving the color information of the image, improving the quality of the fused image. In addition, as can be seen from formulas (2)-(4), the calculation method provided by the present application is simple when computing the fused image, and the required processing time is short.
Referring to FIG. 3, a schematic flowchart of another image fusion method provided by an embodiment of the present application, the image fusion method may be applied to an image acquisition device, such as a surveillance camera in a video surveillance scenario. As shown in FIG. 3, the image fusion method may include the following steps:
Step S300: convert the visible light image collected by the image acquisition device into an RGB image.
Step S310: convert the near-infrared image collected by the image acquisition device into a first luminance image.
Step S320: convert the RGB image into a second luminance image.
Step S330: perform fusion weight calculation according to the first luminance image to obtain a fusion weight map of the near-infrared image.
In the embodiments of the present application, for the specific implementation of steps S300 to S330, reference may be made to the related descriptions of steps S100 to S130, which are not repeated here.
Step S340: perform detail calculation on the second luminance image to obtain a second detail image.
Step S350: perform detail calculation on the first luminance image to obtain a first detail image.
In the embodiments of the present application, to enhance the detail information of the fused image, after obtaining the first luminance image and the second luminance image, the image acquisition device can further perform detail calculation on the first luminance image and the second luminance image to obtain the corresponding detail images. Herein, the detail image obtained by detail calculation on the first luminance image is called the first detail image, and the detail image obtained by detail calculation on the second luminance image is called the second detail image.
In one embodiment of the present application, performing detail calculation on the first luminance image to obtain the first detail image may include: performing mean filtering on the first luminance image to obtain a first mean image; computing the difference between the first luminance image and the first mean image to obtain a first difference image; and performing a clipping operation on the first difference image to obtain the first detail image.
In this embodiment, after obtaining the first luminance image, the image acquisition device can perform mean filtering on the first luminance image to obtain the corresponding mean image (referred to herein as the first mean image). For example, the image acquisition device can perform mean filtering with a radius r (r is an empirical value, which may be set according to the actual scene) on the first luminance image to obtain the first mean image.
In this embodiment, after obtaining the first mean image, the image acquisition device can compute the difference between the first luminance image and the first mean image to obtain a signed difference image (referred to herein as the first difference image).
Note that the value range is widened when the difference between the first luminance image and the first mean image is computed. For example, if the first luminance image and the first mean image are both 8-bit images, their difference yields a 9-bit difference image. Therefore, after obtaining the first difference image, the image acquisition device also needs to perform a clipping operation on the first difference image, clipping it to a specified interval, to obtain the corresponding detail image (referred to herein as the first detail image).
In one implementation of this embodiment, the clipping operation on the first difference image can be implemented by the following formula (5):
Detail_p = CLIP(Diff_p * str / 128, deNirMin, deNirMax)    (5),
where Detail_p is the luminance detail of pixel P in the first detail image, Diff_p is the value of pixel P in the first difference image, str is a detail intensity control parameter, [deNirMin, deNirMax] is the clipping interval, and CLIP() is the clipping calculation, which clips the input value to the clipping interval.
Here str, deNirMin, and deNirMax are all empirical values; for example, str may be 64, deNirMin may be -64, and deNirMax may be 32.
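For illustration, the mean filtering, difference, and clipping steps of formula (5) are combined in the following sketch; the square box filter with edge padding stands in for the unspecified mean filter and is an assumption of the sketch.

```python
import numpy as np

def detail_image(y, radius=2, strength=64, de_min=-64, de_max=32):
    """Formula (5): box-blur the luminance image, subtract to get the signed
    difference, scale by str/128, and clip to [deNirMin, deNirMax]."""
    k = 2 * radius + 1
    pad = np.pad(y.astype(np.int32), radius, mode='edge')
    acc = np.zeros(y.shape, dtype=np.int32)
    for dy in range(k):              # sliding-window sum via shifted views
        for dx in range(k):
            acc += pad[dy:dy + y.shape[0], dx:dx + y.shape[1]]
    mean = acc // (k * k)
    diff = y.astype(np.int32) - mean   # the 9-bit signed difference image
    return np.clip(diff * strength // 128, de_min, de_max)
```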
In the embodiments of the present application, for the specific implementation of the image acquisition device performing detail calculation on the second luminance image to obtain the second detail image, reference may be made to the above related description of performing detail calculation on the first luminance image to obtain the first detail image, which is not repeated here.
Note that there is no necessary temporal relationship between step S340 and step S350: the operation in step S340 may be performed first and the operation in step S350 afterwards; the operation in step S350 may be performed first and the operation in step S340 afterwards; or the operations in steps S340 and S350 may be performed concurrently.
Step S360: perform luminance fusion on the first luminance image and the second luminance image according to the fusion weight map, the first detail image, and the second detail image to obtain a luminance-fused image.
In the embodiments of the present application, after obtaining the first detail image and the second detail image, the image acquisition device can perform luminance fusion on the first luminance image obtained in step S310 and the second luminance image obtained in step S320 according to the first detail image, the second detail image, and the fusion weight map obtained in step S330.
In one embodiment of the present application, performing luminance fusion on the first luminance image and the second luminance image according to the fusion weight map, the first detail image, and the second detail image may include:
for a pixel at any position, determining the fused luminance Y of the pixel by the following formula (6):
Y = CLIP((Y_nir * wt + Y_LL * (256 - wt)) / 256 + Detail_nir + Detail_LL, 0, 255)    (6),
where Y_nir is the luminance value of the pixel at that position in the first luminance image, wt is the fusion weight of the pixel at that position in the fusion weight map, Y_LL is the luminance value of the pixel at that position in the second luminance image, Detail_nir is the luminance detail of the pixel at that position in the first detail image, and Detail_LL is the luminance detail of the pixel at that position in the second detail image.
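For illustration, formula (6) is the weighted blend of formula (2) with the two detail layers added before clipping; a sketch under the same integer-arithmetic assumptions follows.

```python
import numpy as np

def fuse_luma_with_detail(y_nir, y_ll, wt, detail_nir, detail_ll):
    """Formula (6): Y = CLIP((Y_nir*wt + Y_LL*(256-wt))/256
                             + Detail_nir + Detail_LL, 0, 255)."""
    wt = wt.astype(np.int32)
    base = (y_nir.astype(np.int32) * wt + y_ll.astype(np.int32) * (256 - wt)) // 256
    return np.clip(base + detail_nir + detail_ll, 0, 255).astype(np.uint8)
```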
Step S370: perform RGB fusion according to the second luminance image, the luminance-fused image, and the RGB image to obtain the fused RGB image as the output image of the image acquisition device.
In the embodiments of the present application, for the specific implementation of step S370, reference may be made to the related description of step S150, which is not repeated here.
Note that the above formulas (1)-(6) and the related descriptions are all illustrated with the pixels of the visible light image and the near-infrared image represented in 8 bits. When other bit depths are used to represent image pixels, for example 12 bits, those skilled in the art can modify the formulas and descriptions accordingly.
To enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the technical solutions provided by the embodiments of the present application are described below with reference to a specific example.
In this example, the visible light image and the near-infrared image collected by the image acquisition device are both taken to be 8-bit images. It should be recognized, however, that the visible light image and the near-infrared image collected by the image acquisition device are not limited to 8-bit images, and may also be 12-bit or 16-bit images.
In this example, after the image acquisition device collects the visible light image and the near-infrared image, AWB correction, DEMOSAIC processing, DENOISE processing, and GAMMA correction can be performed on the visible light image to obtain an 8-bit visible RGB image. For the near-infrared image, DEMOSAIC processing, GAMMA correction, RGB2Y (RGB to Y, converting an RGB image into a Y-channel image, i.e., a luminance image) processing, and Y-channel registration can be performed to obtain an 8-bit infrared luminance image.
For the 8-bit visible light image, the image acquisition device can convert it into an 8-bit luminance image through RGB2Y processing and perform detail calculation on that 8-bit luminance image to obtain an 8-bit detail image. A schematic flowchart of the detail calculation performed by the image acquisition device on the 8-bit visible light luminance image is shown in FIG. 4. The image acquisition device can perform mean filtering with radius r on the 8-bit visible light luminance image to obtain an 8-bit mean image, then compute the difference between the 8-bit visible light luminance image and the 8-bit mean image to obtain a 9-bit signed difference image, and then clip the 9-bit signed difference image to the specified interval ([deNirMin, deNirMax]) through the clipping operation to obtain an 8-bit visible light detail image.
For the 8-bit infrared luminance image, the image acquisition device can perform fusion weight calculation on it. A flowchart of the fusion weight calculation performed by the image acquisition device on the 8-bit infrared luminance image is shown in FIG. 5. The image acquisition device can query the preset brightness mapping model (as shown in FIG. 2) according to the luminance value of each pixel in the 8-bit infrared luminance image to obtain the fusion weight of each pixel, thereby obtaining an 8-bit fusion weight map, and perform mean filtering on the 8-bit fusion weight map to obtain a filtered 8-bit fusion weight map.
On the other hand, the image acquisition device can perform detail calculation on the 8-bit infrared luminance image to obtain an 8-bit infrared detail image; for its specific implementation, reference may be made to the above related description of the 8-bit visible light luminance image, which is not repeated here.
After obtaining the 8-bit visible light detail image and the 8-bit infrared detail image, the image acquisition device can perform luminance fusion on the 8-bit visible light luminance image and the 8-bit infrared luminance image according to the 8-bit visible light detail image, the 8-bit infrared detail image, and the filtered 8-bit fusion weight map, and perform RGB fusion processing according to the fused luminance image, the 8-bit visible light luminance image, and the 8-bit visible RGB image to obtain the fused RGB image, realizing the fusion of the visible light image and the near-infrared image; the flowchart is shown in FIG. 6.
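For illustration, the sketch helpers above can be strung together into the end-to-end flow of FIG. 6. The function names are illustrative only, the inputs are assumed to be registered 8-bit images that have already been demosaiced and gamma-corrected, and the mean filtering of the weight map described above is omitted for brevity.

```python
def fuse(visible_rgb, nir_rgb):
    """End-to-end sketch of the FIG. 6 flow using the helpers defined above."""
    y_nir = rgb_to_luma(nir_rgb)        # first luminance image
    y_ll = rgb_to_luma(visible_rgb)     # second luminance image
    wt = fusion_weight_map(y_nir, fusion_weight_lut())
    d_nir = detail_image(y_nir)         # 8-bit infrared detail image
    d_ll = detail_image(y_ll)           # 8-bit visible light detail image
    y = fuse_luma_with_detail(y_nir, y_ll, wt, d_nir, d_ll)
    return fuse_rgb(visible_rgb, y_ll, y)   # fused RGB output image
```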
In the embodiments of the present application, the visible light image collected by the image acquisition device is converted into an RGB image, the near-infrared image collected by the image acquisition device is converted into a first luminance image, the RGB image is converted into a second luminance image, and fusion weight calculation is performed according to the first luminance image to obtain a fusion weight map of the near-infrared image. Detail calculation is then performed on the first luminance image and the second luminance image respectively to obtain a first detail image and a second detail image. Luminance fusion is further performed on the first luminance image and the second luminance image according to the fusion weight map, the first detail image, and the second detail image to obtain a luminance-fused image, and RGB fusion is performed according to the second luminance image, the luminance-fused image, and the RGB image to obtain the fused RGB image. Fusion of the visible light image and the near-infrared image is thereby realized, improving the brightness of the image in a low-illumination environment while preserving the color information of the image and improving the quality of the fused image.
The method provided by the present application has been described above. The apparatus provided by the present application is described below:
Referring to FIG. 7, a schematic structural diagram of an image fusion apparatus provided by an embodiment of the present application, the image fusion apparatus may be applied to the image acquisition device in the above method embodiments. As shown in FIG. 7, the image fusion apparatus may include the following units.
The first visible light processing unit 710 is configured to convert the visible light image collected by the image acquisition device into a red-green-blue (RGB) image.
The first infrared processing unit 720 is configured to convert the near-infrared image collected by the image acquisition device into a first luminance image.
The second visible light processing unit 730 is configured to convert the RGB image into a second luminance image.
The second infrared processing unit 740 is configured to perform fusion weight calculation according to the first luminance image to obtain a fusion weight map of the near-infrared image.
The luminance fusion unit 750 is configured to perform luminance fusion on the first luminance image and the second luminance image according to the fusion weight map to obtain a luminance-fused image.
The RGB fusion unit 760 is configured to perform RGB fusion according to the second luminance image, the luminance-fused image, and the RGB image to obtain a fused RGB image as an output image of the image acquisition device.
In an optional implementation, the second visible light processing unit 730 is further configured to perform detail calculation on the second luminance image to obtain a second detail image; the second infrared processing unit 740 is further configured to perform detail calculation on the first luminance image to obtain a first detail image; and the luminance fusion unit 750 is specifically configured to perform luminance fusion on the first luminance image and the second luminance image according to the fusion weight map, the first detail image, and the second detail image.
In an optional implementation, the second visible light processing unit 730 is specifically configured to: perform mean filtering on the second luminance image to obtain a second mean image; compute the difference between the second luminance image and the second mean image to obtain a second difference image; and perform a clipping operation on the second difference image to obtain the second detail image.
In an optional implementation, the second infrared processing unit 740 is specifically configured to: perform mean filtering on the first luminance image to obtain a first mean image; compute the difference between the first luminance image and the first mean image to obtain a first difference image; and perform a clipping operation on the first difference image to obtain the first detail image.
In an optional implementation, the clipping operation on the difference image is implemented by formula (5).
In an optional implementation, the luminance fusion unit 750 is specifically configured to determine, for a pixel at any position, the fused luminance Y of the pixel by formula (6).
In an optional implementation, the second infrared processing unit 740 is specifically configured to: for any pixel in the first luminance image, query a preset brightness mapping model according to the luminance value of the pixel to determine the fusion weight corresponding to that luminance value, where the brightness mapping model records the correspondence between luminance values and fusion weights.
In an optional implementation, the RGB fusion unit 760 is specifically configured to determine, for a pixel at any position, the R, G, and B channel values V_out of the pixel by formulas (3)-(4).
In an optional implementation, the luminance fusion unit 750 is specifically configured to determine, for a pixel at any position, the fused luminance Y of the pixel by formula (2).
Referring to FIG. 8, a schematic diagram of the hardware structure of an image fusion apparatus provided by an embodiment of the present application. The image fusion apparatus may include a processor 801 and a machine-readable storage medium 802 storing machine-executable instructions. The processor 801 and the machine-readable storage medium 802 can communicate via a system bus 803. By reading and executing the machine-executable instructions in the machine-readable storage medium 802 corresponding to the image fusion logic, the processor 801 can perform the image fusion method described above.
The machine-readable storage medium 802 mentioned herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and so on. For example, the machine-readable storage medium may be a RAM (random access memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disk (such as an optical disc or a DVD), a similar storage medium, or a combination thereof.
An embodiment of the present application also provides a machine-readable storage medium including machine-executable instructions, such as the machine-readable storage medium 802 in FIG. 8; the machine-executable instructions can be executed by the processor 801 in the image fusion apparatus to implement the image fusion method described above.
It should be noted that, herein, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or further includes elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.
The above descriptions are merely preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included in the protection scope of the present application.

Claims (16)

  1. 一种图像融合方法,应用于图像采集设备,包括:
    将所述图像采集设备采集的可见光图像转换为红绿蓝RGB图像;
    将所述图像采集设备采集的近红外图像转换为第一亮度图像;
    将所述RGB图像转换为第二亮度图像;
    根据所述第一亮度图像进行融合权重计算,以得到所述近红外图像的融合权重图;
    根据所述融合权重图对所述第一亮度图像和所述第二亮度图像进行亮度融合,以得到亮度融合图像;
    根据所述第二亮度图像、所述亮度融合图像以及所述RGB图像进行RGB融合,得到融合后的RGB图像作为所述图像采集设备的输出图像。
  2. 根据权利要求1所述的方法,其特征在于,根据所述融合权重图对所述第一亮度图像和所述第二亮度图像进行亮度融合之前,还包括:
    对所述第一亮度图像进行细节计算,以得到第一细节图像;
    对所述第二亮度图像进行细节计算,以得到第二细节图像;
    根据所述融合权重图对所述第一亮度图像和所述第二亮度图像进行亮度融合,包括:
    根据所述融合权重图、所述第一细节图像以及所述第二细节图像对所述第一亮度图像和所述第二亮度图像进行亮度融合。
  3. The method according to claim 2, wherein performing detail calculation on the first brightness image to obtain the first detail image comprises:
    applying mean filtering to the first brightness image to obtain a first mean image;
    computing a difference between the first brightness image and the first mean image to obtain a first difference image; and
    performing a clipping operation on the first difference image to obtain the first detail image.
  4. The method according to claim 2, wherein performing detail calculation on the second brightness image to obtain the second detail image comprises:
    applying mean filtering to the second brightness image to obtain a second mean image;
    computing a difference between the second brightness image and the second mean image to obtain a second difference image; and
    performing a clipping operation on the second difference image to obtain the second detail image.
  5. The method according to claim 3 or 4, wherein the clipping operation on the difference image is implemented through the following formula:
    Detail_p = CLIP(Diff_p * str / 128, deNirMin, deNirMax)
    where Detail_p is the brightness detail of pixel P in the detail image,
    Diff_p is the value of pixel P in the difference image,
    str is a detail strength control parameter,
    [deNirMin, deNirMax] is the clipping interval, and
    CLIP() is the clipping calculation.
  6. The method according to claim 2, wherein performing brightness fusion on the first brightness image and the second brightness image according to the fusion weight map, the first detail image, and the second detail image comprises:
    for a pixel at any position, determining the fused brightness Y of the pixel through the following formula:
    Y = CLIP(((Y_nir * wt + Y_LL * (256 - wt)) / 256 + Detail_nir + Detail_LL), 0, 255)
    where Y_nir is the brightness value of the pixel at that position in the first brightness image,
    wt is the fusion weight of the pixel at that position in the fusion weight map,
    Y_LL is the brightness value of the pixel at that position in the second brightness image,
    Detail_nir is the brightness detail of the pixel at that position in the first detail image,
    Detail_LL is the brightness detail of the pixel at that position in the second detail image, and
    CLIP() is the clipping calculation.
  7. The method according to claim 1, wherein performing fusion weight calculation according to the first brightness image comprises:
    for any pixel in the first brightness image, looking up a preset brightness mapping model according to the brightness value of the pixel to determine the fusion weight corresponding to the brightness value, wherein the brightness mapping model records the correspondence between brightness values and fusion weights.
  8. The method according to claim 1, wherein performing brightness fusion on the first brightness image and the second brightness image according to the fusion weight map comprises:
    for a pixel at any position, determining the fused brightness Y of the pixel through the following formula:
    Y = CLIP(((Y_nir * wt + Y_LL * (256 - wt)) / 256), 0, 255)
    where Y_nir is the brightness value of the pixel at that position in the first brightness image,
    wt is the fusion weight of the pixel at that position in the fusion weight map,
    Y_LL is the brightness value of the pixel at that position in the second brightness image, and
    CLIP() is the clipping calculation.
  9. The method according to claim 1, wherein performing RGB fusion according to the second brightness image, the brightness fusion image, and the RGB image to obtain the fused RGB image comprises:
    for a pixel at any position, determining the R, G, and B channel values V_out of the pixel through the following formulas:
    when Y_LL > 0: V_out = CLIP(V_in * Y / Y_LL, 0, 255)
    when Y_LL = 0: V_out = CLIP(Y, 0, 255)
    where V_in is the R, G, and B channel values of the pixel at that position in the RGB image,
    Y_LL is the brightness value of the pixel at that position in the second brightness image,
    Y is the brightness value of the pixel at that position in the brightness fusion image, and
    CLIP() is the clipping calculation.
  10. An image fusion apparatus, applied to an image acquisition device, the apparatus comprising:
    a first visible light processing unit, configured to convert a visible light image collected by the image acquisition device into a red, green, and blue RGB image;
    a first infrared processing unit, configured to convert a near-infrared image collected by the image acquisition device into a first brightness image;
    a second visible light processing unit, configured to convert the RGB image into a second brightness image;
    a second infrared processing unit, configured to perform fusion weight calculation according to the first brightness image to obtain a fusion weight map of the near-infrared image;
    a brightness fusion unit, configured to perform brightness fusion on the first brightness image and the second brightness image according to the fusion weight map to obtain a brightness fusion image; and
    an RGB fusion unit, configured to perform RGB fusion according to the second brightness image, the brightness fusion image, and the RGB image to obtain a fused RGB image as an output image of the image acquisition device.
  11. The apparatus according to claim 10, wherein
    the second visible light processing unit is further configured to perform detail calculation on the second brightness image to obtain a second detail image;
    the second infrared processing unit is further configured to perform detail calculation on the first brightness image to obtain a first detail image; and
    the brightness fusion unit is specifically configured to perform brightness fusion on the first brightness image and the second brightness image according to the fusion weight map, the first detail image, and the second detail image.
  12. The apparatus according to claim 11, wherein
    the second visible light processing unit is specifically configured to: apply mean filtering to the second brightness image to obtain a second mean image; compute a difference between the second brightness image and the second mean image to obtain a second difference image; and perform a clipping operation on the second difference image to obtain the second detail image.
  13. The apparatus according to claim 11, wherein
    the second infrared processing unit is specifically configured to: apply mean filtering to the first brightness image to obtain a first mean image; compute a difference between the first brightness image and the first mean image to obtain a first difference image; and perform a clipping operation on the first difference image to obtain the first detail image.
  14. The apparatus according to claim 10, wherein the second infrared processing unit is specifically configured to:
    for any pixel in the first brightness image, look up a preset brightness mapping model according to the brightness value of the pixel to determine the fusion weight corresponding to the brightness value,
    wherein the brightness mapping model records the correspondence between brightness values and fusion weights.
  15. An image fusion apparatus, comprising a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions executable by the processor, and the processor is caused by the machine-executable instructions to implement the image fusion method according to any one of claims 1-9.
  16. A machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to:
    convert a visible light image collected by an image acquisition device into a red, green, and blue RGB image;
    convert a near-infrared image collected by the image acquisition device into a first brightness image;
    convert the RGB image into a second brightness image;
    perform fusion weight calculation according to the first brightness image to obtain a fusion weight map of the near-infrared image;
    perform brightness fusion on the first brightness image and the second brightness image according to the fusion weight map to obtain a brightness fusion image; and
    perform RGB fusion according to the second brightness image, the brightness fusion image, and the RGB image to obtain a fused RGB image as an output image of the image acquisition device.
PCT/CN2019/073090 2018-04-11 2019-01-25 Image fusion method and apparatus therefor WO2019196539A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810320154.7 2018-04-11
CN201810320154.7A CN110363732A (zh) Image fusion method and apparatus therefor

Publications (1)

Publication Number Publication Date
WO2019196539A1 (zh)

Family

ID=68163504

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073090 WO2019196539A1 (zh) 2018-04-11 2019-01-25 图像融合方法及其装置

Country Status (2)

Country Link
CN (1) CN110363732A (zh)
WO (1) WO2019196539A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712485B (zh) * 2019-10-24 2024-06-04 Hangzhou Hikvision Digital Technology Co., Ltd. Image fusion method and apparatus
CN111161356B (zh) * 2019-12-17 2022-02-15 Dalian University of Technology Infrared and visible light fusion method based on bilevel optimization
CN111095919B (zh) * 2019-12-17 2021-10-08 Vtron Group Co., Ltd. Video fusion method, apparatus, and storage medium
CN111369486B (zh) * 2020-04-01 2023-06-13 Zhejiang Dahua Technology Co., Ltd. Image fusion processing method and apparatus
CN113763295B (zh) * 2020-06-01 2023-08-25 Hangzhou Hikvision Digital Technology Co., Ltd. Image fusion method, and method and apparatus for determining image offset
CN112767298B (zh) * 2021-03-16 2023-06-13 Hangzhou Hikvision Digital Technology Co., Ltd. Fusion method and apparatus for a visible light image and an infrared image
CN113421195B (zh) * 2021-06-08 2023-03-21 Hangzhou Hikvision Digital Technology Co., Ltd. Image processing method, apparatus, and device
CN114841904A (zh) * 2022-03-03 2022-08-02 Zhejiang Dahua Technology Co., Ltd. Image fusion method, electronic device, and storage apparatus
CN115239610B (zh) * 2022-07-28 2024-01-26 Axera Semiconductor (Shanghai) Co., Ltd. Image fusion method, apparatus, system, and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015157058A1 (en) * 2014-04-07 2015-10-15 Bae Systems Information & Electronic Systems Integration Inc. Contrast based image fusion
CN105069768B (zh) * 2015-08-05 2017-12-29 Wuhan Guide Infrared Co., Ltd. Visible light image and infrared image fusion processing system and fusion method
CN106600572A (zh) * 2016-12-12 2017-04-26 Changchun University of Science and Technology Adaptive low-illuminance visible light image and infrared image fusion method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140169671A1 (en) * 2012-12-14 2014-06-19 Industry-Academic Cooperation Foundation, Yonsei University Apparatus and method for color restoration
CN104200452A (zh) * 2014-09-05 2014-12-10 Xidian University Infrared and visible light image fusion method based on spectral graph wavelet transform, and apparatus therefor
CN104268847A (zh) * 2014-09-23 2015-01-07 Xidian University Infrared and visible light image fusion method based on interactive non-local means filtering
CN107784642A (zh) * 2016-08-26 2018-03-09 Beihang University Adaptive fusion method for infrared video and visible light video

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11250550B2 (en) * 2018-02-09 2022-02-15 Huawei Technologies Co., Ltd. Image processing method and related device
US20220044374A1 (en) * 2019-12-17 2022-02-10 Dalian University Of Technology Infrared and visible light fusion method
US11823363B2 (en) * 2019-12-17 2023-11-21 Dalian University Of Technology Infrared and visible light fusion method
CN112233079A (zh) * 2020-10-12 2021-01-15 Southeast University Multi-sensor image fusion method and system
CN112233079B (zh) * 2020-10-12 2022-02-11 Southeast University Multi-sensor image fusion method and system
EP4273794A4 (en) * 2020-12-30 2024-06-19 Hangzhou Hikmicro Sensing Technology Co., Ltd. IMAGE FUSION METHOD AND APPARATUS, IMAGE PROCESSING DEVICE AND BINOCULAR SYSTEM

Also Published As

Publication number Publication date
CN110363732A (zh) 2019-10-22

Similar Documents

Publication Publication Date Title
WO2019196539A1 (zh) Image fusion method and apparatus therefor
WO2019119842A1 (zh) Image fusion method and apparatus, electronic device, and computer-readable storage medium
US8363131B2 (en) Apparatus and method for local contrast enhanced tone mapping
WO2019148912A1 (zh) Image processing method and apparatus, electronic device, and storage medium
WO2017202061A1 (zh) Image defogging method, and image acquisition device implementing image defogging
WO2021109620A1 (zh) Method and apparatus for adjusting exposure parameters
JP6394338B2 (ja) Image processing apparatus, image processing method, and imaging system
US9426437B2 (en) Image processor performing noise reduction processing, imaging apparatus equipped with the same, and image processing method for performing noise reduction processing
WO2021073140A1 (zh) Monocular camera, image processing system, and image processing method
JP5996970B2 (ja) In-vehicle imaging device
JP6559229B2 (ja) Image processing apparatus, imaging apparatus, image processing method, and storage medium storing the image processing program of the image processing apparatus
WO2019105254A1 (zh) Background blurring processing method, apparatus, and device
WO2017080348A2 (zh) Scene-based photographing device and method, and computer storage medium
JP2014107852A (ja) Imaging apparatus
US20130120608A1 (en) Light source estimation device, light source estimation method, light source estimation program, and imaging apparatus
KR20190116077A (ko) Image processing
JP2016126750A (ja) Image processing system, image processing apparatus, imaging apparatus, image processing method, program, and recording medium
JP2004219277A (ja) Human body detection method and system, program, and recording medium
TWI542212B (zh) Photographic system with visibility enhancement
JP2020509661A (ja) Surveillance camera with composite-filtering-based autofocusing robust to changes in visibility conditions, and video surveillance system employing the same
KR102336449B1 (ko) Photographing apparatus and operating method thereof
US20120314044A1 (en) Imaging device
US8743236B2 (en) Image processing method, image processing apparatus, and imaging apparatus
JP2014209681A (ja) Color tone adjustment apparatus and color tone adjustment method
CN112241735A (zh) Image processing method, apparatus, and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19785139

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19785139

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.05.2021)
