CN112053392A - Rapid registration and fusion method for infrared and visible light images - Google Patents

Rapid registration and fusion method for infrared and visible light images

Info

Publication number
CN112053392A
Authority
CN
China
Prior art keywords
image
visible light
infrared
fusion
light image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010978976.1A
Other languages
Chinese (zh)
Inventor
陈震
卢锋
张聪炫
张弛
江少锋
危水根
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Hangkong University
Original Assignee
Nanchang Hangkong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Hangkong University filed Critical Nanchang Hangkong University
Priority to CN202010978976.1A
Publication of CN112053392A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 - Image registration using feature-based methods
    • G06T 7/337 - Image registration using feature-based methods involving reference images or patches
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10048 - Infrared image
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20004 - Adaptive image processing
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Abstract

The invention discloses a rapid registration and fusion method for infrared and visible light images. An infrared sensor and a visible light sensor simultaneously acquire images of the same scene, and the homography matrix that converts between the infrared image plane coordinate system and the visible light image plane coordinate system is solved by singular value decomposition. Taking the visible light image as the reference image, an infrared image registered with the visible light image is obtained through homography transformation, and the reference image and the registered infrared image are then fused by adaptive-weight pixel superposition. For the fusion of images from the two heterogeneous sensors, infrared and visible light, the invention achieves complementary advantages and solves the problem that visible-light image features are difficult to extract and register under weak illumination; at the same time, while maintaining adequate fusion accuracy, it greatly increases the speed of image fusion and addresses the real-time requirements of practical applications.

Description

Rapid registration and fusion method for infrared and visible light images
Technical Field
The invention relates to an image processing method, and in particular to a fast registration and fusion method for infrared and visible light images that combines fast image registration based on homography transformation with adaptive-weight, pixel-superposition image fusion.
Background
Image fusion combines two or more images of the same scene according to a specific algorithm so that a single fused image carries more comprehensive information, making it better suited to human visual understanding and to computer tasks such as detection, classification and recognition. Image fusion technology is now widely applied in many fields, including remote sensing image analysis and processing, computer vision, autonomous driving, medical image processing, and target detection and tracking. In target detection and tracking, visible light is the dominant modality: a visible light image provides detail and texture information about the target, but its effectiveness drops sharply under low-visibility conditions. Infrared imaging relies on a different mechanism from visible-light imaging and can therefore effectively image thermal targets when visibility is low, but it is insensitive to changes in scene brightness and has low imaging resolution.
Disclosure of Invention
The invention aims to provide a method for rapidly registering and fusing infrared and visible light images. By fusing the infrared image with the visible light image, the fused image both highlights the infrared target and retains the spatial detail of the visible light image, aiding overall understanding of the scene; this addresses the current need for real-time registration and fusion of infrared and visible light images in practical engineering applications and increases fusion speed while maintaining adequate fusion accuracy.
In order to achieve the above object, the present invention adopts the following technical solution. A rapid registration and fusion method for infrared and visible light images comprises the following steps:
1) selecting an open natural scene area with a depth of more than 100 meters and distinct object features;
2) simultaneously acquiring images of the same scene with an infrared sensor and a visible light sensor, extracting 32 feature points from each image, and pairing each infrared-image feature point (u, v) with its corresponding visible-light-image feature point (x, y) to form a set of 32 matched feature-point pairs;
3) using the feature-point set and singular value decomposition, solving the homography matrix T that converts between the infrared image plane coordinate system and the visible light image plane coordinate system, i.e. the registration parameter matrix of the infrared and visible light images; the homography transformation between the homogeneous coordinates (u, v, 1) of the infrared image plane and the homogeneous coordinates (x, y, 1) of the visible light image plane is:
$$
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\sim
T
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{bmatrix}
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
\qquad (1)
$$
in the formula, h1 … h9 are the nine elements of the homography matrix T to be solved;
the conversion relation formula of the characteristic point (u, v) of the infrared image and the characteristic point (x, y) of the visible light image is as follows:
$$
x = \frac{h_1 u + h_2 v + h_3}{h_7 u + h_8 v + h_9},
\qquad
y = \frac{h_4 u + h_5 v + h_6}{h_7 u + h_8 v + h_9}
\qquad (2)
$$
From equation (2), each matched pair of feature points (u, v) and (x, y) in the image planes yields the following two linear equations:
$$
\begin{aligned}
h_1 u + h_2 v + h_3 - h_7 u x - h_8 v x - h_9 x &= 0 \\
h_4 u + h_5 v + h_6 - h_7 u y - h_8 v y - h_9 y &= 0
\end{aligned}
\qquad (3)
$$
The 32 matched feature-point pairs yield 32 such pairs of equations (64 linear equations in total), and the homography matrix T can then be solved by singular value decomposition;
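Step 3) is the standard direct-linear-transform formulation solved by singular value decomposition. A minimal illustrative sketch in Python follows, assuming NumPy; the function name, array layout and normalization are choices of this sketch rather than details given in the patent:

```python
import numpy as np

def estimate_homography(ir_pts, vis_pts):
    """Solve the homography T of equation (1) from matched points by SVD.

    ir_pts  : (N, 2) array of infrared feature points (u, v)
    vis_pts : (N, 2) array of visible-light feature points (x, y)
    The patent uses N = 32 matched pairs.
    """
    A = []
    for (u, v), (x, y) in zip(ir_pts, vis_pts):
        # Two linear equations per matched pair, as in equation (3).
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x, -x])
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y, -y])
    A = np.asarray(A, dtype=np.float64)

    # The least-squares solution (up to scale) is the right singular
    # vector of A associated with its smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    T = Vt[-1].reshape(3, 3)
    return T / T[2, 2]          # fix the arbitrary scale so that h9 = 1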
4) using the homography matrix T and taking the visible light image as the reference image, obtaining, through homography transformation, an infrared image registered with the visible light image;
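For step 4), the registered infrared image can be produced with any perspective-warp routine once T is known. A sketch using OpenCV's warpPerspective is shown below; OpenCV is an assumed implementation choice, not something the patent specifies:

```python
import cv2

def register_infrared(ir_img, vis_img, T):
    """Step 4): warp the infrared image into the visible-light image
    plane using the homography T, so both images share the reference
    (visible-light) pixel grid."""
    h, w = vis_img.shape[:2]
    # warpPerspective applies T to the infrared image so that its
    # content lands at the corresponding visible-light coordinates.
    return cv2.warpPerspective(ir_img, T, (w, h))
```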
5) fusing the reference image and the registered infrared image; rapid fusion of the infrared and visible light images is achieved by superposing corresponding pixels with an adaptive weight, the superposition formula for each pair of corresponding pixels being:
F(x)=αf(x)+(1-α)f'(x) (4)
in the formula, F(x) is the fused pixel value, f(x) the pixel value of the reference image, f'(x) the pixel value of the registered infrared image, and α the adaptive superposition weight; α adapts mainly to the brightness of the visible light image and is calculated as follows:
[Equation (5): the adaptive weight α as a function of the visible-light brightness Y; given as an image in the original publication.]
the calculation formula of the luminance Y of the visible light image is as follows:
Y=0.299*R+0.587*G+0.114*B (6)
in the formula, R, G and B are the red, green and blue components of the visible light image, and 0.299, 0.587 and 0.114 are coefficients derived from the sensitivity of the human eye to the three primary colors red, green and blue.
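As an illustration of step 5), the sketch below applies equations (4) and (6) pixel by pixel. Because the exact form of equation (5) is reproduced only as an image in the original text, a simple linear mapping α = Y/255 is assumed here purely for illustration and should not be read as the patent's formula:

```python
import numpy as np

def fuse_adaptive(vis_img, ir_registered):
    """Pixel-wise adaptive-weight superposition, equation (4):
    F(x) = alpha * f(x) + (1 - alpha) * f'(x)."""
    vis = vis_img.astype(np.float64)
    ir = ir_registered.astype(np.float64)
    if ir.ndim == 2:                            # single-channel infrared
        ir = np.repeat(ir[..., None], 3, axis=2)

    # Equation (6): luminance from the R, G, B channels of the visible
    # image (channel order R, G, B is assumed here).
    R, G, B = vis[..., 0], vis[..., 1], vis[..., 2]
    Y = 0.299 * R + 0.587 * G + 0.114 * B

    # ASSUMED stand-in for equation (5): the weight grows with brightness.
    alpha = np.clip(Y / 255.0, 0.0, 1.0)[..., None]

    fused = alpha * vis + (1.0 - alpha) * ir    # equation (4)
    return np.clip(fused, 0, 255).astype(np.uint8)
```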
For the fusion of images from the two heterogeneous sensors, infrared and visible light, registration is first performed as the basis for fusion, and the registered images are then fused according to the specific fusion algorithm, achieving complementary advantages. This solves the problem that visible-light image features are difficult to extract and register under weak illumination; at the same time, while maintaining adequate fusion accuracy, the speed of image fusion is greatly increased, addressing the real-time requirements of practical applications.
Drawings
FIG. 1 is a schematic view of the test setup of the present invention;
FIG. 2a is a visible light image of the natural scene area according to the present invention;
FIG. 2b is an infrared image of the natural scene area according to the present invention;
FIG. 3a is the original visible light image acquired in the test scene;
FIG. 3b is the original infrared image acquired in the test scene;
FIG. 3c is the registered infrared image in the test scene;
FIG. 3d is the fused infrared and visible light image in the test scene.
Detailed Description
The invention is further illustrated by the following figures and examples; see FIGS. 1 to 3d.
The experimental procedure of the rapid registration and fusion method for infrared and visible light images is explained as follows:
1) selecting an open natural scene area 3 with a depth of more than 100 meters and distinct object features;
2) simultaneously acquiring images of the same scene 3 with the infrared sensor 1 and the visible light sensor 2 (as shown in FIG. 1), extracting 32 feature points from each image, and pairing each infrared-image feature point (u, v) with its corresponding visible-light-image feature point (x, y) to form a matched feature-point set; the visible light image of the natural scene 3 is shown in FIG. 2a and the infrared image in FIG. 2b. The visible light sensor 2 is a BASLER ACE1920-155UC image collector with an imaging size of 1920 × 1200, and the infrared sensor 1 is a V3-110918 infrared thermal imager with an imaging size of 640 × 480;
3) using the feature-point set and singular value decomposition, solving the homography matrix T that converts between the infrared image plane coordinate system and the visible light image plane coordinate system, i.e. the registration parameter matrix of the infrared and visible light images; the homography transformation between the homogeneous coordinates (u, v, 1) of the infrared image plane and the homogeneous coordinates (x, y, 1) of the visible light image plane is:
$$
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\sim
T
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{bmatrix}
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
\qquad (1)
$$
in the formula, h1 … h9 are the nine elements of the homography matrix T to be solved;
the conversion relation formula of the characteristic point (u, v) of the infrared image and the characteristic point (x, y) of the visible light image is as follows:
$$
x = \frac{h_1 u + h_2 v + h_3}{h_7 u + h_8 v + h_9},
\qquad
y = \frac{h_4 u + h_5 v + h_6}{h_7 u + h_8 v + h_9}
\qquad (2)
$$
From equation (2), each matched pair of feature points (u, v) and (x, y) in the image planes yields the following two linear equations:
$$
\begin{aligned}
h_1 u + h_2 v + h_3 - h_7 u x - h_8 v x - h_9 x &= 0 \\
h_4 u + h_5 v + h_6 - h_7 u y - h_8 v y - h_9 y &= 0
\end{aligned}
\qquad (3)
$$
The 32 matched feature-point pairs yield 32 such pairs of equations (64 linear equations in total), and the homography matrix T can then be solved by singular value decomposition;
4) in order to verify the accuracy and real-time performance of the registration and fusion, a multi-target test scene 3 is selected, and an infrared image and a visible light image of the test scene 3 are acquired simultaneously with the infrared sensor 1 and the visible light sensor 2; the visible light image of the test scene 3 is shown in FIG. 3a and the infrared image in FIG. 3b;
5) using the homography matrix T and taking the visible light image of the test scene 3 as the reference image, an infrared image of the test scene 3 registered with the visible light image is obtained through homography transformation; the registered infrared image is shown in FIG. 3c;
6) the reference image and the registered infrared image are fused; rapid fusion of the infrared and visible light images is achieved by superposing corresponding pixels with an adaptive weight. The fused image is shown in FIG. 3d, and the superposition formula for each pair of corresponding pixels is:
F(x)=αf(x)+(1-α)f'(x) (4)
in the formula, F(x) is the fused pixel value, f(x) the pixel value of the reference image, f'(x) the pixel value of the registered infrared image, and α the adaptive superposition weight; α adapts mainly to the brightness of the visible light image and is calculated as follows:
[Equation (5): the adaptive weight α as a function of the visible-light brightness Y; given as an image in the original publication.]
the calculation formula of the luminance Y of the visible light image is as follows:
Y=0.299*R+0.587*G+0.114*B (6)
in the formula, R, G and B are the red, green and blue components of the visible light image, and 0.299, 0.587 and 0.114 are coefficients derived from the sensitivity of the human eye to the three primary colors red, green and blue.
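Combining the sketches given earlier with the embodiment's setup (a 1920 × 1200 visible-light image and a 640 × 480 infrared image), an illustrative end-to-end run might look as follows; the file names and the files holding the 32 manually matched point pairs are hypothetical:

```python
import cv2
import numpy as np

# estimate_homography, register_infrared and fuse_adaptive are the
# sketch functions defined above.
vis = cv2.cvtColor(cv2.imread("visible_1920x1200.png"), cv2.COLOR_BGR2RGB)
ir = cv2.cvtColor(cv2.imread("infrared_640x480.png"), cv2.COLOR_BGR2RGB)
ir_pts = np.loadtxt("ir_points.txt")      # 32 x 2 infrared points (u, v)
vis_pts = np.loadtxt("vis_points.txt")    # 32 x 2 visible points (x, y)

T = estimate_homography(ir_pts, vis_pts)  # step 3), SVD solution
ir_reg = register_infrared(ir, vis, T)    # step 5), registered IR (FIG. 3c)
fused = fuse_adaptive(vis, ir_reg)        # step 6), fused image (FIG. 3d)
cv2.imwrite("fused.png", cv2.cvtColor(fused, cv2.COLOR_RGB2BGR))
```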

Claims (1)

1. A rapid registration and fusion method for infrared and visible light images, characterized by comprising the following steps:
1) selecting an open natural scene area with a depth of more than 100 meters and distinct object features;
2) simultaneously acquiring images of the same scene with an infrared sensor and a visible light sensor, extracting 32 feature points from each image, and pairing each infrared-image feature point (u, v) with its corresponding visible-light-image feature point (x, y) to form a set of 32 matched feature-point pairs;
3) using the feature-point set and singular value decomposition, solving the homography matrix T that converts between the infrared image plane coordinate system and the visible light image plane coordinate system, i.e. the registration parameter matrix of the infrared and visible light images; the homography transformation between the homogeneous coordinates (u, v, 1) of the infrared image plane and the homogeneous coordinates (x, y, 1) of the visible light image plane is:
$$
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\sim
T
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{bmatrix}
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
\qquad (1)
$$
in the formula, h1 … h9 are the nine elements of the homography matrix T to be solved;
the conversion relation formula of the characteristic point (u, v) of the infrared image and the characteristic point (x, y) of the visible light image is as follows:
$$
x = \frac{h_1 u + h_2 v + h_3}{h_7 u + h_8 v + h_9},
\qquad
y = \frac{h_4 u + h_5 v + h_6}{h_7 u + h_8 v + h_9}
\qquad (2)
$$
From equation (2), each matched pair of feature points (u, v) and (x, y) in the image planes yields the following two linear equations:
$$
\begin{aligned}
h_1 u + h_2 v + h_3 - h_7 u x - h_8 v x - h_9 x &= 0 \\
h_4 u + h_5 v + h_6 - h_7 u y - h_8 v y - h_9 y &= 0
\end{aligned}
\qquad (3)
$$
The 32 matched feature-point pairs yield 32 such pairs of equations (64 linear equations in total), and the homography matrix T can then be solved by singular value decomposition;
4) using the homography matrix T and taking the visible light image as the reference image, obtaining, through homography transformation, an infrared image registered with the visible light image;
5) fusing the reference image and the registered infrared image; rapid fusion of the infrared and visible light images is achieved by superposing corresponding pixels with an adaptive weight, the superposition formula for each pair of corresponding pixels being:
F(x)=αf(x)+(1-α)f'(x) (4)
in the formula, F(x) is the fused pixel value, f(x) the pixel value of the reference image, f'(x) the pixel value of the registered infrared image, and α the adaptive superposition weight; α adapts mainly to the brightness of the visible light image and is calculated as follows:
[Equation (5): the adaptive weight α as a function of the visible-light brightness Y; given as an image in the original publication.]
the calculation formula of the luminance Y of the visible light image is as follows:
Y=0.299*R+0.587*G+0.114*B (6)
in the formula, R, G and B are the red, green and blue components of the visible light image, and 0.299, 0.587 and 0.114 are coefficients derived from the sensitivity of the human eye to the three primary colors red, green and blue.
CN202010978976.1A (priority and filing date 2020-09-17) - Rapid registration and fusion method for infrared and visible light images - Pending - published as CN112053392A

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010978976.1A | 2020-09-17 | 2020-09-17 | Rapid registration and fusion method for infrared and visible light images

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010978976.1A | 2020-09-17 | 2020-09-17 | Rapid registration and fusion method for infrared and visible light images

Publications (1)

Publication Number | Publication Date
CN112053392A | 2020-12-08

Family

ID=73603342

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN202010978976.1A | Rapid registration and fusion method for infrared and visible light images | 2020-09-17 | 2020-09-17 | Pending

Country Status (1)

Country | Link
CN | CN112053392A



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017020595A1 (en) * 2015-08-05 2017-02-09 武汉高德红外股份有限公司 Visible light image and infrared image fusion processing system and fusion method
CN106548467A (en) * 2016-10-31 2017-03-29 广州飒特红外股份有限公司 The method and device of infrared image and visual image fusion
CN110363731A (en) * 2018-04-10 2019-10-22 杭州海康威视数字技术股份有限公司 A kind of image interfusion method, device and electronic equipment
CN110223262A (en) * 2018-12-28 2019-09-10 中国船舶重工集团公司第七一七研究所 A kind of rapid image fusion method based on Pixel-level
CN110378861A (en) * 2019-05-24 2019-10-25 浙江大华技术股份有限公司 A kind of image interfusion method and device
CN111242991A (en) * 2020-01-10 2020-06-05 大连理工大学 Method for quickly registering visible light and infrared camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Jian; ZHENG Shaofeng: "Fusion of Visible Light and Infrared Images Based on YUV and Wavelet Transform", Journal of Xi'an Technological University, no. 03 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669251A (en) * 2021-03-22 2021-04-16 深圳金三立视频科技股份有限公司 Image fusion method and terminal
CN115830424A (en) * 2023-02-09 2023-03-21 深圳酷源数联科技有限公司 Mining waste identification method, device and equipment based on fusion image and storage medium
CN115830424B (en) * 2023-02-09 2023-04-28 深圳酷源数联科技有限公司 Mining waste identification method, device, equipment and storage medium based on fusion image
CN116152207A (en) * 2023-02-27 2023-05-23 上海福柯斯智能科技有限公司 Image silhouette self-adaptive learning method and device

Similar Documents

Publication Publication Date Title
CN112053392A (en) Rapid registration and fusion method for infrared and visible light images
WO2022142759A1 (en) Lidar and camera joint calibration method
CN111209810B (en) Boundary frame segmentation supervision deep neural network architecture for accurately detecting pedestrians in real time through visible light and infrared images
EP3534326A1 (en) Method and apparatus for merging infrared image and visible light image
CN106570903B (en) A kind of visual identity and localization method based on RGB-D camera
CN104463108B (en) A kind of monocular real time target recognitio and pose measuring method
CN106485755B (en) Calibration method of multi-camera system
CN102999892B (en) Based on the depth image of region mask and the intelligent method for fusing of RGB image
CN110555889A (en) CALTag and point cloud information-based depth camera hand-eye calibration method
CN105809640B (en) Low illumination level video image enhancement based on Multi-sensor Fusion
CN105716539B (en) A kind of three-dimentioned shape measurement method of quick high accuracy
CN104835175B (en) Object detection method in a kind of nuclear environment of view-based access control model attention mechanism
CN102982518A (en) Fusion method of infrared image and visible light dynamic image and fusion device of infrared image and visible light dynamic image
CN107917700B (en) Small-amplitude target three-dimensional attitude angle measurement method based on deep learning
CN102141398A (en) Monocular vision-based method for measuring positions and postures of multiple robots
CN102313536A (en) Method for barrier perception based on airborne binocular vision
CN107560592B (en) Precise distance measurement method for photoelectric tracker linkage target
CN108959713B (en) Target distance and dead-position offset measurement method based on convolutional neural network
CN109920000B (en) Multi-camera cooperation-based dead-corner-free augmented reality method
CN103606170A (en) Streetscape image feature detecting and matching method based on same color scale
CN109753945A (en) Target subject recognition methods, device, storage medium and electronic equipment
CN116071424A (en) Fruit space coordinate positioning method based on monocular vision
CN112033408A (en) Paper-pasted object space positioning system and positioning method
Du et al. Recognition of mobile robot navigation path based on K-means algorithm
CN117422858A (en) Dual-light image target detection method, system, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination