CN111899289A - Infrared image and visible light image registration method based on image characteristic information


Info

Publication number
CN111899289A
Authority
CN
China
Prior art keywords
image
points
visible light
feature
projection matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010566412.7A
Other languages
Chinese (zh)
Other versions
CN111899289B (en)
Inventor
李伟
张蒙蒙
陶然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202010566412.7A
Publication of CN111899289A
Application granted
Publication of CN111899289B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10052 Images from lightfield camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for registering an infrared image with a visible light image based on image feature information. The method extracts point features and line features from infrared and visible light images of the same scene: the point features are used to compute a candidate registration projection matrix, and the line features are then used to evaluate and update that matrix, yielding a good registration result. Throughout the registration process, filtering removes noise and other interference, and only the point and line features of the images enter the subsequent computation. Both features, and the point features in particular, are robust to targets of different types, scales, and brightness, which makes the method robust across a wide variety of scenes.

Description

Infrared image and visible light image registration method based on image characteristic information
Technical Field
The invention relates to signal processing and to target detection, tracking, and identification for the infrared thermal imaging unit of an airborne optoelectronic system on modern aircraft, and in particular to a method for accurately detecting, tracking, identifying, and registering, in real time, the various aerial, ground, and sea targets acquired by an infrared sensor aboard such aircraft.
Background
A modern aircraft carries an airborne optoelectronic detection system for detecting targets in the air, on the ground, and at sea; it consists mainly of a visible light imaging (also called television) unit, an infrared thermal imaging unit, and a laser ranging unit. The visible light imaging (television) unit images the natural light reflected by a target; the infrared unit (mid-infrared band) passively detects and images the target's thermal radiation, searching for and measuring the target's direction (azimuth and pitch angles); and the laser (near-infrared band) illuminates and tracks the target to obtain its radial distance, so that the target is localized in three-dimensional space. Such a system is also called an optoelectronic radar. An airborne optoelectronic detection system thus combines the passive target-localization capability of television/infrared sensing with the high resolution of laser ranging.
The invention also relates to intelligent detection and identification of visible light video targets for the television unit in an airborne optoelectronic radar, and in particular to real-time, embedded detection and identification of visible light video targets, suited to accurately detecting, identifying, and registering, in real time, images of multiple types of ground and sea-surface targets acquired by a visible light camera sensor aboard a fast-moving airborne platform.
For the task of detecting and tracking small targets, a modern aircraft is equipped with the airborne optoelectronic detection system described above, comprising a visible light imaging (television) unit, an infrared thermal imaging unit, and a laser ranging unit. Because the monitoring sources include several types of sensor imagery, synchronizing and jointly imaging the multiple sources is difficult, which hinders the joint analysis required by real-time tracking tasks; registration of the multi-source images is therefore urgently needed. Image registration is also an indispensable step toward sensor fusion, multi-source cooperation, and joint analysis, and it is the foundation on which small-target tracking and detection tasks are implemented.
Disclosure of Invention
To solve these problems, the invention provides a method for registering an infrared image with a visible light image based on image feature information. The method extracts point features and line features from infrared and visible light images of the same scene: the point features are used to compute a candidate registration projection matrix, and the line features are then used to evaluate and update that matrix, yielding a good registration result. The method specifically comprises the following steps:
S1: Select a region of interest (ROI) of the infrared image, denoted I_red. The selected region is preprocessed: it is low-pass filtered and Canny edge detection is applied, so that line features are extracted from the region of interest in the infrared image.
S2: Select the corresponding ROI of the visible light image, denoted I_vis, and apply the same preprocessing: low-pass filter the selected region, run Canny edge detection, and extract the line features.
S3: Extract SIFT feature points from both images I_red and I_vis.
S4: From the feature points extracted from I_red and I_vis, randomly select 4 pairs of corresponding matched feature points and compute a projection matrix.
S5: Evaluate the registration quality through the distances between projected line features.
S6: Compute the projection matrix of the next round.
S7: Repeat the projection matrix computation many times, iteratively keep the best-performing projection matrix, and register the images by projection. A minimal end-to-end sketch of these steps follows.
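As a concrete illustration of S1 through S7, here is a minimal Python/OpenCV sketch of the whole pipeline. It is not the patented implementation: the file names, filter sizes, Canny thresholds, match count, and the XOR-based edge loss standing in for S5 are all assumptions made for the example.

```python
import cv2
import numpy as np

# Hypothetical inputs: ROIs of the infrared and visible light images (grayscale).
I_red = cv2.imread("infrared_roi.png", cv2.IMREAD_GRAYSCALE)
I_vis = cv2.imread("visible_roi.png", cv2.IMREAD_GRAYSCALE)

def preprocess(img):
    """S1/S2: low-pass filter, then Canny edge detection for line features."""
    smoothed = cv2.GaussianBlur(img, (5, 5), 1.0)   # low-pass filtering
    edges = cv2.Canny(smoothed, 50, 150)            # line (edge) features
    return smoothed, edges

red_s, red_edges = preprocess(I_red)
vis_s, vis_edges = preprocess(I_vis)

# S3: SIFT feature points and descriptors on both images.
sift = cv2.SIFT_create()
kp_r, des_r = sift.detectAndCompute(red_s, None)
kp_v, des_v = sift.detectAndCompute(vis_s, None)

# S4 prelude: match descriptors and keep the best candidate pairs.
matches = sorted(cv2.BFMatcher(cv2.NORM_L2).match(des_r, des_v),
                 key=lambda m: m.distance)[:50]

best_H, best_loss = None, np.inf
rng = np.random.default_rng(0)
for _ in range(1000):                                # S6/S7: iterate
    sample = rng.choice(len(matches), 4, replace=False)
    src = np.float32([kp_r[matches[i].queryIdx].pt for i in sample])
    dst = np.float32([kp_v[matches[i].trainIdx].pt for i in sample])
    try:
        H = cv2.getPerspectiveTransform(src, dst)    # S4: matrix from 4 pairs
    except cv2.error:
        continue                                     # degenerate sample, retry
    # S5: score by disagreement between projected and target edge maps
    # (a simplified stand-in for the patent's line-feature distance).
    warped = cv2.warpPerspective(red_edges, H, I_vis.shape[::-1])
    loss = np.count_nonzero((warped > 0) != (vis_edges > 0))
    if loss < best_loss:
        best_H, best_loss = H, loss

registered = cv2.warpPerspective(I_red, best_H, I_vis.shape[::-1])
```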
The above steps are further described in detail as follows:
the detailed steps of the characteristic point extraction process specifically comprise:
s301 using a Gaussian filter
Figure BDA0002547776040000031
And filtering the object to obtain the image golden tower. Constructing a scale space: l (x, y, σ) ═ G (x, y, σ) × I (x, y), where x denotes convolution operation,
Figure BDA0002547776040000032
m and n are dimensions of the Gaussian template.
S302: Construct the difference-of-Gaussians (DoG) pyramid. Down-sample the different scales of the processed image pyramid, and within each octave take differences of adjacent layers to obtain a group of difference images:

D(x, y, σ(s)) = (G(x, y, σ(s+1)) − G(x, y, σ(s))) * I(x, y) = L(x, y, σ(s+1)) − L(x, y, σ(s)),

where s is the layer index within each octave. A sketch of this construction follows.
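A compact sketch of S301 and S302, building the difference-of-Gaussians pyramid with NumPy/OpenCV. The octave count, s = 3 layers per octave, and base σ = 1.6 are conventional SIFT defaults assumed here, not values taken from the text.

```python
import cv2
import numpy as np

def dog_pyramid(img, num_octaves=4, s=3, sigma0=1.6):
    """Scale space L = G * I per octave, then adjacent differences
    D(x, y, σ(s)) = L(x, y, σ(s+1)) − L(x, y, σ(s))."""
    k = 2.0 ** (1.0 / s)
    octaves = []
    base = img.astype(np.float32)
    for _ in range(num_octaves):
        gauss = [cv2.GaussianBlur(base, (0, 0), sigma0 * k ** i)
                 for i in range(s + 3)]
        octaves.append([g2 - g1 for g1, g2 in zip(gauss, gauss[1:])])
        # Down-sample the appropriate layer to seed the next octave.
        base = cv2.resize(gauss[s], None, fx=0.5, fy=0.5,
                          interpolation=cv2.INTER_NEAREST)
    return octaves
```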
S303: Locate the key points. Take a Taylor expansion of D around each extremum found in the discrete space:

D(X) = D + (∂D/∂X)ᵀ·X + (1/2)·Xᵀ·(∂²D/∂X²)·X,

where D(X) represents the selected feature point and its response, and X = (x, y, σ)ᵀ represents its coordinates; this is the Taylor expansion of the feature point with respect to the spatial coordinates.
Differentiating and setting the result to zero gives the refined extremum:

X̂ = −(∂²D/∂X²)⁻¹·(∂D/∂X),

where X̂ is the offset from the interpolation center. Finally, form the Hessian matrix at the extremum,

H = | Dxx  Dxy |
    | Dxy  Dyy |,

and screen the feature points with

tr(H)² / det(H) < (r + 1)² / r,

removing edge responses to obtain the final key points, where r is a specified screening parameter, H is the second-derivative matrix, and tr(H) and det(H) are its trace and determinant. A sketch of this edge screening follows.
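The edge-response test of S303 in isolation, as a hedged sketch: the finite-difference Hessian below is the standard discrete approximation, and r = 10 is the customary screening parameter (an assumption; the text leaves r unspecified).

```python
import numpy as np

def passes_edge_test(dog_layer, y, x, r=10.0):
    """Keep a DoG extremum only if tr(H)^2 / det(H) < (r + 1)^2 / r."""
    d = dog_layer
    dxx = d[y, x + 1] + d[y, x - 1] - 2.0 * d[y, x]
    dyy = d[y + 1, x] + d[y - 1, x] - 2.0 * d[y, x]
    dxy = 0.25 * (d[y + 1, x + 1] - d[y + 1, x - 1]
                  - d[y - 1, x + 1] + d[y - 1, x - 1])
    tr = dxx + dyy                       # trace of the 2x2 Hessian
    det = dxx * dyy - dxy * dxy          # determinant of the 2x2 Hessian
    return det > 0 and tr * tr / det < (r + 1.0) ** 2 / r
```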
S304: Determine the key point direction. To make the key point description rotation invariant, each key point is assigned a reference direction, obtained from the gradients of the pixels within a 3σ neighborhood of the key point in the pyramid. The gradient magnitude

m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)

and direction

θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))

describe this distribution. The gradient and direction of every pixel are accumulated into an orientation histogram; the histogram maximum gives the key point's main direction, and any direction whose histogram value exceeds 80% of that maximum is kept as an auxiliary direction. A sketch of this orientation assignment follows.
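A sketch of the orientation histogram of S304. The 36-bin histogram and the Gaussian weighting with 1.5σ are conventional SIFT choices assumed for illustration; the 3σ window and the 80% auxiliary-direction rule come from the text.

```python
import numpy as np

def keypoint_orientations(L, y, x, sigma, num_bins=36):
    """Main and auxiliary directions from gradients in a 3σ neighborhood."""
    radius = int(round(3 * sigma))
    hist = np.zeros(num_bins)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 < yy < L.shape[0] - 1 and 0 < xx < L.shape[1] - 1:
                gx = L[yy, xx + 1] - L[yy, xx - 1]       # horizontal gradient
                gy = L[yy + 1, xx] - L[yy - 1, xx]       # vertical gradient
                mag = np.hypot(gx, gy)                   # magnitude m(x, y)
                theta = np.arctan2(gy, gx)               # direction θ(x, y)
                w = np.exp(-(dx * dx + dy * dy) / (2.0 * (1.5 * sigma) ** 2))
                b = int((theta + np.pi) / (2 * np.pi) * num_bins) % num_bins
                hist[b] += w * mag
    peak = hist.max()
    # Main direction at the peak; auxiliary directions above 80% of it.
    return [2 * np.pi * b / num_bins - np.pi
            for b in range(num_bins) if hist[b] >= 0.8 * peak]
```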
S305: Key point feature description:
S30501: Determine the range required for the feature point description; a window of radius

radius = (3σ · √2 · (d + 1) + 1) / 2

is used, where σ is the scale of the key point and d is the number of sub-regions per side.
S30502: Rotate the coordinate axes to the key point direction θ,

| x' |   | cos θ  −sin θ | | x |
| y' | = | sin θ   cos θ | | y |,

to ensure rotation invariance.
S30503: Assign the sampling points in the rotated neighborhood to their corresponding sub-regions, and determine the 8 directions and the weight of each sub-region.
S30504: Compute the 8 directional gradients of each seed point.
S30505: Assemble the feature description vector from the 4 × 4 × 8 = 128 gradient values.
S30506: Threshold and normalize the 128 gradient values to improve robustness to illumination change; a sketch of this step follows the list.
S30507: Order the feature vector elements by scale information to obtain the final feature descriptor.
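The thresholding-and-normalization step S30506 on its own, as a sketch; the 0.2 clipping threshold is the customary SIFT value, assumed here since the text does not state one.

```python
import numpy as np

def normalize_descriptor(vec, thresh=0.2):
    """Clip dominant gradients and renormalize a 128-d descriptor (S30506)."""
    v = np.asarray(vec, dtype=np.float32)
    v /= max(np.linalg.norm(v), 1e-12)   # normalize to unit length
    v = np.minimum(v, thresh)            # suppress large gradient components
    v /= max(np.linalg.norm(v), 1e-12)   # renormalize after clipping
    return v
```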
The homography matrix is solved in the following concrete steps:
S401: Match the feature points selected from the different source images according to their feature descriptors, screen the matched pairs, keep the pairs with the highest matching quality, and then randomly select 4 pairs of feature points to compute the mapping (x, y) → (x', y').
S402: For each pair of matching points, set up the equation

s · (xi', yi', 1)ᵀ = H · (xi, yi, 1)ᵀ,   H = | h11 h12 h13 |
                                             | h21 h22 h23 |
                                             | h31 h32 h33 |,

where s is the projective scale, which yields the two explicit expressions in the projection matrix elements
(h31·xi + h32·yi + h33) · xi' = h11·xi + h12·yi + h13 and
(h31·xi + h32·yi + h33) · yi' = h21·xi + h22·yi + h23.
S403: Rewrite the H-matrix parameters in matrix form. Since a homography is defined only up to scale, fix h33 = 1 to obtain a unique transformation matrix, leaving 8 degrees of freedom. From S402, each pair of matching points yields the two equations above, which are linear in the remaining eight elements:

| x  y  1  0  0  0  −x·x'  −y·x' |       | x' |
| 0  0  0  x  y  1  −x·y'  −y·y' | · h = | y' |,

with h = (h11, h12, h13, h21, h22, h23, h31, h32)ᵀ. Substituting the 4 pairs of random key points selected in S401 gives an 8 × 8 system whose solution is the projection matrix. A direct sketch of this solve follows.
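The 8 × 8 solve of S402 and S403 written out directly; a sketch under the assumption that the four correspondences are in general position (np.linalg.solve fails on a singular system otherwise).

```python
import numpy as np

def homography_from_4(src, dst):
    """Projection matrix from 4 point pairs with h33 fixed to 1 (S403)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # (h31·x + h32·y + 1)·x' = h11·x + h12·y + h13
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        # (h31·x + h32·y + 1)·y' = h21·x + h22·y + h23
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)   # fill in h33 = 1
```

For instance, mapping the unit square's corners (0,0), (1,0), (0,1), (1,1) to (10,10), (20,10), (10,20), (20,20) recovers the pure scale-and-shift matrix [[10, 0, 10], [0, 10, 10], [0, 0, 1]].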
After the key steps S3 and S4, the infrared image can be mapped directly onto the remote sensing image through the projection matrix; the distance between corresponding line features in the two images is then computed, giving a loss function, and the loss value is recorded. The computation of S4 is then repeated with fresh matching points from S3 to obtain further loss values; each round is compared with the previous one, and the matrix parameters with the better loss are retained. Iterating this solve finally yields the optimal projection matrix. A sketch of one possible line-feature loss follows.
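One way to realize the line-feature distance of S5, sketched with a distance transform (chamfer-style matching). The patent only says the loss is the distance between the same line features in the two images, so this particular formulation is an assumption.

```python
import cv2
import numpy as np

def line_feature_loss(edges_src, edges_dst, H):
    """Mean distance from projected source edge pixels to the nearest
    target edge pixel; smaller means better line-feature alignment."""
    h, w = edges_dst.shape
    warped = cv2.warpPerspective(edges_src, H, (w, h))
    # distanceTransform measures distance to the nearest zero pixel,
    # so invert the edge map to make edge pixels the zeros.
    dist = cv2.distanceTransform(255 - edges_dst, cv2.DIST_L2, 3)
    on = warped > 0
    return float(dist[on].mean()) if on.any() else np.inf
```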
Drawings
Fig. 1 is a detailed flowchart of the feature point extraction process.
Fig. 2 is an overview flowchart of feature point matching.
Fig. 3 is a flowchart of the overall algorithm.
Fig. 4 shows the visible light image after feature point extraction.
Fig. 5 shows the infrared image after feature point extraction.
Fig. 6 shows the feature point matching result.
Detailed Description
S301: Filter the image with the Gaussian kernel

G(x, y, σ) = (1 / (2πσ²)) · exp(−((x − m/2)² + (y − n/2)²) / (2σ²))

to obtain the image pyramid, and construct the scale space L(x, y, σ) = G(x, y, σ) * I(x, y), where * denotes convolution and m and n are the dimensions of the Gaussian template; here m = 5 and n = 5.
S302: Construct the difference-of-Gaussians pyramid. Down-sample the different scales in the image pyramid, and within each octave take differences of adjacent layers to obtain a group of difference images:

D(x, y, σ(s)) = (G(x, y, σ(s+1)) − G(x, y, σ(s))) * I(x, y) = L(x, y, σ(s+1)) − L(x, y, σ(s)).
S303: Locate the key points. Take a Taylor expansion of D around each extremum found in the discrete space:

D(X) = D + (∂D/∂X)ᵀ·X + (1/2)·Xᵀ·(∂²D/∂X²)·X.

Differentiating and setting the result to zero gives the refined extremum:

X̂ = −(∂²D/∂X²)⁻¹·(∂D/∂X),

where X̂ is the offset from the interpolation center. Finally, form the Hessian matrix at the extremum,

H = | Dxx  Dxy |
    | Dxy  Dyy |,

and screen the feature points with tr(H)² / det(H) < (r + 1)² / r to remove edge responses and obtain the final key points, where r is a specified screening parameter.
S304: Determine the key point direction. To make the key point description rotation invariant, each key point is assigned a reference direction, obtained from the gradients of the pixels within a 3σ neighborhood of the key point in the pyramid. The gradient magnitude

m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)

and direction

θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))

describe this distribution. The gradient and direction of every pixel are accumulated into an orientation histogram; the histogram maximum gives the key point's main direction, and any direction whose histogram value exceeds 80% of that maximum is kept as an auxiliary direction.
S305: Key point feature description:
S30501: Determine the range required for the feature point description; a window of radius

radius = (3σ · √2 · (d + 1) + 1) / 2

is used, where σ is the scale of the key point and d is the number of sub-regions per side.
S30502: Rotate the coordinate axes to the key point direction θ,

| x' |   | cos θ  −sin θ | | x |
| y' | = | sin θ   cos θ | | y |,

to ensure rotation invariance.
S30503: Assign the sampling points in the rotated neighborhood to their corresponding sub-regions, and determine the 8 directions and the weight of each sub-region.
S30504: Compute the 8 directional gradients of each seed point.
S30505: Assemble the feature description vector from the 4 × 4 × 8 = 128 gradient values.
S30506: Threshold and normalize the 128 gradient values to improve robustness to illumination change.
S30507: Order the feature vector elements by scale information to obtain the final feature descriptor.
The homography matrix is solved in the following concrete steps:
S401: Match the feature points selected from the different source images according to their feature descriptors, screen the matched pairs, keep the pairs with the highest matching quality, and then randomly select 4 pairs of feature points to compute the mapping (x, y) → (x', y').
S402: For each pair of matching points, set up the equation

s · (xi', yi', 1)ᵀ = H · (xi, yi, 1)ᵀ,   H = | h11 h12 h13 |
                                             | h21 h22 h23 |
                                             | h31 h32 h33 |,

which yields the two explicit expressions in the projection matrix elements
(h31·xi + h32·yi + h33) · xi' = h11·xi + h12·yi + h13 and
(h31·xi + h32·yi + h33) · yi' = h21·xi + h22·yi + h23.
S403: Rewrite the H-matrix parameters in matrix form. Since a homography is defined only up to scale, fix h33 = 1 to obtain a unique transformation matrix, leaving 8 degrees of freedom. As seen in S402, each pair of matching points yields two linear equations in the remaining eight elements, so substituting the 4 pairs of random key points selected in S401 gives an 8 × 8 system whose solution is the projection matrix. With matching feature points randomly selected in one iteration, the projection matrix computed this way may look like the following:
| −0.153560371517028    0.529721362229102    0 |
| −0.605882352941176   −0.0117647058823525   0 |
|  829.180495356037    −57.7284829721364     1 |
After the key steps S3 and S4, the infrared image can be mapped directly onto the remote sensing image through the projection matrix; the distance between corresponding line features in the two images is then computed, giving a loss function, and the loss value is recorded. The computation of S4 is then repeated with fresh matching points from S3 to obtain further loss values; each round is compared with the previous one, and the matrix parameters with the better loss are retained. Iterating finally yields the optimal projection matrix, and applying it produces the registered image, as sketched below.
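Applying a projection matrix of the tabulated form to produce the registered image. Note that the printed values carry the translation-like terms in the bottom row, which matches a row-vector convention; OpenCV expects the column-vector convention, so the sketch transposes first. Both this interpretation and the variable names (I_red, I_vis from the earlier sketch) are assumptions.

```python
import cv2
import numpy as np

# Matrix as printed above (row-vector convention assumed).
H_table = np.array([[-0.153560371517028,   0.529721362229102,  0.0],
                    [-0.605882352941176,  -0.0117647058823525, 0.0],
                    [829.180495356037,   -57.7284829721364,    1.0]])
H = H_table.T   # convert to OpenCV's column-vector convention

registered = cv2.warpPerspective(I_red, H, (I_vis.shape[1], I_vis.shape[0]))
overlay = cv2.addWeighted(I_vis, 0.5, registered, 0.5, 0)  # quick visual check
```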
The method extracts point features and line features from the infrared and the visible light image separately, computes the projection matrix from paired point features, and selects the best-performing projection matrix by evaluating it with the line features, thereby achieving high-quality registration of infrared and visible light images. In addition, filtering removes noise and other interference throughout the registration process, and only the point and line features of the images enter the subsequent computation; both features, the point features in particular, are robust to targets of different types, scales, and brightness, making the method robust for registering a wide variety of scenes.

Claims (5)

1. A method for registering an infrared image with a visible light image based on image feature information, characterized in that the method specifically comprises the following steps:
S1, selecting a region of interest (ROI) of the infrared image, denoted I_red; first the ROI of the infrared image is selected and preprocessed: the selected part is low-pass filtered, Canny edge detection is applied, and line features are selected from the region of interest in the infrared image;
S2, selecting the corresponding ROI of the visible light image, denoted I_vis; the ROI of the visible light image is selected and given the same preprocessing: the selected part is low-pass filtered, Canny edge detection is applied, and line features are selected;
S3, performing SIFT feature point extraction on both images I_red and I_vis;
S4, randomly selecting 4 pairs of corresponding matched feature points from those extracted from I_red and I_vis, and computing a projection matrix;
S5, evaluating the registration quality through the distances between projected line features;
S6, computing the projection matrix of the next round;
and S7, computing the projection matrix multiple times, iteratively selecting the best projection matrix, and registering the images by projection.
2. The method for registering the infrared image and the visible light image based on the image characteristic information as claimed in claim 1, wherein:
the detailed steps of the characteristic point extraction process specifically comprise:
s301, filtering by using a Gaussian filter object to obtain an image pyramid;
s302, constructing a differential pyramid, and performing down-sampling on images with different scales in the processed image pyramid to obtain a group of differential images on each layer to form the differential pyramid;
s303, positioning key points, carrying out Taylor expansion on extreme points obtained in a discrete space, and then obtaining the extreme points through derivation; finally, solving hessian matrix of the maximum value, and screening the characteristic points, thereby removing edge response and obtaining final key points, wherein r is a designated screening parameter;
s304, determining the direction of the key point, wherein in order to ensure that the description of the key point has rotation invariance, the key point determines a reference direction, and the gradient and the direction distribution of pixels in the range of 3 sigma in the key point obtained in the pyramid are described; counting the gradient and direction of each pixel point, establishing a gradient histogram of the direction, taking the maximum value in the histogram as the main direction of the key point, and taking the direction which exceeds the maximum value by 80 percent as an auxiliary direction;
and S305, key point feature description.
3. The method for registering an infrared image with a visible light image based on image feature information according to claim 2, characterized in that the S305 key point feature description process is as follows:
S30501, determining the range required for the feature point description;
S30502, rotating the coordinate axes to the key point direction to ensure rotation invariance;
S30503, assigning the sampling points in the rotated neighborhood to their corresponding sub-regions, and determining the 8 directions and the weight of each sub-region;
S30504, computing the 8 directional gradients of each seed point;
S30505, obtaining the feature description vector from the 128 gradient values;
S30506, thresholding and normalizing the 128 gradient values to improve robustness to illumination change;
and S30507, ordering the feature vector elements by scale information to obtain the final feature descriptor.
4. The method for registering an infrared image with a visible light image based on image feature information according to claim 1, characterized in that the projection matrix is solved in the following concrete steps:
S401, matching the feature points selected from the different source images according to their feature descriptors, screening the matched pairs, and randomly selecting 4 pairs of feature points for the computation;
S402, establishing an equation for each pair of matching points to obtain explicit expressions in the projection matrix elements;
and S403, rewriting the H-matrix parameters in matrix form with 8 degrees of freedom, and substituting the 4 pairs of random key points selected in S401 to solve for the projection matrix.
5. The method for registering the infrared image and the visible light image based on the image characteristic information as claimed in claim 1, wherein:
after the computation of S3 and S4 is completed, the infrared image is mapped directly onto the remote sensing image through the projection matrix, the distance between corresponding line features in the two images is computed to give a loss function, and the loss value is recorded; the computation of S4 is then repeated with the matching points obtained in S3 to obtain further loss values, each round is compared with the previous one, the matrix parameters with the better loss are retained, and iterating the solve finally yields the optimal projection matrix.
CN202010566412.7A 2020-06-19 2020-06-19 Infrared image and visible light image registration method based on image characteristic information Active CN111899289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010566412.7A CN111899289B (en) 2020-06-19 2020-06-19 Infrared image and visible light image registration method based on image characteristic information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010566412.7A CN111899289B (en) 2020-06-19 2020-06-19 Infrared image and visible light image registration method based on image characteristic information

Publications (2)

Publication Number Publication Date
CN111899289A 2020-11-06
CN111899289B 2023-04-18

Family

ID=73207380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010566412.7A Active CN111899289B (en) 2020-06-19 2020-06-19 Infrared image and visible light image registration method based on image characteristic information

Country Status (1)

Country Link
CN (1) CN111899289B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100045809A1 (en) * 2008-08-22 2010-02-25 Fluke Corporation Infrared and visible-light image registration
CN102176243A (en) * 2010-12-30 2011-09-07 浙江理工大学 Target ranging method based on visible light and infrared camera
CN107464252A (en) * 2017-06-30 2017-12-12 南京航空航天大学 A kind of visible ray based on composite character and infrared heterologous image-recognizing method
CN110111372A (en) * 2019-04-16 2019-08-09 昆明理工大学 Medical figure registration and fusion method based on SIFT+RANSAC algorithm
CN110232655A (en) * 2019-06-13 2019-09-13 浙江工业大学 A kind of double light image splicings of the Infrared-Visible for coal yard and fusion method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344795A (en) * 2021-08-05 2021-09-03 常州铭赛机器人科技股份有限公司 Rapid image splicing method based on prior information
CN113344795B (en) * 2021-08-05 2021-10-29 常州铭赛机器人科技股份有限公司 Rapid image splicing method based on prior information
CN117132796A (en) * 2023-09-09 2023-11-28 廊坊市珍圭谷科技有限公司 Position efficient matching method based on heterogeneous projection

Also Published As

Publication number Publication date
CN111899289B (en) 2023-04-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant