CN112950685B - Infrared and visible light image registration method, system and storage medium - Google Patents

Infrared and visible light image registration method, system and storage medium

Info

Publication number
CN112950685B
CN112950685B (application CN202110270086.XA)
Authority
CN
China
Prior art keywords
image
infrared
visible light
points
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110270086.XA
Other languages
Chinese (zh)
Other versions
CN112950685A (en)
Inventor
徐海洋
赵伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qinhuai Innovation Research Institute Of Nanjing University Of Aeronautics And Astronautics
Nanjing University of Aeronautics and Astronautics
Original Assignee
Qinhuai Innovation Research Institute Of Nanjing University Of Aeronautics And Astronautics
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qinhuai Innovation Research Institute Of Nanjing University Of Aeronautics And Astronautics, Nanjing University of Aeronautics and Astronautics filed Critical Qinhuai Innovation Research Institute Of Nanjing University Of Aeronautics And Astronautics
Priority to CN202110270086.XA priority Critical patent/CN112950685B/en
Publication of CN112950685A publication Critical patent/CN112950685A/en
Application granted granted Critical
Publication of CN112950685B publication Critical patent/CN112950685B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an infrared and visible light image registration method, system and storage medium. The method comprises the following steps: preprocessing an infrared image and a visible light image; performing saliency enhancement on the infrared image and the visible light image respectively through frequency tuning to obtain saliency response maps; performing phase consistency detection on the obtained infrared and visible light saliency response maps to extract the similar structural edge features shared by the two images; and extracting feature points from the processed similar-structure edge images, matching the extracted feature points with a matching algorithm, screening out mismatched points, and computing the registered image. Because registration is driven by image structure information, the invention overcomes the difficulty that traditional methods have in extracting effective features from infrared and visible heterogeneous images with large appearance differences, improves both the accuracy and the speed of image registration, can be deployed on an embedded platform, and therefore has broad market prospects and application value.

Description

Infrared and visible light image registration method, system and storage medium
Technical Field
The invention belongs to the fields of image registration and computer vision, and in particular relates to an infrared and visible light image registration method, system and storage medium.
Background
Registration of infrared and visible light images is a key and difficult problem in heterogeneous image registration. An infrared image reflects the thermal radiation of objects, while a visible light image reflects the light they reflect; the two kinds of information complement each other, so registering and fusing the heterogeneous images has wide applications in military reconnaissance, target tracking, pattern recognition and other fields. Feature-point-based registration methods are widely used because of their high computational speed and good robustness: feature elements that remain stable across the images (edges, corner points, centers of closed regions, and the like) are first extracted, and the parameters of a spatial geometric transformation model are then solved from these elements to complete the registration. However, because of differences in imaging principles and shooting conditions, infrared and visible images differ greatly; infrared images suffer from heavier noise and lower gray-level contrast and signal-to-noise ratio, so feature-point extraction operators detect far fewer feature points in them than in visible images, which fails to meet practical registration requirements.
Therefore, a new solution is needed to solve this problem.
Disclosure of Invention
The invention aims to: in order to overcome the defects in the prior art, an infrared and visible light image registration method, system and storage medium are provided, which have the advantages of a high matching rate, high accuracy, strong real-time performance and good robustness.
The technical scheme is as follows: in order to achieve the above purpose, the present invention provides an infrared and visible light image registration method, comprising the following steps:
S1: acquiring an infrared image and a visible light image of the same scene, and preprocessing the two images to smooth noise;
S2: converting the preprocessed infrared image and visible light image into the LAB color space respectively, and performing saliency enhancement on each through frequency tuning to obtain saliency response maps;
S3: performing phase consistency detection on the obtained infrared and visible light saliency response maps, and extracting the similar structural edge features shared by the two images;
S4: based on the similar-structure edge features, extracting feature points from the two processed edge images, matching the extracted feature points with a matching algorithm, screening out mismatched points, and calculating the homography transformation matrix between the images to obtain the registered image.
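For orientation, the sketch below strings steps S1 to S4 together. It is a minimal illustration rather than the patented implementation, and the helper functions it calls (preprocess, symmetric_surround_saliency, log_gabor_responses, phase_congruency, edges_from_pc, detect_and_describe, coarse_match, estimate_and_warp) are assumed names whose sketches appear later in the detailed description, not identifiers from the patent.

```python
# Minimal end-to-end sketch of steps S1-S4 (illustrative only; the helper
# functions are assumptions sketched later in this document).
def register_ir_vis(ir_bgr, vis_bgr):
    ir_p, vis_p = preprocess(ir_bgr, vis_bgr)                      # S1: resize, gray, smooth
    ir_sal = symmetric_surround_saliency(ir_p)                     # S2: saliency response maps
    vis_sal = symmetric_surround_saliency(vis_p)
    ir_amp, ir_ph = log_gabor_responses(ir_sal)                    # S3: log-Gabor responses
    vis_amp, vis_ph = log_gabor_responses(vis_sal)
    ir_edge = edges_from_pc(phase_congruency(ir_amp, ir_ph))       # S3: similar-structure edges
    vis_edge = edges_from_pc(phase_congruency(vis_amp, vis_ph))
    kp_ir, d_ir = detect_and_describe(ir_edge)                     # S4: FAST corners + BRIEF
    kp_vis, d_vis = detect_and_describe(vis_edge)
    matches = coarse_match(d_ir, d_vis)                            # S4: Hamming matching
    return estimate_and_warp(kp_ir, kp_vis, matches, ir_p, vis_p)  # S4: RANSAC + homography
```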
Further, the preprocessing operation in step S1 includes the following steps:
A1: acquiring infrared and visible light images of the same scene with an infrared sensor and a visible light sensor as input, and resizing the input source images so that the two images have the same size;
A2: converting the images to grayscale, and then smoothing image noise with a 3×3 or 5×5 Gaussian convolution kernel to obtain the preprocessed images.
Further, the step S2 specifically includes:
B1: converting the obtained preprocessed image into an LAB color space, and calculating an average LAB pixel value of a symmetrical region subgraph of each pixel point;
B2: calculating the saliency response value of each pixel in the image to obtain the processed saliency map.
Further, the average LAB pixel value of the symmetric-region subgraph of each pixel point in the step B1 is calculated as follows:

$$I_u(x,y)=\frac{1}{A}\sum_{i=x-x_0}^{x+x_0}\;\sum_{j=y-y_0}^{y+y_0} I(i,j)$$

where $x_0$ and $y_0$ are the offsets of the symmetric region, and $x_0$, $y_0$ and the area $A$ are calculated as follows:

$$x_0=\min(x,\,w-x),\qquad y_0=\min(y,\,h-y),\qquad A=(2x_0+1)(2y_0+1)$$

where $w$ and $h$ are the image width and height.
The saliency response value of a pixel in the step B2 is calculated as follows:

$$S(x,y)=\lVert I_u(x,y)-I_f(x,y)\rVert$$

where $I_f$ is the average LAB pixel value of the symmetric-region subgraph after Gaussian filtering of the original image. Because computing the average pixel value of the symmetric-region subgraph requires traversing every pixel in the image, an integral image is used to reduce the time complexity of the algorithm and speed up data processing, yielding the saliency response map.
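As a concrete illustration of the offsets, consider a pixel in an example image (the 640×480 size and the pixel location below are assumed values chosen for the example, not taken from the patent):

```latex
% Worked example with assumed values: w = 640, h = 480, (x, y) = (100, 200)
x_0 = \min(100,\ 640-100) = 100, \qquad y_0 = \min(200,\ 480-200) = 200
A = (2\cdot 100 + 1)(2\cdot 200 + 1) = 201 \times 401 = 80601
I_u(100,200) = \frac{1}{80601}\sum_{i=0}^{200}\ \sum_{j=0}^{400} I(i,j)
```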
Further, the extraction process of the similar-structure edge features in the step S3 specifically includes the following steps:
C1: performing Fourier transform on the saliency response maps, filtering the transformed images with Log-Gabor filters at multiple scales and orientations, and calculating the amplitude and phase response values of the image pixels at different orientations and scales;
C2: calculating the phase consistency values of the image pixels in each orientation to obtain the phase consistency moments, and extracting the edge structures in the images according to how the phase consistency moments vary with orientation.
Further, the feature point extraction and registration process in the step S4 is specifically as follows:
D1: extracting FAST key corner points on the similar-structure edges of the infrared and visible light images, creating binary feature vectors for the key points with the BRIEF algorithm, and encoding the feature vectors with 0s and 1s to generate binary string descriptors;
D2: calculating the Hamming distances between the two feature point sets, applying non-maximum suppression to the obtained distances, and brute-force matching all points to obtain a coarse matching point set;
D3: randomly selecting 4 feature points from the coarse matching point set with the RANSAC algorithm and iterating to obtain the optimal parameter model, under which the number of matched feature points is maximal, and transforming the image with the optimal homography matrix to achieve registration.
Further, the amplitude and phase response values of the image pixels at different orientations and scales in the step C1 are calculated as follows:

$$A_{so}(x,y)=\sqrt{\big(L(x,y)*G_{so}\big)^{2}+\big(L(x,y)*G'_{so}\big)^{2}}$$

$$\Phi_{so}(x,y)=\tan^{-1}\!\big(L(x,y)*G_{so}\,/\,L(x,y)*G'_{so}\big)$$

where $L(x,y)$ is the image being filtered, $*$ denotes convolution, $A_{so}$ and $\Delta\Phi_{so}$ are the amplitude and phase response values of a pixel, $s$ and $o$ denote the scale and orientation factors respectively, and $G_{so}$ and $G'_{so}$ denote the even- and odd-symmetric Log-Gabor filter pair.
Further, the phase consistency measure value in the step C2 is calculated as follows:

$$PC(x,y)=\frac{\sum_{o}\sum_{s} W_{o}(x,y)\,\big\lfloor A_{so}(x,y)\,\Delta\Phi_{so}(x,y)-T\big\rfloor}{\sum_{o}\sum_{s} A_{so}(x,y)+\varepsilon}$$

where $s$ is the scale factor, $o$ is the orientation factor, $A_{so}$ and $\Delta\Phi_{so}$ are the amplitude and phase response values of the pixel, $W_{o}$ is the filter orientation weight, $T$ is the noise threshold, $\varepsilon$ is a small constant that prevents division by zero, and $\lfloor\cdot\rfloor$ denotes the operator that sets non-positive values to zero.
An infrared and visible light image registration system, the system comprising a network interface, a memory and a processor; wherein,
the network interface is used for receiving and sending signals when exchanging information with other external network elements;
the memory is used for storing computer program instructions capable of running on the processor;
the processor is configured to carry out the steps of the above infrared and visible light image registration method when executing the computer program instructions.
A computer storage medium storing a program of the infrared and visible light image registration method, which, when executed by at least one processor, implements the steps of that method.
The beneficial effects are that: compared with the prior art, the invention has the following advantages:
(1) The invention performs registration using image structure information, which solves the problem that traditional methods have difficulty extracting effective features from infrared and visible heterogeneous images with large differences, and it can be deployed on an embedded platform.
(2) The invention can simultaneously enhance and extract the similar structural edge features of the infrared and visible light images, so more useful information is retained while feature points are extracted quickly, effectively improving registration accuracy.
(3) The method requires little computation, achieves high precision and needs only a simple system configuration; it can run on an embedded platform to carry out tasks based on infrared and visible light registration and fusion in fields such as military reconnaissance and target recognition and tracking, and therefore has broad market prospects and application value.
Drawings
FIG. 1 is a basic flow chart of the present invention;
Fig. 2 is a block diagram of an embedded platform to which the present invention is applied.
Detailed Description
The present application is further illustrated by the accompanying drawings and the following detailed description, which should be understood as merely illustrating the application rather than limiting its scope. After reading the application, various equivalent modifications made by those skilled in the art fall within the scope defined by the appended claims.
As shown in fig. 1, the present invention provides a method for registering infrared and visible light images, comprising the following steps:
S1: acquiring an infrared image and a visible light image of the same scene, and preprocessing the two images to smooth noise;
S2: converting the preprocessed infrared image and visible light image into the LAB color space respectively, and performing saliency enhancement on each through frequency tuning to obtain saliency response maps;
S3: performing phase consistency detection on the obtained infrared and visible light saliency response maps, and extracting the similar structural edge features shared by the two images;
S4: based on the similar-structure edge features, extracting feature points from the two processed edge images, matching the extracted feature points with a matching algorithm, screening out mismatched points, and calculating the homography transformation matrix between the images to obtain the registered image.
As shown in Fig. 2, in this embodiment the above image registration is carried out on an embedded platform.
The preprocessing operation in step S1 includes the following steps:
A1: acquiring infrared and visible light images of the same scene with an infrared sensor and a visible light sensor as input, and resizing the input source images so that the two images have the same size;
A2: converting the images to grayscale, and then smoothing image noise with a 3×3 or 5×5 Gaussian convolution kernel to obtain the preprocessed images.
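A minimal sketch of this preprocessing is given below, assuming OpenCV; the kernel size and interpolation mode are illustrative choices, not values prescribed by the patent.

```python
# Steps A1-A2 (sketch): resize the infrared image to match the visible-light
# image, convert both to grayscale, and smooth with a Gaussian kernel.
import cv2

def to_gray(img):
    # Tolerate inputs that are already single-channel (common for infrared)
    return img if img.ndim == 2 else cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def preprocess(ir, vis, ksize=5):
    h, w = vis.shape[:2]
    ir = cv2.resize(ir, (w, h), interpolation=cv2.INTER_LINEAR)    # A1: match sizes
    ir_smooth = cv2.GaussianBlur(to_gray(ir), (ksize, ksize), 0)   # A2: gray + smooth
    vis_smooth = cv2.GaussianBlur(to_gray(vis), (ksize, ksize), 0)
    return ir_smooth, vis_smooth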
The step S2 specifically comprises the following steps:
B1: converting the obtained preprocessed image into an LAB color space, and calculating an average LAB pixel value of a symmetrical region subgraph of each pixel point;
The average LAB pixel value of the symmetric-region subgraph of each pixel point is calculated as follows:

$$I_u(x,y)=\frac{1}{A}\sum_{i=x-x_0}^{x+x_0}\;\sum_{j=y-y_0}^{y+y_0} I(i,j)$$

where $x_0$ and $y_0$ are the offsets of the symmetric region, and $x_0$, $y_0$ and the area $A$ are calculated as follows:

$$x_0=\min(x,\,w-x),\qquad y_0=\min(y,\,h-y),\qquad A=(2x_0+1)(2y_0+1)$$

where $w$ and $h$ are the image width and height.
B2: calculating the saliency response value of each pixel in the image to obtain the processed saliency map.
The saliency response value of a pixel in the step B2 is calculated as follows:

$$S(x,y)=\lVert I_u(x,y)-I_f(x,y)\rVert$$

where $I_f$ is the average LAB pixel value of the symmetric-region subgraph after Gaussian filtering of the original image. Because computing the average pixel value of the symmetric-region subgraph requires traversing every pixel in the image, an integral image is used to reduce the time complexity of the algorithm and speed up data processing, yielding the saliency response map.
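A minimal NumPy/OpenCV sketch of steps B1-B2 is given below. It takes $I_f$ as the Gaussian-filtered LAB image (a frequency-tuned, maximum-symmetric-surround style formulation), accepts either a color or a grayscale input, and uses an integral image for the box sums as suggested above; the kernel size and output normalization are illustrative assumptions.

```python
# Steps B1-B2 (sketch): mean LAB value of each pixel's maximal symmetric
# surround, computed with an integral image, compared against the
# Gaussian-filtered LAB image. Parameter values are illustrative assumptions.
import cv2
import numpy as np

def symmetric_surround_saliency(img):
    if img.ndim == 2:                                  # grayscale input from preprocessing
        img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float64)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)         # I_f: Gaussian-filtered LAB image
    h, w = lab.shape[:2]
    integral = cv2.integral(lab)                       # (h+1, w+1, 3) summed-area table

    ys, xs = np.mgrid[0:h, 0:w]
    x0 = np.minimum(xs, w - 1 - xs)                    # symmetric-surround offsets
    y0 = np.minimum(ys, h - 1 - ys)
    x1, x2 = xs - x0, xs + x0 + 1                      # box bounds in integral coordinates
    y1, y2 = ys - y0, ys + y0 + 1
    area = ((2 * x0 + 1) * (2 * y0 + 1)).astype(np.float64)

    box_sum = (integral[y2, x2] - integral[y1, x2]
               - integral[y2, x1] + integral[y1, x1])
    i_u = box_sum / area[..., None]                    # I_u: mean LAB of symmetric surround
    sal = np.linalg.norm(i_u - blurred, axis=2)        # S(x, y) = ||I_u - I_f||
    return cv2.normalize(sal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```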
The extraction process of the edge features of the similarity structure in the step S3 is specifically as follows:
C1: performing Fourier transform on the saliency response maps, filtering the transformed images with Log-Gabor filters at multiple scales and orientations, and calculating the amplitude and phase response values of the image pixels at different orientations and scales;
In the step C1, the amplitude and phase response values of the image pixels at different orientations and scales are calculated as follows:

$$A_{so}(x,y)=\sqrt{\big(L(x,y)*G_{so}\big)^{2}+\big(L(x,y)*G'_{so}\big)^{2}}$$

$$\Phi_{so}(x,y)=\tan^{-1}\!\big(L(x,y)*G_{so}\,/\,L(x,y)*G'_{so}\big)$$

where $L(x,y)$ is the image being filtered, $*$ denotes convolution, $A_{so}$ and $\Delta\Phi_{so}$ are the amplitude and phase response values of a pixel, $s$ and $o$ denote the scale and orientation factors respectively, and $G_{so}$ and $G'_{so}$ denote the even- and odd-symmetric Log-Gabor filter pair.
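A sketch of this filtering step follows: the filter bank is built in the frequency domain, and the real and imaginary parts of the inverse transform play the role of the even- and odd-symmetric filter responses, so their magnitude and angle give $A_{so}$ and $\Phi_{so}$. The number of scales and orientations, the wavelengths, and the bandwidth parameter are illustrative values, not values specified by the patent.

```python
# Step C1 (sketch): log-Gabor filtering at several scales and orientations,
# returning per-pixel amplitude and phase responses. Parameters are illustrative.
import numpy as np

def log_gabor_responses(img, n_scales=4, n_orient=6,
                        min_wavelength=3.0, mult=2.1, sigma_on_f=0.55):
    rows, cols = img.shape
    IM = np.fft.fft2(img.astype(np.float64))

    # Normalised frequency coordinates in FFT layout (DC term at index [0, 0])
    u = np.fft.ifftshift((np.arange(cols) - cols // 2) / cols)
    v = np.fft.ifftshift((np.arange(rows) - rows // 2) / rows)
    U, V = np.meshgrid(u, v)
    radius = np.sqrt(U**2 + V**2)
    radius[0, 0] = 1.0                                  # avoid log(0) at the DC term
    theta = np.arctan2(-V, U)

    amplitude = np.zeros((n_scales, n_orient, rows, cols))
    phase = np.zeros_like(amplitude)

    for o in range(n_orient):
        angle = o * np.pi / n_orient
        d_theta = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
        spread = np.exp(-d_theta**2 / (2 * (np.pi / n_orient * 1.5)**2))
        for s in range(n_scales):
            f0 = 1.0 / (min_wavelength * mult**s)
            log_gabor = np.exp(-(np.log(radius / f0))**2
                               / (2 * np.log(sigma_on_f)**2))
            log_gabor[0, 0] = 0.0                       # zero DC response
            # Complex response: real part = even filter, imaginary part = odd filter
            eo = np.fft.ifft2(IM * log_gabor * spread)
            amplitude[s, o] = np.abs(eo)                # A_so
            phase[s, o] = np.angle(eo)                  # Phi_so
    return amplitude, phase
```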
C2: calculating the phase consistency values of the image pixels in each orientation to obtain the phase consistency moments, and extracting the edge structures in the images according to how the phase consistency moments vary with orientation.
The phase consistency measure value in the step C2 is calculated as follows:

$$PC(x,y)=\frac{\sum_{o}\sum_{s} W_{o}(x,y)\,\big\lfloor A_{so}(x,y)\,\Delta\Phi_{so}(x,y)-T\big\rfloor}{\sum_{o}\sum_{s} A_{so}(x,y)+\varepsilon}$$

where $s$ is the scale factor, $o$ is the orientation factor, $A_{so}$ and $\Delta\Phi_{so}$ are the amplitude and phase response values of the pixel, $W_{o}$ is the filter orientation weight, $T$ is the noise threshold, $\varepsilon$ is a small constant that prevents division by zero, and $\lfloor\cdot\rfloor$ denotes the operator that sets non-positive values to zero.
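The sketch below combines the per-scale responses into a per-orientation phase-consistency map. It is simplified on purpose: the orientation weight $W_o$ is omitted, and the maximum over orientations is used as a stand-in for the moment analysis; the noise threshold, $\varepsilon$ and the edge threshold are illustrative values.

```python
# Step C2 (sketch): per-orientation phase congruency and a simple edge map.
# W_o is omitted and max-over-orientations replaces the moment analysis.
import numpy as np

def phase_congruency(amplitude, phase, noise_T=0.5, eps=1e-4):
    n_scales, n_orient, rows, cols = amplitude.shape
    pc = np.zeros((n_orient, rows, cols))
    for o in range(n_orient):
        A = amplitude[:, o]
        # Mean phase at each pixel from the amplitude-weighted complex sum
        mean_phase = np.arctan2(np.sum(A * np.sin(phase[:, o]), axis=0),
                                np.sum(A * np.cos(phase[:, o]), axis=0))
        # Phase deviation term: cos(phi - mean) - |sin(phi - mean)|
        d_phi = (np.cos(phase[:, o] - mean_phase)
                 - np.abs(np.sin(phase[:, o] - mean_phase)))
        energy = np.sum(A * d_phi, axis=0)
        energy = np.maximum(energy - noise_T, 0.0)      # zero out non-positive values
        pc[o] = energy / (np.sum(A, axis=0) + eps)
    return pc

def edges_from_pc(pc, thresh=0.3):
    # Stand-in for the moment analysis: keep pixels whose best-orientation PC is high
    return (pc.max(axis=0) > thresh).astype(np.uint8) * 255
```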
The feature point extraction and registration process in step S4 is specifically as follows:
D1: in the obtained edge structure maps, key points are selected on the edge structures of the infrared and visible light images respectively. The FAST algorithm is used to search for distinctive regions in the image, and corner points whose pixel values change sharply from light to dark are selected as the key points of the image. For a given pixel p, the 16 pixels on the circle around it are compared with p, and each is classified as brighter than p, darker than p, or similar to p within a certain threshold, and the candidate points are screened accordingly. Binary feature vectors are then created for the key points with the BRIEF algorithm, and the feature vectors are encoded with 0s and 1s to generate binary string descriptors;
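A minimal OpenCV sketch of this step is shown below. The BRIEF extractor lives in the opencv-contrib package (cv2.xfeatures2d); ORB from the main OpenCV build, which combines oriented FAST with rotated BRIEF, is a common substitute. The FAST threshold is an illustrative value.

```python
# Step D1 (sketch): FAST corners on the edge map plus BRIEF binary descriptors.
import cv2

def detect_and_describe(edge_img, fast_threshold=20):
    fast = cv2.FastFeatureDetector_create(threshold=fast_threshold,
                                          nonmaxSuppression=True)
    keypoints = fast.detect(edge_img, None)
    # Requires opencv-contrib-python; cv2.ORB_create() is a possible stand-in
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()
    keypoints, descriptors = brief.compute(edge_img, keypoints)
    return keypoints, descriptors   # descriptors: binary strings packed as uint8 rows
```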
D2: the key points and descriptors selected from the infrared and visible light edge images are treated as two feature point sets; the Hamming distance between each point in one set and every point in the other set is calculated, non-maximum suppression is applied to the obtained distances, and all points are brute-force matched to obtain a coarse matching point set;
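A sketch of the coarse matching is given below. Lowe's ratio test is used here as a simple stand-in for the suppression of ambiguous matches described above, and the 0.8 ratio is an illustrative value.

```python
# Step D2 (sketch): brute-force Hamming matching between the two binary
# descriptor sets, keeping only unambiguous matches via a ratio test.
import cv2

def coarse_match(desc_ir, desc_vis, ratio=0.8):
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = bf.knnMatch(desc_ir, desc_vis, k=2)
    return [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
```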
D3: assume that the transformation model between the infrared and visible images of the same scene is:

$$\begin{bmatrix} x' \\ y' \\ 1\end{bmatrix}\sim\begin{bmatrix} h_{11} & h_{12} & h_{13}\\ h_{21} & h_{22} & h_{23}\\ h_{31} & h_{32} & h_{33}\end{bmatrix}\begin{bmatrix} x \\ y \\ 1\end{bmatrix}$$

where the 3×3 matrix is the optimal homography matrix and $(x,y)$ and $(x',y')$ are corresponding point coordinates in the two images. The RANSAC algorithm randomly selects 4 feature points from the coarse matching point set and iterates to obtain the optimal parameter model, i.e. the model under which the number of matched feature points is maximal. The image can then be transformed with the optimal homography matrix to achieve registration.
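A sketch of this final step with OpenCV is shown below; cv2.findHomography with the RANSAC flag internally iterates over random minimal sets of 4 correspondences, which matches the procedure described above. The reprojection threshold is an illustrative value.

```python
# Step D3 (sketch): RANSAC homography estimation and warping of the infrared
# image into the visible-light frame.
import cv2
import numpy as np

def estimate_and_warp(kp_ir, kp_vis, matches, ir_img, vis_img, reproj_thresh=3.0):
    src = np.float32([kp_ir[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_vis[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    h, w = vis_img.shape[:2]
    registered = cv2.warpPerspective(ir_img, H, (w, h))   # aligned infrared image
    return H, registered, inlier_mask
```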
The embodiment also provides an infrared and visible light image registration system, which comprises a network interface, a memory and a processor; the network interface is used for receiving and sending signals when exchanging information with other external network elements; the memory stores computer program instructions executable on the processor; and the processor is used for carrying out the steps of the registration method described above when executing the computer program instructions.
The present embodiment also provides a computer storage medium disposed in the embedded platform, the computer storage medium storing a computer program, which when executed by a processor, implements the method described above. The computer-readable medium may be considered tangible and non-transitory. Non-limiting examples of non-transitory tangible computer readable media include non-volatile memory circuits (e.g., flash memory circuits, erasable programmable read-only memory circuits, or masked read-only memory circuits), volatile memory circuits (e.g., static random access memory circuits or dynamic random access memory circuits), magnetic storage media (e.g., analog or digital magnetic tape or hard disk drives), and optical storage media (e.g., CDs, DVDs, or blu-ray discs), among others. The computer program includes processor-executable instructions stored on at least one non-transitory tangible computer-readable medium. The computer program may also include or be dependent on stored data. The computer programs may include a basic input/output system (BIOS) that interacts with the hardware of the special purpose computer, device drivers that interact with particular devices of the special purpose computer, one or more operating systems, user applications, background services, background applications, and so forth.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Claims (8)

1. An infrared and visible light image registration method, characterized by comprising the following steps:
S1: acquiring an infrared image and a visible light image of the same scene, and preprocessing the infrared image and the visible light image;
S2: converting the preprocessed infrared image and visible light image into the LAB color space respectively, and performing saliency enhancement on each through frequency tuning to obtain saliency response maps;
S3: performing phase consistency detection on the obtained infrared and visible light saliency response maps, and extracting the similar structural edge features of the two images;
S4: based on the similar-structure edge features, extracting feature points from the processed similar-structure edge images respectively, matching the extracted feature points with a matching algorithm, screening out mismatched points, and calculating the registered image;
The step S2 specifically comprises the following steps:
B1: converting the obtained preprocessed image into an LAB color space, and calculating an average LAB pixel value of a symmetrical region subgraph of each pixel point;
B2: calculating a saliency response value of each pixel point in the image to obtain the processed saliency map;
the average LAB pixel value of the symmetric-region subgraph of each pixel point in the step B1 is calculated as follows:

$$I_u(x,y)=\frac{1}{A}\sum_{i=x-x_0}^{x+x_0}\;\sum_{j=y-y_0}^{y+y_0} I(i,j)$$

where $x_0$ and $y_0$ are the offsets of the symmetric region, and $x_0$, $y_0$ and the area $A$ are calculated as follows:

$$x_0=\min(x,\,w-x),\qquad y_0=\min(y,\,h-y),\qquad A=(2x_0+1)(2y_0+1)$$

where $w$ and $h$ are the image width and height;
the saliency response value of a pixel in the step B2 is calculated as follows:

$$S(x,y)=\lVert I_u(x,y)-I_f(x,y)\rVert$$

where $I_f$ is the average LAB pixel value of the symmetric-region subgraph after Gaussian filtering of the original image.
2. The method for registering infrared and visible light images according to claim 1, wherein the preprocessing operation in step S1 comprises the following steps:
A1: acquiring infrared and visible light images of the same scene with an infrared sensor and a visible light sensor as input, and resizing the input source images so that the two images have the same size;
A2: converting the images to grayscale, and then smoothing image noise with a Gaussian convolution kernel to obtain the preprocessed images.
3. The method for registering infrared and visible light images according to claim 1, wherein the extracting process of the similarity structure edge feature in the step S3 is specifically as follows:
C1: performing Fourier transform on the saliency response maps, filtering the transformed images with Log-Gabor filters at multiple scales and orientations, and calculating the amplitude and phase response values of the image pixels at different orientations and scales;
C2: calculating the phase consistency values of the image pixels in each orientation to obtain the phase consistency moments, and extracting the edge structures in the images according to how the phase consistency moments vary with orientation.
4. The method for registering infrared and visible light images according to claim 1, wherein the feature point extraction and registration process in step S4 is specifically as follows:
D1: extracting FAST key corner points on the similar-structure edges of the infrared and visible light images, creating binary feature vectors for the key points with the BRIEF algorithm, and encoding the feature vectors to generate binary string descriptors;
D2: calculating the Hamming distances between the two feature point sets, applying non-maximum suppression to the obtained distances, and brute-force matching all points to obtain a coarse matching point set;
D3: randomly selecting several feature points from the coarse matching point set with the RANSAC algorithm and iterating to obtain the optimal parameter model, under which the number of matched feature points is maximal, and transforming the image with the optimal homography matrix to achieve registration.
5. The method for registering infrared and visible light images according to claim 3, wherein the amplitude and phase response values of the image pixels at different orientations and scales in the step C1 are calculated as follows:

$$A_{so}(x,y)=\sqrt{\big(L(x,y)*G_{so}\big)^{2}+\big(L(x,y)*G'_{so}\big)^{2}}$$

$$\Phi_{so}(x,y)=\tan^{-1}\!\big(L(x,y)*G_{so}\,/\,L(x,y)*G'_{so}\big)$$

where $L(x,y)$ is the image being filtered, $*$ denotes convolution, $A_{so}$ and $\Delta\Phi_{so}$ are the amplitude and phase response values of a pixel, $s$ and $o$ denote the scale and orientation factors respectively, and $G_{so}$ and $G'_{so}$ denote the even- and odd-symmetric Log-Gabor filter pair.
6. The method for registering infrared and visible light images according to claim 3, wherein the phase consistency measure value in the step C2 is calculated as follows:

$$PC(x,y)=\frac{\sum_{o}\sum_{s} W_{o}(x,y)\,\big\lfloor A_{so}(x,y)\,\Delta\Phi_{so}(x,y)-T\big\rfloor}{\sum_{o}\sum_{s} A_{so}(x,y)+\varepsilon}$$

where $s$ is the scale factor, $o$ is the orientation factor, $A_{so}$ and $\Delta\Phi_{so}$ are the amplitude and phase response values of the pixel, $W_{o}$ is the filter orientation weight, $T$ is the noise threshold, $\varepsilon$ is a small constant that prevents division by zero, and $\lfloor\cdot\rfloor$ denotes the operator that sets non-positive values to zero.
7. An infrared and visible image registration system, characterized by: the system includes a network interface, a memory, and a processor; wherein,
The network interface is used for receiving and transmitting signals in the process of receiving and transmitting information with other external network elements;
the memory is used for storing computer program instructions capable of running on the processor;
The processor, when executing the computer program instructions, is configured to perform the steps of an infrared and visible image registration method as claimed in any one of claims 1 to 6.
8. A computer storage medium, characterized by: the computer storage medium stores a program of an infrared and visible light image registration method, which when executed by at least one processor implements the steps of the infrared and visible light image registration method according to any one of claims 1 to 6, and the computer storage medium is applied to an embedded platform.
CN202110270086.XA 2021-03-12 2021-03-12 Infrared and visible light image registration method, system and storage medium Active CN112950685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110270086.XA CN112950685B (en) 2021-03-12 2021-03-12 Infrared and visible light image registration method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110270086.XA CN112950685B (en) 2021-03-12 2021-03-12 Infrared and visible light image registration method, system and storage medium

Publications (2)

Publication Number Publication Date
CN112950685A CN112950685A (en) 2021-06-11
CN112950685B true CN112950685B (en) 2024-04-26

Family

ID=76229623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110270086.XA Active CN112950685B (en) 2021-03-12 2021-03-12 Infrared and visible light image registration method, system and storage medium

Country Status (1)

Country Link
CN (1) CN112950685B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344987A (en) * 2021-07-07 2021-09-03 华北电力大学(保定) Infrared and visible light image registration method and system for power equipment under complex background
CN115474033A (en) * 2022-09-19 2022-12-13 卓谨信息科技(常州)有限公司 Method for realizing virtual screen for intelligent recognition

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447704A (en) * 2016-10-13 2017-02-22 西北工业大学 A visible light-infrared image registration method based on salient region features and edge degree

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447704A (en) * 2016-10-13 2017-02-22 西北工业大学 A visible light-infrared image registration method based on salient region features and edge degree

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pedestrian detection combined with phase consistency; 曹继 et al.; 《机械设计与制造》 (Machinery Design & Manufacture); pp. 91-94 *

Also Published As

Publication number Publication date
CN112950685A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
Yu et al. Efficient patch-wise non-uniform deblurring for a single image
CN112950685B (en) Infrared and visible light image registration method, system and storage medium
CN109064504B (en) Image processing method, apparatus and computer storage medium
CN109410246B (en) Visual tracking method and device based on correlation filtering
CN111681198A (en) Morphological attribute filtering multimode fusion imaging method, system and medium
Deshpande et al. A novel modified cepstral based technique for blind estimation of motion blur
Chen et al. A novel infrared small target detection method based on BEMD and local inverse entropy
Medouakh et al. Improved object tracking via joint color-LPQ texture histogram based mean shift algorithm
CN110516731B (en) Visual odometer feature point detection method and system based on deep learning
Lou et al. Nonlocal similarity image filtering
CN109903246B (en) Method and device for detecting image change
CN112926516B (en) Robust finger vein image region-of-interest extraction method
CN111091107A (en) Face region edge detection method and device and storage medium
Özkan et al. A novel multi-scale and multi-expert edge detector based on common vector approach
Zin et al. Local image denoising using RAISR
CN111311610A (en) Image segmentation method and terminal equipment
CN113066030B (en) Multispectral image panchromatic sharpening method and system based on space-spectrum fusion network
Ashiba Dark infrared night vision imaging proposed work for pedestrian detection and tracking
Sliti et al. Efficient visual tracking via sparse representation and back-projection histogram
CN109815791B (en) Blood vessel-based identity recognition method and device
Shankar et al. Object oriented fuzzy filter for noise reduction of Pgm images
Nayagi et al. An efficiency correlation between various image fusion techniques
CN111178111A (en) Two-dimensional code detection method, electronic device, storage medium and system
CN112711748B (en) Finger vein identity authentication method and device, electronic equipment and storage medium
CN110634107B (en) Standardization method for enhancing brain magnetic resonance image brightness aiming at T1

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant