CN111583315A - Novel visible light image and infrared image registration method and device - Google Patents
- Publication number
- CN111583315A (application CN202010329377.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- infrared
- visible light
- gray
- edge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/35—Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10052—Images from lightfield camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Probability & Statistics with Applications (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
A novel visible light image and infrared image registration method and device are provided. The method comprises: step 1, collecting a visible light image original and an infrared image original; step 2, graying the visible light image original and the infrared image original to obtain a visible light grayscale image and an infrared grayscale image; step 3, respectively extracting edge information of the visible light grayscale image and the infrared grayscale image to obtain their edge images; step 4, performing horizontal and vertical translation traversal on the edge image of the infrared grayscale image, calculating, after each translation, the degree of coincidence between the edge image of the infrared grayscale image and the edge image of the visible light grayscale image in the current translation state, and recording, when the traversal is finished, the horizontal and vertical translation values of the infrared edge image at which the degree of coincidence is maximal; and step 5, translating the infrared image original by the obtained horizontal and vertical translation values and registering the translated infrared image with the visible light image original, completing the registration of the two images.
Description
Technical Field
The invention relates to the field of image registration in image processing application, in particular to a novel visible light image and infrared image registration method and device.
Background
Image registration fusion is an important research direction in image processing, especially in the military field, and accurate detection of military targets in complex environments is a difficult task, and information obtained by picture capture by only using a traditional visible light sensor camera is very limited. With the improvement of computer image processing capability and the development of various sensor technologies, the images obtained by the multiple sensors are fused by using an image fusion technology, so that the shot scene can be comprehensively and clearly understood and identified.
Common sensors include the visible light sensor, the thermal infrared imager, the low-light-level night vision device, the laser imaging radar and the like. The working principles and operating environments of these imaging sensors differ, and the resulting images differ greatly in appearance; by fusing the information from different imaging sensors, their complementary information can be fully utilized. At present, the most common approach is to fuse the image obtained by a visible light sensor with the image obtained by an infrared sensor.
Generally, a visible light image has high spatial-temporal resolution, and can show detailed information such as textures, colors and the like in a scene, but a visible light sensor cannot detect a shielded target, and the imaging effect is greatly influenced by external conditions such as light, weather and the like. The infrared image is formed by receiving infrared radiation emitted by the target, a hidden object can be detected, the anti-interference capability is strong, but the details of the target cannot be shown. In view of the characteristics of the two images, the fusion of the two images can highlight objects in the picture, and is suitable for all-weather detection of hidden targets, so that the fusion of the visible light image and the infrared image has wide application in the fields of military, safety monitoring and the like.
Due to the limitation of the imaging principle of the camera, the images formed by the visible light sensor and the infrared sensor are slightly different in space, which affects the effect of image fusion, so that the registration of the visible light image and the infrared image is required before the fusion, and the alignment of the visible light image and the infrared image in space is completed. The image registration accuracy directly determines the impression of image fusion, the manual registration accuracy is high, but the time is long and the real-time performance of continuous registration of the video stream cannot be realized, so the design of an efficient automatic registration algorithm is a key step of the image fusion.
Because the contents expressed by the visible light image and the infrared image differ, directly matching feature points on the two original images works poorly; the only similar part of the two images is the edge information. The currently common automatic registration algorithms for visible light and infrared images are therefore based on image edge extraction: edges are extracted from the visible light image and the infrared image respectively, feature point detection and matching are performed on the edge images, and the transformation matrix between the two images is solved to complete the registration.
When the picture composition is simple, the edges extracted from the visible light image and the infrared image are highly similar, and registration by feature point matching works well. When the complexity of the scene increases, however, the edge extraction results of the two images may differ greatly, which greatly hinders registration; the matching accuracy of traditional feature point detection and matching algorithms such as SIFT, SURF and ORB drops sharply in this situation.
Disclosure of Invention
In view of the defects of the prior art, embodiments of the present invention provide a novel visible light image and infrared image registration method and device that overcome the above problems or at least partially solve them. The specific scheme is as follows:
as a first aspect of the present invention, there is provided a new visible light image and infrared image registration method, the method comprising the steps of:
step 1, collecting a visible light image original image and an infrared image original image;
step 2, graying the visible light image original image and the infrared image original image to obtain a visible light grayscale image and an infrared grayscale image;
step 3, respectively extracting edge information of the visible light gray level image and the infrared gray level image to obtain edge images of the visible light gray level image and the infrared gray level image;
step 4, performing horizontal and vertical translation traversal on the edge image of the infrared grayscale image, calculating, after each translation, the degree of coincidence between the edge image of the infrared grayscale image and the edge image of the visible light grayscale image in the current translation state, and recording, when the traversal is finished, the horizontal and vertical translation values of the infrared edge image at which the degree of coincidence is maximal;
and 5, translating the infrared image original image based on the obtained translation values in the transverse and longitudinal directions, and registering the translated infrared image and the visible light image original image to complete the registration of the two images.
Further, in step 3, edge information of the visible light gray scale image and the infrared gray scale image is respectively extracted by using a sobel operator.
Further, edge information of the visible light gray level image and the infrared gray level image is respectively extracted by using a sobel operator, which is specifically as follows:
let the horizontal and vertical gradient maps of the image I be Gx and Gy respectively; Gx and Gy are obtained by convolving the image I with two odd-sized convolution kernels, and the calculation formula is as follows:

Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * I,  Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * I

where * denotes convolution. The gradient map of the whole image, i.e. the extracted edge information, is obtained by combining the horizontal and vertical gradient maps, and the formula is as follows:

G = sqrt(Gx^2 + Gy^2)
according to the method, the edge information of the visible light gray-scale image and the infrared gray-scale image is extracted.
Further, in step 4, the degree of coincidence between the edge map of the infrared grayscale image and the edge map of the visible light grayscale image is calculated using the PSNR index, and the calculation formula is as follows:

PSNR = 10 * log10(MAX_I^2 / MSE)

where MAX_I is the maximum possible pixel value of the image (255 at 8-bit depth) and MSE is the mean square error of all pixel values of the two edge maps. The larger the PSNR value, the more similar the two images, that is, the higher the degree of coincidence of the edges.
Further, in step 1, the visible light image original and the infrared image original are a visible light image and an infrared image captured in the same scene at the same time, where the visible light image is a three-channel color image, and the infrared image is a single-channel image.
As a second aspect of the present invention, there is provided a new visible light image and infrared image registration apparatus, the apparatus comprising: the device comprises an image acquisition module, a graying processing module, an edge image extraction module, a translation value calculation module, a translation module and a registration module;
the image acquisition module is used for acquiring visible light image original drawings and infrared image original drawings;
the graying processing module is used for graying the visible light image original image and the infrared image original image to obtain a visible light grayscale image and an infrared grayscale image;
the edge image extraction module is used for respectively extracting the edge information of the visible light gray image and the infrared gray image to obtain the edge images of the visible light gray image and the infrared gray image.
The translation value calculation module is used for performing translation traversal in the horizontal and vertical directions on the edge image of the infrared grayscale image, calculating, after each translation, the degree of coincidence between the edge image of the infrared grayscale image and the edge image of the visible light grayscale image in the current translation state, and recording, when the traversal is finished, the horizontal and vertical translation values of the infrared edge image at which the degree of coincidence is maximal;
and the registration module is used for translating the infrared image original image based on the obtained translation values in the transverse and longitudinal directions, and registering the translated infrared image and the visible light image original image to complete registration of the two images.
Further, the edge image extraction module extracts edge information of the visible light gray scale image and the infrared gray scale image respectively by using a sobel operator.
Further, edge information of the visible light gray level image and the infrared gray level image is respectively extracted by using a sobel operator, which is specifically as follows:
let the horizontal and vertical gradient maps of the image I be Gx and Gy respectively; Gx and Gy are obtained by convolving the image I with two odd-sized convolution kernels, and the calculation formula is as follows:

Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * I,  Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * I

where * denotes convolution. The gradient map of the whole image, i.e. the extracted edge information, is obtained by combining the horizontal and vertical gradient maps, and the formula is as follows:

G = sqrt(Gx^2 + Gy^2)
according to the method, the edge information of the visible light gray-scale image and the infrared gray-scale image is extracted.
Further, the translation value calculation module calculates the degree of coincidence between the edge map of the infrared grayscale image and the edge map of the visible light grayscale image using the PSNR index, and the calculation formula is as follows:

PSNR = 10 * log10(MAX_I^2 / MSE)

where MAX_I is the maximum possible pixel value of the image (255 at 8-bit depth) and MSE is the mean square error of all pixel values of the two edge maps. The larger the PSNR value, the more similar the two images, that is, the higher the degree of coincidence of the edges.
Furthermore, the visible light image original image and the infrared image original image are a visible light image and an infrared image shot in the same scene at the same time, wherein the visible light image is a three-channel color image, and the infrared image is a single-channel image.
The invention has the following beneficial effects:
according to the novel visible light image and infrared image registration method and device, after the image edges are extracted, translation traversal is utilized to search the translation amount when the two images are optimally matched, the obtained translation result is used as the registration basis, the PSNR method is adopted to calculate the edge matching degree, the similarity degree of the two images can be accurately and comprehensively reflected, and the problem of inaccurate matching of characteristic points caused by overlarge difference between the visible light edge image and the infrared edge image is solved.
Drawings
Fig. 1 is a flowchart of a new method for registering a visible light image and an infrared image according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention; obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
As shown in fig. 1, as a first embodiment of the present invention, there is provided a new visible light image and infrared image registration method, including the steps of:
step 1, collecting visible light image original drawings and infrared image original drawings.
The visible light image original image and the infrared image original image are a visible light image and an infrared image which are shot in the same scene at the same time, wherein the visible light image is a three-channel color image, and the infrared image is a single-channel image.
Step 2, in order to facilitate processing and edge extraction, graying the visible light image original and the infrared image original to obtain a visible light grayscale image and an infrared grayscale image. In this embodiment, the input visible light image is in YUV format, in which the Y channel represents the brightness of the image; the component extracted from the Y channel is the pixel value of the grayscale image.
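The graying step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation: the function name `rgb_to_gray` is ours, and it assumes an RGB input converted with the BT.601 luma weights that underlie the Y channel of YUV. If the camera already delivers YUV, the Y plane can be used directly with no computation.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Grayscale via the BT.601 luma (Y) weights used by YUV:
    Y = 0.299 R + 0.587 G + 0.114 B (illustrative helper, not from the patent)."""
    weights = np.array([0.299, 0.587, 0.114])
    y = rgb.astype(np.float64) @ weights  # weighted sum over the channel axis
    return np.rint(y).astype(np.uint8)
```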
And 3, respectively extracting edge information of the visible light gray image and the infrared gray image to obtain edge images of the visible light gray image and the infrared gray image.
Specifically, an image edge is a place where the pixel values are in transition: if the image is imagined as a continuous function, it is the place where the rate of change, i.e. the derivative, is largest. An image, however, is a two-dimensional discrete function, so the derivative becomes a difference, which is referred to as the gradient of the image.
In this embodiment, the Sobel operator is used to extract the edge information of the visible light grayscale image and the infrared grayscale image respectively. The Sobel operator is a discrete differential operator that computes an approximation of the gradient of the image gray values; places with larger gradients are more likely to be edges. The Sobel operator combines Gaussian smoothing with differentiation and is also called a first-order differential operator: differentiating in the horizontal and vertical directions yields the gradient images of the image in the x and y directions.
The operator expands the difference with weights: the Sobel operator performs the gradient calculation using two weighted convolution kernels. Let the horizontal and vertical gradient maps of the image I be Gx and Gy respectively; Gx and Gy are obtained by convolving I with two odd-sized convolution kernels, and the calculation formula is as follows:

Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * I,  Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * I

where * denotes the convolution operation and I is the matrix of the grayscale image, i.e. a length × width matrix in which each element is the gray value (0 to 255) of the pixel at the corresponding coordinate. With f(x, y) denoting the pixel value at the point with coordinates (x, y), the convolution result at each point is computed as:

Gx(x, y) = [f(x+1, y-1) + 2 f(x+1, y) + f(x+1, y+1)] - [f(x-1, y-1) + 2 f(x-1, y) + f(x-1, y+1)]

Gy is computed analogously. Once Gx and Gy are obtained, combining the two results at each pixel of the image gives the approximate gradient at that point:

G = sqrt(Gx^2 + Gy^2)
the Sobel operator detects the edge according to the gray weighting difference of the upper, lower, left and right adjacent points of the pixel point, and the phenomenon that the edge reaches an extreme value.
And 4, performing horizontal and vertical translation traversal on the edge image of the infrared grayscale image, calculating, after each translation, the degree of coincidence between the edge image of the infrared grayscale image and the edge image of the visible light grayscale image in the current translation state, and recording, when the traversal is finished, the horizontal and vertical translation values of the infrared edge image at which the degree of coincidence is maximal.
The objective of this step is to find, by traversal, the optimal translation amounts in x and y that make the edge images of the visible light grayscale image and the infrared grayscale image match best: for the two edge images obtained in the previous step, making the edges match best means making the two images most similar. In this embodiment, the peak signal-to-noise ratio (PSNR) index is used to measure the similarity of the two edge images during the traversal.
PSNR is commonly used for quality evaluation of compressed images. Its core idea is to calculate the mean square error over all pixel values of two images. Given two images I1 and I2, both of size m × n, the mean square error (MSE) is defined as:

MSE = (1 / (m * n)) * Σ_{i=0}^{m-1} Σ_{j=0}^{n-1} [I1(i, j) - I2(i, j)]^2

PSNR is then defined as:

PSNR = 10 * log10(MAX_I^2 / MSE)

In the present embodiment, MAX_I is the maximum possible pixel value of the image, which is 255 at 8-bit depth. The larger the PSNR value, the more similar the two images, that is, the higher the degree of coincidence of the edges.
Specifically, the translation traversal starts by setting two variables representing the translation values in x and y, each stepped from -25 to 25 pixels. A double loop implements the traversal: in each iteration, the edge image of the infrared grayscale image is translated by the current loop variables x0 and y0, and the PSNR between the translated image and the edge image of the visible light grayscale image is calculated. When the traversal is finished, the recorded translation amounts at maximum PSNR are the values required for registration.
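The double-loop traversal just described can be sketched as below. This is an illustrative implementation under stated assumptions: the helper names are ours, the shift zero-fills uncovered pixels (the patent does not specify the border handling), and a compact PSNR is inlined so the sketch is self-contained.

```python
import numpy as np

def _psnr(a, b, max_val=255.0):
    """Compact PSNR; identical images score +inf."""
    m = np.mean((a - b) ** 2)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

def shift_image(img, dx, dy):
    """Translate img by dx columns (right) and dy rows (down), zero-filling
    pixels that the shifted content no longer covers."""
    h, w = img.shape
    out = np.zeros_like(img)
    out[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)] = \
        img[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)]
    return out

def best_offset(ir_edges, vis_edges, search=25):
    """Double loop over x/y shifts in [-search, search]; return the shift
    of the infrared edge map that maximizes PSNR against the visible edge map."""
    best_score, best_dx, best_dy = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score = _psnr(shift_image(ir_edges, dx, dy), vis_edges)
            if score > best_score:
                best_score, best_dx, best_dy = score, dx, dy
    return best_dx, best_dy
```

As a sanity check, shifting a synthetic edge map by a known amount and running the search recovers the inverse shift.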
And 5, translating the infrared image original image based on the obtained translation values in the transverse and longitudinal directions, and registering the translated infrared image and the visible light image original image to complete the registration of the two images.
Using the offset_x and offset_y obtained by the traversal in the previous step, the infrared original image is translated to align with the visible light image, which completes the registration process and prepares for image fusion.

The translation can be completed through a simple coordinate transformation:

x' = x + offset_x
y' = y + offset_y

which transforms the original coordinates (x, y) into the new coordinates (x', y').
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. A new visible light image and infrared image registration method, characterized in that it comprises the following steps:
step 1, collecting a visible light image original image and an infrared image original image;
step 2, graying the visible light image original image and the infrared image original image to obtain a visible light grayscale image and an infrared grayscale image;
step 3, respectively extracting edge information of the visible light gray level image and the infrared gray level image to obtain edge images of the visible light gray level image and the infrared gray level image;
step 4, performing horizontal and vertical translation traversal on the edge image of the infrared grayscale image, calculating, after each translation, the degree of coincidence between the edge image of the infrared grayscale image and the edge image of the visible light grayscale image in the current translation state, and recording, when the traversal is finished, the horizontal and vertical translation values of the infrared edge image at which the degree of coincidence is maximal;
and 5, translating the infrared image original image based on the obtained translation values in the transverse and longitudinal directions, and registering the translated infrared image and the visible light image original image to complete the registration of the two images.
2. The new registration method for visible light image and infrared image according to claim 1, wherein in step 3, the sobel operator is used to extract the edge information of the visible light gray scale image and the infrared gray scale image respectively.
3. The new registration method for visible light images and infrared images according to claim 2, wherein the sobel operator is used to extract the edge information of the visible light gray scale image and the infrared gray scale image respectively, specifically as follows:
let the horizontal and vertical gradient maps of the image I be Gx and Gy respectively; Gx and Gy are obtained by convolving the image I with two odd-sized convolution kernels, and the calculation formula is as follows: Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * I, Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * I, where * denotes convolution;
the gradient map of the whole image, i.e. the extracted edge information, is obtained by combining the horizontal and vertical gradient maps, and the formula is as follows: G = sqrt(Gx^2 + Gy^2);
according to the method, the edge information of the visible light gray-scale image and the infrared gray-scale image is extracted.
4. The novel visible light image and infrared image registration method according to claim 1, wherein in step 4, the PSNR index is used to calculate the degree of coincidence between the edge map of the infrared grayscale image and the edge map of the visible light grayscale image, and the calculation formula is as follows: PSNR = 10 * log10(MAX_I^2 / MSE), where MAX_I is the maximum possible pixel value of the image (255 at 8-bit depth) and MSE is the mean square error of all pixel values of the two edge maps; the larger the PSNR value, the more similar the two images, that is, the higher the degree of coincidence of the edges.
5. The method as claimed in claim 1, wherein in step 1, the visible image original and the infrared image original are the visible image and the infrared image captured in the same scene at the same time, wherein the visible image is a three-channel color image, and the infrared image is a single-channel image.
6. A new visible light image and infrared image registration apparatus, characterized in that the apparatus comprises: the device comprises an image acquisition module, a graying processing module, an edge image extraction module, a translation value calculation module, a translation module and a registration module;
the image acquisition module is used for acquiring visible light image original drawings and infrared image original drawings;
the graying processing module is used for graying the visible light image original image and the infrared image original image to obtain a visible light grayscale image and an infrared grayscale image;
the edge image extraction module is used for respectively extracting the edge information of the visible light gray image and the infrared gray image to obtain the edge images of the visible light gray image and the infrared gray image;
the translation value calculation module is used for performing translation traversal in the horizontal and vertical directions on the edge image of the infrared grayscale image, calculating, after each translation, the degree of coincidence between the edge image of the infrared grayscale image and the edge image of the visible light grayscale image in the current translation state, and recording, when the traversal is finished, the horizontal and vertical translation values of the infrared edge image at which the degree of coincidence is maximal;
and the registration module is used for translating the infrared image original image based on the obtained translation values in the transverse and longitudinal directions, and registering the translated infrared image and the visible light image original image to complete registration of the two images.
7. The new visible-light and infrared-image registration apparatus according to claim 6, wherein the edge-image extraction module extracts edge information of the visible-light gray map and the infrared gray map respectively using a sobel operator.
8. The apparatus of claim 7, wherein the sobel operator is used to extract the edge information of the visible light gray scale map and the infrared gray scale map respectively, specifically as follows:
let the horizontal and vertical gradient maps of the image I be Gx and Gy respectively; Gx and Gy are obtained by convolving the image I with two odd-sized convolution kernels, and the calculation formula is as follows: Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * I, Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * I, where * denotes convolution;
the gradient map of the whole image, i.e. the extracted edge information, is obtained by combining the horizontal and vertical gradient maps, and the formula is as follows: G = sqrt(Gx^2 + Gy^2);
according to the method, the edge information of the visible light gray-scale image and the infrared gray-scale image is extracted.
9. The apparatus according to claim 6, wherein the translation value calculation module calculates the degree of coincidence between the edge map of the infrared grayscale image and the edge map of the visible light grayscale image using the PSNR index, and the calculation formula is as follows: PSNR = 10 * log10(MAX_I^2 / MSE), where MAX_I is the maximum possible pixel value of the image (255 at 8-bit depth) and MSE is the mean square error of all pixel values of the two edge maps; the larger the PSNR value, the more similar the two images, that is, the higher the degree of coincidence of the edges.
10. The apparatus of claim 6, wherein the visible-light image original and the infrared image original are a visible-light image and an infrared image captured in a same scene at a same time, wherein the visible-light image is a three-channel color image, and the infrared image is a single-channel image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010329377.7A CN111583315A (en) | 2020-04-23 | 2020-04-23 | Novel visible light image and infrared image registration method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010329377.7A CN111583315A (en) | 2020-04-23 | 2020-04-23 | Novel visible light image and infrared image registration method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111583315A true CN111583315A (en) | 2020-08-25 |
Family
ID=72122605
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010329377.7A Pending CN111583315A (en) | 2020-04-23 | 2020-04-23 | Novel visible light image and infrared image registration method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111583315A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976434A (en) * | 2010-08-27 | 2011-02-16 | 浙江大学 | Frequency domain weighting correlation method for image registration |
CN102982535A (en) * | 2012-11-02 | 2013-03-20 | 天津大学 | Stereo image quality evaluation method based on peak signal to noise ratio (PSNR) and structural similarity (SSIM) |
CN105321172A (en) * | 2015-08-31 | 2016-02-10 | 哈尔滨工业大学 | SAR, infrared and visible light image fusion method |
CN107240096A (en) * | 2017-06-01 | 2017-10-10 | 陕西学前师范学院 | A kind of infrared and visual image fusion quality evaluating method |
CN107862706A (en) * | 2017-11-01 | 2018-03-30 | 天津大学 | A kind of improvement optical flow field model algorithm of feature based vector |
CN108242061A (en) * | 2018-02-11 | 2018-07-03 | 南京亿猫信息技术有限公司 | A kind of supermarket shopping car hard recognition method based on Sobel operators |
CN109146930A (en) * | 2018-09-20 | 2019-01-04 | 河海大学常州校区 | A kind of electric power calculator room equipment is infrared and visible light image registration method |
CN109242891A (en) * | 2018-08-03 | 2019-01-18 | 天津大学 | A kind of method for registering images based on improvement light stream field model |
CN110223330A (en) * | 2019-06-12 | 2019-09-10 | 国网河北省电力有限公司沧州供电分公司 | A kind of method for registering and system of visible light and infrared image |
CN110610463A (en) * | 2019-08-07 | 2019-12-24 | 深圳大学 | Image enhancement method and device |
- 2020-04-23: Application CN202010329377.7A filed in China (CN); publication CN111583315A; status: Pending
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112529987A (en) * | 2020-09-14 | 2021-03-19 | 武汉高德智感科技有限公司 | Method and system for fusing infrared image and visible light image of mobile phone terminal |
CN112529987B (en) * | 2020-09-14 | 2023-05-26 | 武汉高德智感科技有限公司 | Method and system for fusing infrared image and visible light image of mobile phone terminal |
CN113628261A (en) * | 2021-08-04 | 2021-11-09 | 国网福建省电力有限公司泉州供电公司 | Infrared and visible light image registration method in power inspection scene |
CN113628261B (en) * | 2021-08-04 | 2023-09-22 | 国网福建省电力有限公司泉州供电公司 | Infrared and visible light image registration method in electric power inspection scene |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109410207B (en) | NCC (normalized cross-correlation) feature-based unmanned aerial vehicle line inspection image transmission line detection method | |
EP2426642B1 (en) | Method, device and system for motion detection | |
CN111079556A (en) | Multi-temporal unmanned aerial vehicle video image change area detection and classification method | |
CN107993258B (en) | Image registration method and device | |
CN103077521B (en) | A kind of area-of-interest exacting method for video monitoring | |
WO2018023916A1 (en) | Shadow removing method for color image and application | |
CN108470356B (en) | Target object rapid ranging method based on binocular vision | |
CN111709968B (en) | Low-altitude target detection tracking method based on image processing | |
CN109559324A (en) | A kind of objective contour detection method in linear array images | |
CN110414385A (en) | A kind of method for detecting lane lines and system based on homography conversion and characteristic window | |
CN111583315A (en) | Novel visible light image and infrared image registration method and device | |
CN108416798A (en) | A kind of vehicle distances method of estimation based on light stream | |
CN115375733A (en) | Snow vehicle sled three-dimensional sliding track extraction method based on videos and point cloud data | |
CN111028263B (en) | Moving object segmentation method and system based on optical flow color clustering | |
CN116978009A (en) | Dynamic object filtering method based on 4D millimeter wave radar | |
CN110111292B (en) | Infrared and visible light image fusion method | |
CN111161308A (en) | Dual-band fusion target extraction method based on key point matching | |
CN104966283A (en) | Imaging layered registering method | |
CN112613568B (en) | Target identification method and device based on visible light and infrared multispectral image sequence | |
CN110430400B (en) | Ground plane area detection method of binocular movable camera | |
CN115497073A (en) | Real-time obstacle camera detection method based on fusion of vehicle-mounted camera and laser radar | |
CN115690190B (en) | Moving target detection and positioning method based on optical flow image and pinhole imaging | |
CN111833384B (en) | Method and device for rapidly registering visible light and infrared images | |
CN108460722A (en) | A kind of high-resolution wide visual field rate remotely sensed image method and device | |
Shahista et al. | Detection of the traffic light in challenging environmental conditions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |