CN108364272A - A kind of high-performance Infrared-Visible fusion detection method - Google Patents


Info

Publication number
CN108364272A
CN108364272A (application CN201711492960.4A)
Authority
CN
China
Prior art keywords: image, point, infrared, fusion, visible
Prior art date
Legal status
Pending
Application number
CN201711492960.4A
Other languages
Chinese (zh)
Inventor
伍俪璇
谢京洋
Current Assignee
Guangdong Golden Ze Technology Co Ltd
Original Assignee
Guangdong Golden Ze Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Golden Ze Technology Co Ltd filed Critical Guangdong Golden Ze Technology Co Ltd
Priority to CN201711492960.4A priority Critical patent/CN108364272A/en
Publication of CN108364272A publication Critical patent/CN108364272A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/40 Image enhancement or restoration by the use of histogram techniques
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/10048 Infrared image
    • G06T2207/10052 Images from lightfield camera
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention discloses a high-performance infrared and visible light fusion detection method, comprising the following steps: image preprocessing, in which histogram equalization is applied to the infrared image, widening the gray levels that contain many pixels and compressing those that contain few, so as to obtain a clear image; image registration, in which the edges of the infrared and visible images are detected with the Canny operator, an image scale pyramid is then built to extract FAST feature points, and the images are registered; image fusion, in which the infrared and visible images are fused by a fusion algorithm to obtain a grayscale fusion image; and color mapping, in which the components of the visible color image are added to the grayscale fusion image to obtain a color fusion image. The method preserves the accuracy of image fusion while removing noise, retains the advantages of both the infrared and the visible image, and ensures the clarity of the result.

Description

A high-performance infrared and visible light fusion detection method
Technical field
The present invention relates to the field of detection technology, and in particular to a high-performance infrared and visible light fusion detection method.
Background technology
Infrared and visible image fusion technology integrates the information provided by infrared and visible light imagery: it eliminates the redundancy and contradictions that may exist between the two sources, makes them complementary, and reduces uncertainty and ambiguity, thereby enhancing the transparency of the information in the image and improving its precision, reliability and utilization, so that a complete and consistent description of the target is formed. The goal of image fusion is to synthesize information from multiple images of the same scene. A single infrared image has low resolution but can be used day or night; a single visible image has high resolution but images poorly at night and yields blurred information. Image fusion combines the advantages of both, giving the user higher-quality imagery and a better user experience.
Under the conditions of the prior art, however, the presence of noise and other factors means that the fusion accuracy of the infrared and visible images cannot be guaranteed.
Summary of the invention
To overcome the shortcomings of the prior art, the present invention provides a high-performance infrared and visible light fusion detection method that preserves the accuracy of image fusion while removing noise, retains the advantages of both the infrared and the visible image, and ensures the clarity of the image, thereby effectively solving the problems raised in the background section.
The technical solution adopted by the present invention to solve the technical problem is:
A high-performance infrared and visible light fusion detection method comprises the following steps:
S100, image preprocessing: apply histogram equalization to the infrared image, widening the gray levels that contain many pixels and compressing those that contain few, to obtain a clear image;
S200, image registration: detect the edges of the infrared and visible images with the Canny operator, then build an image scale pyramid to extract FAST feature points, and register the images;
S300, image fusion: fuse the infrared and visible images with a fusion algorithm to obtain a grayscale fusion image;
S400, color mapping: add the components of the visible color image to the grayscale fusion image to obtain a color fusion image.
As a preferred technical solution of the present invention, in step S200 the specific registration algorithm is:
S201, extract image edge information: detect the edges of the infrared and visible images with the Canny operator;
S202, extract scale-invariant feature points: build an image scale pyramid of three octave layers and three intra-octave layers, and run FAST feature-point detection on the six resulting images;
S203, match the feature points: extract a descriptor for each feature point and match points by the Hamming distance between their descriptors;
S204, compute the affine transformation matrix: from the matched feature points, determine a six-parameter affine transformation model with the LMedS algorithm; randomly select 3 point pairs among the matches, compute the affine matrix parameters, keep the parameters with the smallest deviation by comparison, and apply the resulting geometric transformation to the infrared image to obtain the registered infrared image;
S205, grayscale fusion: take a weighted average of the corresponding pixels of the registered infrared image and the visible image, with the weight of each image to be fused determined from the principal components obtained by principal component analysis.
As a preferred technical solution of the present invention, FAST feature-point detection is based on the gray values of the image around the candidate point: the pixels around the candidate are examined, and if the number of surrounding pixels whose gray value differs sufficiently from the candidate is N, the candidate is a corner feature point, where
N = sum over x on circle(p) of (|I(x) - I(p)| > eps_d),
I(x) is the gray value of a point on the discrete circle of 16 pixels centered at the candidate feature point p, I(p) is the gray value of the candidate point, and eps_d is the threshold.
As a preferred technical solution of the present invention, the descriptor extraction algorithm is as follows: select a region around the feature point and pick out n_d point pairs in that region; for each pair (p, q), compare the brightness values: if I(p) > I(q), the corresponding position in the binary string is 1; if I(p) < I(q), the corresponding position is -1; otherwise it is 0. After all pairs have been compared, a string of length n_d is produced, called the descriptor of the feature point.
As a preferred technical solution of the present invention, in step S300 the specific grayscale fusion algorithm is:
S301, compute the luminance component I_visible of the visible color image, where I_visible = (R + G + B) / 3;
S302, add the luminance component I_visible directly to the infrared image IR to obtain the grayscale fusion image F, where F = Γ(I_visible, IR) and Γ is the HIS fusion algorithm;
S303, transfer the contrast and brightness of the gray reference image Ref to the grayscale fusion image F to obtain the adjusted grayscale fusion image F*, where
F* = (σ_Ref / σ_F)(F - u_F) + u_Ref,
(u_F, σ_F) are the mean and standard deviation of the grayscale fusion image F, and (u_Ref, σ_Ref) are the mean and standard deviation of the gray reference image Ref.
As a preferred technical solution of the present invention, the method further comprises a correction based on the epipolar geometry constraint, as follows:
First, normalize the given images; select matching point pairs and compute the fundamental matrix; then, in the set of remaining matched pairs, find all pairs that satisfy the constraint, treat them as inliers, and record the number of inliers. Repeat the above steps several times, recording the inlier count each time, and determine the number of samplings; then search the initial matching-point set again for pairs that satisfy the same condition and treat them as the final inliers, i.e. the correct matching pairs; all others are regarded as mismatched pairs and rejected.
Compared with the prior art, the beneficial effect of the invention is that, through the optimized algorithm, the invention preserves the accuracy of image fusion while removing noise, retains the advantages of both the infrared and the visible image, and ensures the clarity of the image.
Description of the drawings
Fig. 1 is a flow diagram of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the present invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, the present invention provides a high-performance infrared and visible light fusion detection method, comprising the following steps:
Step S100, image preprocessing: apply histogram equalization to the infrared image, widening the gray levels that contain many pixels and compressing those that contain few, to obtain a clear image.
In image fusion it is often found that the infrared image has low contrast and a blurred visual appearance, which is unfavorable for later feature-point extraction and matching. In addition, because the images to be fused come from different sensors or are captured under different conditions, there may be differences in resolution, gray range and so on that require preliminary preprocessing such as normalization. The images are therefore preprocessed before registration and fusion to eliminate these influences.
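As a minimal sketch of the equalization described above (the patent gives no implementation; the CDF-based remapping and the 256-level assumption are standard choices, shown here in pure Python on a flat list of gray values):

```python
def equalize_histogram(pixels, levels=256):
    """Histogram equalization: stretch heavily populated gray
    levels and compress sparse ones, as in step S100."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function
    cdf = []
    total = 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)

    def remap(p):
        if n == cdf_min:          # flat image: nothing to equalize
            return p
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [remap(p) for p in pixels]

# toy low-contrast "infrared" strip: values clustered in [100, 103]
img = [100, 100, 101, 101, 101, 102, 102, 103]
eq = equalize_histogram(img)
```

On the toy strip, the crowded levels 100 to 103 are spread across the full 0 to 255 range, which is exactly the widening of heavily populated gray levels that step S100 calls for.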
Step S200, image registration: detect the edges of the infrared and visible images with the Canny operator, then build an image scale pyramid to extract FAST feature points, and register the images.
Image registration is the technical means of resolving the differences between images acquired under different conditions; it is an indispensable step in many high-level image processing pipelines and directly affects the performance of subsequent algorithms. Because of the diversity of images to be registered and the influence of various kinds of interference and degradation, every registration algorithm must consider not only the geometric transformation model between the images but also the influence of noise, the required registration accuracy, and the characteristics of the application data.
Accordingly, image registration comprises four steps. First, feature detection: salient, distinctive objects such as closed regions, boundaries, contours, intersecting lines and corners are usually chosen as image features; for further processing these features are typically represented by control points such as line endpoints or centers of gravity. Second, feature matching: a similarity measure between the features extracted from the reference image and the image to be registered is established; depending on the feature descriptors and similarity measures chosen, different feature matching algorithms result. Third, transformation model estimation: once the type of geometric transformation between the reference image and the image to be registered is determined, the number of parameters is fixed and the corresponding parameter space is established; on the basis of the similarity measure of the previous step, the parameters that optimize the similarity over the parameter space are the parameters of the geometric transformation model. Finally, image transformation and interpolation: after the geometric transformation, the image to be registered generally produces non-integer spatial coordinates, and the gray values at these coordinates cannot be determined directly, so a suitable interpolation method is required.
As a preferred embodiment of the present invention, in step S200 the specific registration algorithm is:
Step S201, extract image edge information: detect the edges of the infrared and visible images with the Canny operator.
The infrared and visible images differ markedly both in gray-level characteristics and in scene detail. If feature points are extracted directly, their distributions differ, and when descriptors are generated, even truly corresponding feature points may receive different descriptors, causing matching to fail. Given the large differences in gray level and texture between the two, it is found that the edge images of the visible and infrared images correlate well, so the edges are detected with the Canny operator to obtain a better result.
Step S202, extract scale-invariant feature points: build an image scale pyramid of three octave layers and three intra-octave layers, and run FAST feature-point detection on the six resulting images.
In this structure the topmost image is equivalent to the bottom image down-sampled six times. FAST feature-point detection is based on the gray values around the candidate point: the pixels on the discrete circle of 16 pixels centered at the candidate p are examined, and if the number of surrounding pixels whose gray value differs sufficiently from the candidate is N, the candidate is a corner feature point, where N = sum over x on circle(p) of (|I(x) - I(p)| > eps_d), I(x) is the gray value of a point on the circle, I(p) is the gray value of the candidate point, and eps_d is the threshold.
The algorithm has the advantages of high speed and good results; non-maximum suppression and sub-pixel interpolation are then applied to obtain a feature-point set with scale invariance.
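The segment test just described can be sketched as follows; the 16 ring offsets are the standard radius-3 discrete circle, and the threshold values are illustrative assumptions, not values taken from the patent:

```python
# offsets of the 16 pixels on a radius-3 Bresenham circle (standard FAST ring)
CIRCLE16 = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
            (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def fast_corner_score(img, x, y, eps_d):
    """Count ring pixels whose gray value differs from the
    candidate point by more than the threshold eps_d."""
    p = img[y][x]
    return sum(1 for dx, dy in CIRCLE16 if abs(img[y + dy][x + dx] - p) > eps_d)

def is_corner(img, x, y, eps_d=20, n_required=12):
    """Candidate is a corner when at least n_required ring pixels differ."""
    return fast_corner_score(img, x, y, eps_d) >= n_required

# toy 7x7 image: one isolated bright point in a dark background
img = [[0] * 7 for _ in range(7)]
img[3][3] = 200   # every ring pixel differs strongly from the center
```

Here every one of the 16 ring pixels differs from the bright center by 200, so the score is the maximum of 16 and the point is reported as a corner, while a flat patch scores 0.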
Step S203, match the feature points: extract a descriptor for each feature point and match points by the Hamming distance between their descriptors.
Here the Hamming distance is a concept from error-control coding in data transmission: it is the number of positions at which two words of equal length differ. If d(x, y) denotes the Hamming distance between two words x and y, it is obtained by XOR-ing the two bit strings and counting the number of 1s in the result.
The descriptor extraction algorithm is as follows: select a region around the feature point and pick out n_d point pairs in that region; for each pair (p, q), compare the brightness values: if I(p) > I(q), the corresponding position in the binary string is 1; if I(p) < I(q), it is -1; otherwise it is 0. After all pairs have been compared, a string of length n_d is produced, called the descriptor of the feature point.
It should be added that, because the cameras may be slightly rotated relative to each other during assembly, a descriptor with rotation invariance could be considered. In this embodiment, however, the cameras are coaxial after assembly and share a field of view, so the pixel positions in the two images correspond. To reduce the runtime of the matching search, the feature points obtained in the infrared image are therefore mapped directly into the visible image, a search radius is defined, and matching is performed within that radius around the mapped point in the visible image.
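A sketch of the descriptor and its comparison, under the assumption that the sampling pattern of offset pairs is fixed and shared by both images; the ternary +1/-1/0 coding follows the text, and hamming_bits shows the XOR-and-count form of the Hamming distance mentioned above:

```python
def describe(img, x, y, pairs):
    """Ternary BRIEF-like descriptor: for each offset pair (p, q) around
    the feature point, emit 1 if I(p) > I(q), -1 if I(p) < I(q), else 0."""
    desc = []
    for (dx1, dy1), (dx2, dy2) in pairs:
        a = img[y + dy1][x + dx1]
        b = img[y + dy2][x + dx2]
        desc.append(1 if a > b else (-1 if a < b else 0))
    return desc

def hamming(d1, d2):
    """Number of positions at which two equal-length descriptors differ."""
    return sum(1 for a, b in zip(d1, d2) if a != b)

def hamming_bits(x, y):
    """Classic bit-string Hamming distance: XOR then count the 1s."""
    return bin(x ^ y).count("1")

# tiny example: 5x5 patch, n_d = 3 sampled pairs (arbitrary assumed offsets)
patch = [[r * 10 + c for c in range(5)] for r in range(5)]
pairs = [((-1, 0), (1, 0)), ((0, -1), (0, 1)), ((1, 1), (1, 1))]
d = describe(patch, 2, 2, pairs)
```

On this monotone patch the first two pairs compare a darker pixel with a brighter one (giving -1) and the degenerate third pair gives 0, so d = [-1, -1, 0]; identical descriptors have distance 0.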
Step S204, compute the affine transformation matrix: from the matched feature points, determine a six-parameter affine transformation model with the LMedS algorithm; randomly select 3 point pairs among the matches, compute the affine matrix parameters, keep the parameters with the smallest deviation by comparison, and apply the resulting geometric transformation to the infrared image to obtain the registered infrared image.
Specifically, 3 point pairs are selected at random from the matched pairs and the affine matrix parameters {a, b, c, d, e, f} are computed; the deviation under these transformation parameters is defined over all match points (x_i, y_i) -> (x'_i, y'_i) as eps = sum_i [(a x_i + b y_i + c - x'_i)^2 + (d x_i + e y_i + f - y'_i)^2]. The selection is repeated M times, giving M deviations, and the parameters {a, b, c, d, e, f} corresponding to the minimum deviation are the resulting affine transformation parameters. These parameters describe the rotation, translation and scale change between the two images. The infrared image is geometrically transformed according to these parameters, and bilinear interpolation is applied during the geometric mapping to obtain the new image and to fill the pixels that are not mapped, yielding the registered infrared image.
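The estimation in step S204 can be sketched as below. Classic LMedS scores each minimal sample by the median of the squared residuals over all matches; the text only says the smallest deviation is kept, so the median criterion here is an assumption, as are the trial count and the Cramer's-rule solver:

```python
import random

def affine_from_3(src, dst):
    """Solve x' = a*x + b*y + c, y' = d*x + e*y + f from three
    correspondences (Cramer's rule on the shared 3x3 system)."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    if abs(det) < 1e-12:
        return None          # degenerate (collinear) sample

    def solve(r1, r2, r3):
        da = r1 * (y2 - y3) - y1 * (r2 - r3) + (r2 * y3 - r3 * y2)
        db = x1 * (r2 - r3) - r1 * (x2 - x3) + (x2 * r3 - x3 * r2)
        dc = (x1 * (y2 * r3 - y3 * r2) - y1 * (x2 * r3 - x3 * r2)
              + r1 * (x2 * y3 - x3 * y2))
        return da / det, db / det, dc / det

    a, b, c = solve(dst[0][0], dst[1][0], dst[2][0])
    d, e, f = solve(dst[0][1], dst[1][1], dst[2][1])
    return a, b, c, d, e, f

def lmeds_affine(matches, trials=50, seed=0):
    """Repeat minimal 3-pair samples; keep the parameters whose
    median squared residual over all matches is smallest."""
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(trials):
        sample = rng.sample(matches, 3)
        params = affine_from_3([s[0] for s in sample], [s[1] for s in sample])
        if params is None:
            continue
        a, b, c, d, e, f = params
        res = sorted(((a * x + b * y + c - u) ** 2 + (d * x + e * y + f - v) ** 2)
                     for (x, y), (u, v) in matches)
        med = res[len(res) // 2]
        if med < best_med:
            best, best_med = params, med
    return best

# exact translation x' = x + 5, y' = y - 3 on a few points
pts = [(0, 0), (1, 0), (0, 1), (2, 3), (5, 1), (4, 4)]
matches = [((x, y), (x + 5, y - 3)) for x, y in pts]
a, b, c, d, e, f = lmeds_affine(matches)
```

On this noise-free translation the recovered parameters are a = e = 1, b = d = 0, c = 5, f = -3; with real matches the median criterion is what lets LMedS tolerate mismatched pairs.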
Step S205, grayscale fusion: take a weighted average of the corresponding pixels of the registered infrared image and the visible image, with the weight of each image to be fused determined from the principal components obtained by principal component analysis.
In this embodiment, if A(x, y) and B(x, y) are the pixels of the two images A and B, the weighted averaging method can be expressed as:
F(x, y) = w_A A(x, y) + w_B B(x, y), where w_A + w_B = 1 and w_A and w_B are the weighting coefficients of the two images. Through principal component analysis the weight of each image to be fused is determined from the principal component, so that the more salient parts of each of the two images are preserved.
The specific algorithm flow is as follows. The covariance matrix of the visible and infrared images is C = [[c11, c12], [c12, c22]], where c11 and c22 are the variances of the visible and infrared images respectively and c12 is their covariance.
The eigenvalues λ1 and λ2 of the covariance matrix are computed by setting det(C - λI) = 0.
After the eigenvalues are obtained, the eigenvector (x, y)^T corresponding to the first principal component is computed; the weight of the visible image is then x / (x + y) and the weight of the infrared image is y / (x + y).
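The weight computation can be sketched as follows on two images flattened to 1-D lists; the 2x2 covariance, the eigenvalue from det(C - λI) = 0, and the normalization of the first eigenvector into weights follow the steps above, while the tie-breaking branch for uncorrelated images is an added assumption:

```python
def pca_fusion_weights(vis, ir):
    """2x2 covariance of the two images; the eigenvector of the largest
    eigenvalue gives the normalized fusion weights (visible, infrared)."""
    n = len(vis)
    mv, mi = sum(vis) / n, sum(ir) / n
    c11 = sum((a - mv) ** 2 for a in vis) / n
    c22 = sum((b - mi) ** 2 for b in ir) / n
    c12 = sum((a - mv) * (b - mi) for a, b in zip(vis, ir)) / n
    # eigenvalues of [[c11, c12], [c12, c22]]: roots of det(C - lam*I) = 0
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    lam = (tr + (tr * tr - 4 * det) ** 0.5) / 2      # largest eigenvalue
    # eigenvector (x, y): (c11 - lam) * x + c12 * y = 0
    if abs(c12) > 1e-12:
        x, y = c12, lam - c11
    else:   # uncorrelated: weight the image with the larger variance
        x, y = (1.0, 0.0) if c11 >= c22 else (0.0, 1.0)
    x, y = abs(x), abs(y)
    return x / (x + y), y / (x + y)

def fuse(vis, ir):
    """Pixelwise weighted average F = w_vis * A + w_ir * B."""
    wv, wi = pca_fusion_weights(vis, ir)
    return [wv * a + wi * b for a, b in zip(vis, ir)]
```

Two perfectly correlated images split the weight evenly, while a constant (information-free) infrared channel receives weight zero, so the fused output reduces to the visible image.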
In addition, the current Gaussian filter on the one hand over-smooths the image and on the other hand easily loses gradual edges, creating a contradiction between noise suppression and edge-detail extraction; on noisy images such an algorithm performs poorly, so a single-scale edge detection operator has limitations. The wavelet transform can analyze the image at multiple scales: large scales filter out noise and identify edges, small scales locate the edges accurately, and the wavelet transform can detect local abrupt changes. An image edge is exactly where the gray-level change rate is greatest, so the present embodiment uses the above algorithm to extract edges from the image and optimizes the edge processing.
Step S300, image fusion: fuse the infrared and visible images with a fusion algorithm to obtain a grayscale fusion image.
A fast color fusion algorithm based on luminance-contrast transfer is used. Standard color fusion algorithms need to convert the image from RGB space to YUV space, which increases the runtime; here the HIS fusion algorithm is used, and a simple addition carried out directly in RGB space realizes the fusion of the infrared and visible images. The specific algorithm is:
Step S301, compute the luminance component I_visible of the visible color image, where I_visible = (R + G + B) / 3.
Step S302, add the luminance component I_visible directly to the infrared image IR to obtain the grayscale fusion image F, where F = Γ(I_visible, IR) and Γ is the HIS fusion algorithm.
Step S303, transfer the contrast and brightness of the gray reference image Ref to the grayscale fusion image F to obtain the adjusted grayscale fusion image F*, where F* = (σ_Ref / σ_F)(F - u_F) + u_Ref, (u_F, σ_F) are the mean and standard deviation of the grayscale fusion image F, and (u_Ref, σ_Ref) are the mean and standard deviation of the gray reference image Ref.
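Steps S301 to S303 can be sketched as below; the reference mean and standard deviation are illustrative values, and the direct addition of luminance and infrared in step S302 follows the text:

```python
def luminance(rgb):
    """Per-pixel luminance of a color image: I = (R + G + B) / 3."""
    return [(r + g + b) / 3 for r, g, b in rgb]

def mean_std(xs):
    m = sum(xs) / len(xs)
    return m, (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def match_stats(f, ref_mean, ref_std):
    """Transfer reference contrast and brightness:
    F* = (sigma_ref / sigma_F) * (F - u_F) + u_ref."""
    mf, sf = mean_std(f)
    if sf == 0:
        return [ref_mean] * len(f)
    return [ref_std / sf * (x - mf) + ref_mean for x in f]

# toy data: visible RGB pixels and an aligned infrared channel
rgb = [(30, 60, 90), (120, 150, 180), (10, 20, 30), (200, 220, 240)]
ir = [50, 40, 200, 10]
I = luminance(rgb)
F = [i + r for i, r in zip(I, ir)]                  # step S302: direct addition
Fstar = match_stats(F, ref_mean=128, ref_std=40)    # step S303
```

After the adjustment, the fused image has exactly the mean and standard deviation of the reference, which is what the contrast-and-brightness transfer is for.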
Step S400, color mapping: add the components of the visible color image to the grayscale fusion image to obtain the color fusion image.
The R, G and B components of the visible color image are each added to (F* - I_visible) to obtain the final color fusion image [R_C, G_C, B_C]^T.
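The mapping of step S400 can be sketched as a per-channel addition of (F* - I_visible); a useful property of this form is that the luminance of the mapped pixel becomes F* itself:

```python
def color_map(rgb, fstar, lum):
    """Add the fused-minus-luminance difference to each channel:
    [Rc, Gc, Bc] = [R, G, B] + (F* - I_visible), per pixel."""
    out = []
    for (r, g, b), f, i in zip(rgb, fstar, lum):
        d = f - i
        out.append((r + d, g + d, b + d))
    return out

# one pixel: visible color (10, 20, 30), fused gray 50, visible luminance 20
out = color_map([(10, 20, 30)], [50], [20])
```

The shift d = 30 is applied to each channel, giving (40, 50, 60); its mean (40 + 50 + 60) / 3 equals the fused gray value 50, so chrominance comes from the visible image and luminance from the fusion.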
In the present invention it should be added that, according to the level at which fusion takes place, multi-sensor data fusion can be performed at four different levels: the signal level, the pixel level, the feature level and the decision level.
(1) Signal-level fusion. In signal-based fusion, the goal is to combine the signals of different sensors to generate a new signal with a higher signal-to-noise ratio than the original signals.
(2) Pixel-level fusion. Pixel-based fusion is performed point by point and produces a fused image in which each pixel is generated from the pixel information of several source images; its effect is to make the fused image perform better in image processing tasks such as image segmentation.
(3) Feature-level fusion. Feature-based fusion extracts information from the various data sources; it requires extracting a meaningful feature set from the images. The features vary with the application field; common examples are pixel gray values, boundaries and texture. The extracted features are finally fused together.
(4) Decision-level fusion. Fusion at this level involves higher-level information, combining the results of multiple algorithms to produce the final fusion decision. The input images are processed separately, and the information obtained reinforces the reliability of the final decision.
The fusion levels are suited to different fields and occasions, depending on the requirements; as the comparison above shows, each level has its own characteristics and advantages as well as unavoidable shortcomings.
It is further noted that pixel-level multi-resolution image fusion methods mainly include pyramid decomposition, the wavelet transform, and multi-scale geometric analysis methods such as Curvelet, Contourlet and the non-subsampled Contourlet transform. Multi-resolution decomposition fusion methods can decompose the feature information in an image into different scale spaces and are the most important development direction of image fusion. Since the Gaussian pyramid is the basis for constructing the various more complex pyramids, and is built by repeatedly low-pass filtering and down-sampling the image, i.e. it is a multi-resolution, multi-scale, low-pass filtered decomposition, the present embodiment uses the Gaussian pyramid for multi-resolution image fusion.
Let the source image to be decomposed be G_0 and the image at decomposition layer l be G_l, with N decomposition layers; then
G_l(i, j) = sum_{m=-2..2} sum_{n=-2..2} w(m, n) G_{l-1}(2i + m, 2j + n),
where w(m, n) is the generating kernel, which must satisfy the following conditions:
Separability: w(m, n) = w(m) × w(n), -2 ≤ m ≤ 2, -2 ≤ n ≤ 2;
Symmetry: w(m) = w(-m);
Normalization: sum_{m=-2..2} sum_{n=-2..2} w(m, n) = 1;
Equal contribution of odd and even terms: w(2) + w(-2) + w(0) = w(1) + w(-1).
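The four kernel conditions can be checked on the classic 5-tap mask w = (1/16)[1, 4, 6, 4, 1] (an assumed but standard choice the patent does not fix), together with one REDUCE step of the pyramid:

```python
W1 = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]   # w(-2) .. w(2)

def w(m):
    return W1[m + 2]

# separability holds by construction: w(m, n) = w(m) * w(n)
symmetric = all(abs(w(m) - w(-m)) < 1e-12 for m in range(3))
normalized = abs(sum(w(m) * w(n)
                     for m in (-2, -1, 0, 1, 2)
                     for n in (-2, -1, 0, 1, 2)) - 1) < 1e-12
equal_contrib = abs((w(-2) + w(0) + w(2)) - (w(-1) + w(1))) < 1e-12

def reduce1d(g):
    """One pyramid level: low-pass filter with w, then downsample by 2
    (borders handled by clamping the index)."""
    n = len(g)
    return [sum(w(m) * g[min(max(i + m, 0), n - 1)] for m in range(-2, 3))
            for i in range(0, n, 2)]
```

A constant signal passes through unchanged (the kernel sums to 1) while the length halves at every level, which is exactly the repeated low-pass filtering and down-sampling described above.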
In addition, the initial matching pairs obtained after coarse matching generally contain two kinds of error: first, localization error of the feature points themselves; second, mismatched pairs. The first kind of error is unavoidable, but the feature points detected by the feature extraction algorithm have sub-pixel accuracy, so as long as there are enough correct matching pairs, its influence on the subsequent registration accuracy can be eliminated. The second kind of error objectively exists and seriously affects the accuracy of image registration, so measures must be taken to reject such pairs: leaving even a single pair of mismatched points is unacceptable.
Therefore, the invention also includes a correction based on the epipolar geometry constraint, as follows:
First, normalize the given images; select matching point pairs and compute the fundamental matrix; then, in the set of remaining matched pairs, find all pairs that satisfy the constraint, treat them as inliers, and record the number of inliers. Repeat the above steps several times, recording the inlier count each time, and determine the number of samplings; then search the initial matching-point set again for pairs that satisfy the same condition and treat them as the final inliers, i.e. the correct matching pairs; all others are regarded as mismatched pairs and rejected.
The normalization method is as follows: for the two given images, obtain the initial matching point sets m_i and m'_i (i = 1, 2, ..., n) by coarse matching; translate these point coordinates so that the coordinate origin of each image lies at the center of gravity of its points; then scale the coordinates of each point set so that the average distance of the points from the origin is 2^(1/2).
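The normalization just described (translate to the centroid, then scale the mean distance to 2^(1/2)) can be sketched as:

```python
def normalize_points(pts):
    """Translate the centroid to the origin, then scale so the mean
    distance of the points from the origin is sqrt(2)."""
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    centered = [(x - cx, y - cy) for x, y in pts]
    mean_d = sum((x * x + y * y) ** 0.5 for x, y in centered) / n
    if mean_d == 0:
        return centered          # all points coincide; nothing to scale
    s = 2 ** 0.5 / mean_d
    return [(s * x, s * y) for x, y in centered]

pts = [(0, 0), (4, 0), (0, 4), (4, 4)]
npts = normalize_points(pts)
```

This conditioning keeps the linear system for the fundamental matrix numerically well behaved regardless of the pixel coordinate range of the input images.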
The fundamental matrix above is estimated from f matching pairs. The judgment criterion is the Sampson distance d, with condition d < T, where T is a distance threshold, taken as 0.001 in this embodiment. The number of samplings is K, chosen so that the probability P that at least one sample of matching pairs consists entirely of inliers is sufficiently high; the probability ε that any given matching pair is an outlier is unknown and is updated continuously as the program runs.
Here K = log(1 - P) / log(1 - (1 - ε)^f).
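The sampling count can then be computed from P, ε and the sample size f; the closed form below is the usual choice and is an assumption insofar as the patent leaves the formula implicit:

```python
import math

def sampling_count(P, eps, f):
    """Iterations K such that, with probability P, at least one sample
    of f matching pairs is outlier-free, given outlier ratio eps."""
    good = (1 - eps) ** f        # probability one sample is all inliers
    if good >= 1:
        return 1
    return math.ceil(math.log(1 - P) / math.log(1 - good))
```

For a 7-pair fundamental-matrix sample with half the matches being outliers and 99% confidence, this gives the familiar figure of 588 iterations; with no outliers a single sample suffices.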
It is obvious to a person skilled in the art that the invention is not limited to the details of the above exemplary embodiments and that the present invention can be realized in other specific forms without departing from the spirit or essential attributes of the invention. Therefore, from whatever point of view, the embodiments should be considered illustrative and not restrictive. The scope of the present invention is defined by the appended claims rather than by the above description, and it is intended that all changes falling within the meaning and scope of equivalency of the claims are embraced within the present invention. Any reference signs in the claims shall not be construed as limiting the claims involved.

Claims (6)

1. a kind of high-performance Infrared-Visible fusion detection method, which is characterized in that include the following steps:
The pretreatment of S100, image, to infrared image into the equalization of column hisgram, and to the ash more than number of pixels in image Grading line broadening is spent, the gray level few to number of pixels is reduced, and is got a distinct image;
S200, image registration detect the edge of infrared image and visible images by Canny operators, build image ruler later It spends pyramid and extracts Fast characteristic points, and carry out the registration of image;
S300, image co-registration carry out infrared and visible light image co-registration by blending algorithm on the image, obtain gray scale fusion Image;
S400, color mapped, it will be seen that the component of honorable color image is added to obtain color fusion image with grayscale fusion image.
2. The high-performance infrared-visible fusion detection method according to claim 1, characterized in that in step S200 the image registration algorithm is specifically:
S201, extracting image edge information: detecting the edges of the infrared image and the visible-light image using the Canny operator;
S202, extracting scale-invariant feature points: constructing an image scale pyramid of three octave layers and three intra-octave layers, and performing FAST feature-point detection on the resulting six images;
S203, matching feature points: extracting a descriptor for each feature point and matching the feature points by the Hamming distance between their descriptors;
S204, computing the affine transformation matrix: from the matched feature points, determining a six-parameter affine transformation model by the LMedS algorithm: randomly selecting 3 point pairs among the matches, computing the affine transformation parameters, comparing deviations to retain the parameters with the smallest deviation, and geometrically transforming the infrared image with these parameters to obtain the registered infrared image;
S205, grayscale fusion: performing a weighted average over the corresponding pixels of the registered infrared image and the visible-light image, the weight of each image to be fused being determined by principal component analysis according to the principal components of the fusion image.
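Step S204 repeatedly fits a six-parameter affine model to 3 randomly chosen point pairs. The exact fit for one such triple can be sketched as follows; this is an illustrative helper (names ours), not the patented LMedS procedure itself, which would additionally score many random triples by their median residual:

```python
def affine_from_three_pairs(src, dst):
    """Six-parameter affine transform fitted exactly to 3 point pairs.

    Returns (a, b, tx, c, d, ty) with  x' = a*x + b*y + tx,
                                       y' = c*x + d*y + ty.
    """
    (x1, y1), (x2, y2), (x3, y3) = src

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[x1, y1, 1], [x2, y2, 1], [x3, y3, 1]]
    D = det3(A)  # zero iff the 3 source points are collinear
    params = []
    for k in (0, 1):  # k=0 fits the x' equation, k=1 fits y'
        rhs = [dst[i][k] for i in range(3)]
        for col in range(3):  # Cramer's rule: swap one column for rhs
            M = [row[:] for row in A]
            for i in range(3):
                M[i][col] = rhs[i]
            params.append(det3(M) / D)
    return tuple(params)
```

An LMedS loop would call this for many random triples and keep the parameters whose median squared residual over all matches is smallest.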
3. The high-performance infrared-visible fusion detection method according to claim 2, characterized in that FAST feature-point detection is based on the image gray values around a feature point: the pixel values around a candidate point are examined, and the candidate point is a corner feature point if the number N of surrounding pixels whose gray value differs sufficiently from that of the candidate point satisfies
N = Σ_(x ∈ circle(p)) |I(x) − I(p)| > ε_d,
where I(x) is the gray value of any point on the 16-pixel circle centered at the candidate feature point p, I(p) is the gray value of the candidate point, and ε_d is a threshold.
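The corner condition of claim 3 counts circle pixels that differ from the center by more than ε_d. A sketch over the standard 16-point Bresenham circle of radius 3 follows; note that the original FAST detector additionally requires the differing pixels to form a contiguous arc, which this count-based sketch (names ours) omits:

```python
def is_fast_corner(gray_at, p, threshold, n_required=9):
    """FAST-style corner test at pixel p = (row, col).

    gray_at(r, c) returns the image gray value; the 16 offsets trace the
    Bresenham circle of radius 3 around p. The point is declared a corner
    when at least n_required circle pixels differ from I(p) by more than
    the threshold (the claim's epsilon_d).
    """
    circle = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2),
              (3, 1), (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3),
              (-2, -2), (-3, -1)]
    r0, c0 = p
    center = gray_at(r0, c0)
    n = sum(1 for dr, dc in circle
            if abs(gray_at(r0 + dr, c0 + dc) - center) > threshold)
    return n >= n_required
```

A flat patch is rejected, while an isolated dark spot against a bright background passes the test.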
4. The high-performance infrared-visible fusion detection method according to claim 2, characterized in that the feature-point descriptor extraction algorithm is specifically: a region is selected around the feature point and n_d point pairs are picked within it; the brightness values of each point pair (p, q) are then compared: if I(p) > I(q), the corresponding position in the generated string is 1; if I(p) < I(q), the corresponding position is −1; otherwise it is 0; after all point pairs have been compared, a string of length n_d belonging to this feature point is generated, called the descriptor of the feature point.
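The descriptor of claim 4 is a ternary variant of BRIEF (values −1/0/1 instead of the usual 0/1). A sketch with hypothetical helper names, together with the Hamming distance used for matching in step S203:

```python
def ternary_descriptor(gray_at, center, pairs):
    """Ternary BRIEF-style descriptor per claim 4.

    For each sampled point pair (p, q) of offsets around `center`, emit
    1 if I(p) > I(q), -1 if I(p) < I(q), and 0 if they are equal.
    """
    r0, c0 = center
    desc = []
    for (pr, pc), (qr, qc) in pairs:
        ip = gray_at(r0 + pr, c0 + pc)
        iq = gray_at(r0 + qr, c0 + qc)
        desc.append(1 if ip > iq else -1 if ip < iq else 0)
    return desc


def hamming_distance(d1, d2):
    """Number of positions where two descriptors disagree (step S203)."""
    return sum(1 for a, b in zip(d1, d2) if a != b)
```

Matching then pairs each feature with the candidate whose descriptor has the smallest Hamming distance.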
5. The high-performance infrared-visible fusion detection method according to claim 1, characterized in that in step S300 the grayscale fusion algorithm is specifically:
S301, computing the luminance component I_visible of the visible-light color image,
where I_visible = (R + G + B)/3, R, G and B being the color components of the visible-light image;
S302, adding the luminance component I_visible directly to the infrared image IR to obtain the grayscale fusion image F,
where F = Γ(I_visible, IR), Γ being the HIS fusion algorithm;
S303, transferring the contrast and brightness of a grayscale reference image Ref to the grayscale fusion image F to obtain the adjusted grayscale fusion image F*,
where F* = (σ_Ref / σ_F)(F − u_F) + u_Ref, (u_F, σ_F) being the mean and standard deviation of the grayscale fusion image F, and (u_Ref, σ_Ref) the mean and standard deviation of the grayscale reference image Ref.
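Step S303 is a standard mean/variance matching. A sketch follows (function name ours; σ is taken as the standard deviation, so the formula F* = (σ_Ref/σ_F)(F − u_F) + u_Ref applies directly):

```python
def transfer_contrast_brightness(fused, u_ref, sigma_ref):
    """Transfer a reference image's brightness and contrast to the
    grayscale fusion image F, per claim 5 step S303:

        F* = (sigma_ref / sigma_F) * (F - u_F) + u_ref
    """
    flat = [p for row in fused for p in row]
    n = len(flat)
    u_f = sum(flat) / n                                    # mean of F
    sigma_f = (sum((p - u_f) ** 2 for p in flat) / n) ** 0.5  # std of F
    scale = sigma_ref / sigma_f if sigma_f else 0.0
    return [[scale * (p - u_f) + u_ref for p in row] for row in fused]
```

After the transfer the output has exactly the reference mean and standard deviation, whatever the statistics of F were.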
6. The high-performance infrared-visible fusion detection method according to claim 1, characterized by further comprising a correction by an epipolar-geometry constraint algorithm, specifically: the images are first normalized; matching point pairs are chosen and the fundamental matrix is computed; all point pairs in the remaining matching set that satisfy the condition are found, treated as inliers, and the inlier count is recorded; the above steps are repeated several times, recording the inlier count each time, to determine the number of sampling iterations; the initial matching set is then searched again for point pairs satisfying the same condition, which are taken as the final inliers, i.e. correct matches, while the remaining pairs are regarded as mismatches and rejected.
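The sampling number K used by the epipolar correction follows the standard RANSAC bound given in the description; a sketch, assuming f is the sample size per draw and ε the outlier ratio (which the algorithm re-estimates as it runs):

```python
import math

def ransac_iterations(p_success, eps_outlier, sample_size):
    """Number of RANSAC draws K guaranteeing, with probability p_success,
    at least one sample of `sample_size` matches that are all inliers:

        K = log(1 - P) / log(1 - (1 - eps)^f)

    As larger inlier sets are found, eps shrinks and K can be lowered.
    """
    w = (1.0 - eps_outlier) ** sample_size  # prob. a whole draw is inliers
    return math.ceil(math.log(1.0 - p_success) / math.log(1.0 - w))
```

For example, with a 50% outlier ratio and 99% confidence, 2-point samples need 17 draws while 7-point fundamental-matrix samples need 588.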
CN201711492960.4A 2017-12-30 2017-12-30 A kind of high-performance Infrared-Visible fusion detection method Pending CN108364272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711492960.4A CN108364272A (en) 2017-12-30 2017-12-30 A kind of high-performance Infrared-Visible fusion detection method


Publications (1)

Publication Number Publication Date
CN108364272A true CN108364272A (en) 2018-08-03

Family

ID=63010816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711492960.4A Pending CN108364272A (en) 2017-12-30 2017-12-30 A kind of high-performance Infrared-Visible fusion detection method

Country Status (1)

Country Link
CN (1) CN108364272A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567979A (en) * 2012-01-20 2012-07-11 南京航空航天大学 Vehicle-mounted infrared night vision system and multi-source images fusing method thereof
CN103337077A (en) * 2013-07-01 2013-10-02 武汉大学 Registration method for visible light and infrared images based on multi-scale segmentation and SIFT (Scale Invariant Feature Transform)


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BROOK_ICV: "《https://www.cnblogs.com/wangguchangqing/p/4414892.html》", 10 April 2015 *
LINSHANXIAN: "《https://blog.csdn.net/linshanxian/article/details/71085726》", 2 May 2017 *
刘志庭: "Infrared and Visible Image Registration Based on Visual Attention", China Master's Theses Full-text Database, Information Science and Technology Series *
李光鑫: "Research on Infrared and Visible Image Fusion Technology", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *
金木炎: "《https://blog.csdn.net/qq_18661939/article/details/52900524》", 23 October 2016 *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109192302A (en) * 2018-08-24 2019-01-11 杭州体光医学科技有限公司 A kind of face's multi-modality images acquisition processing device and method
CN109035193A (en) * 2018-08-29 2018-12-18 成都臻识科技发展有限公司 A kind of image processing method and imaging processing system based on binocular solid camera
CN109255773B (en) * 2018-09-13 2021-05-04 武汉大学 Different-resolution infrared and visible light image fusion method and system based on total variation
CN109255773A (en) * 2018-09-13 2019-01-22 武汉大学 Different resolution ratio based on full variation is infrared with visible light image fusion method and system
CN110082355A (en) * 2019-04-08 2019-08-02 安徽驭风风电设备有限公司 A kind of wind electricity blade detection system
CN110210541A (en) * 2019-05-23 2019-09-06 浙江大华技术股份有限公司 Image interfusion method and equipment, storage device
CN110210541B (en) * 2019-05-23 2021-09-03 浙江大华技术股份有限公司 Image fusion method and device, and storage device
CN110232655A (en) * 2019-06-13 2019-09-13 浙江工业大学 A kind of double light image splicings of the Infrared-Visible for coal yard and fusion method
CN110246130A (en) * 2019-06-21 2019-09-17 中国民航大学 Based on infrared and visible images data fusion airfield pavement crack detection method
CN110246130B (en) * 2019-06-21 2023-03-31 中国民航大学 Airport pavement crack detection method based on infrared and visible light image data fusion
CN110378355A (en) * 2019-06-30 2019-10-25 南京理工大学 Based on FPGA hardware blending image FAST feature point detecting method
CN110378355B (en) * 2019-06-30 2022-09-30 南京理工大学 FAST feature point detection method based on FPGA hardware fusion image
CN110458877A (en) * 2019-08-14 2019-11-15 湖南科华军融民科技研究院有限公司 The infrared air navigation aid merged with visible optical information based on bionical vision
CN111104917A (en) * 2019-12-24 2020-05-05 杭州魔点科技有限公司 Face-based living body detection method and device, electronic equipment and medium
CN113361554A (en) * 2020-03-06 2021-09-07 北京眼神智能科技有限公司 Biological feature recognition multi-modal fusion method and device, storage medium and equipment
CN111738969A (en) * 2020-06-19 2020-10-02 无锡英菲感知技术有限公司 Image fusion method and device and computer readable storage medium
CN111681198A (en) * 2020-08-11 2020-09-18 湖南大学 Morphological attribute filtering multimode fusion imaging method, system and medium
CN112001871A (en) * 2020-08-21 2020-11-27 沈阳天眼智云信息科技有限公司 Fusion method of infrared double-light image information
CN112268621A (en) * 2020-09-15 2021-01-26 武汉华中天纬测控有限公司 Disconnecting switch state and contact temperature monitoring device
CN112268621B (en) * 2020-09-15 2024-04-02 武汉华中天纬测控有限公司 Disconnecting switch state and contact temperature monitoring device
CN112102217B (en) * 2020-09-21 2023-05-02 四川轻化工大学 Method and system for quickly fusing visible light image and infrared image
CN112102217A (en) * 2020-09-21 2020-12-18 四川轻化工大学 Method and system for quickly fusing visible light image and infrared image
CN112233024B (en) * 2020-09-27 2023-11-03 昆明物理研究所 Medium-long wave double-band infrared image fusion method based on difference characteristic color mapping
CN112233024A (en) * 2020-09-27 2021-01-15 昆明物理研究所 Medium-long wave dual-waveband infrared image fusion method based on difference characteristic color mapping
CN112396572B (en) * 2020-11-18 2022-11-01 国网浙江省电力有限公司电力科学研究院 Composite insulator double-light fusion method based on feature enhancement and Gaussian pyramid
CN112396572A (en) * 2020-11-18 2021-02-23 国网浙江省电力有限公司电力科学研究院 Composite insulator double-light fusion method based on feature enhancement and Gaussian pyramid
CN112669357A (en) * 2020-12-21 2021-04-16 苏州微清医疗器械有限公司 Fundus image synthesis method and fundus imager
CN112669357B (en) * 2020-12-21 2022-04-19 苏州微清医疗器械有限公司 Fundus image synthesis method and fundus imager
CN112634186A (en) * 2020-12-25 2021-04-09 江西裕丰智能农业科技有限公司 Image analysis method of unmanned aerial vehicle
CN113610839A (en) * 2021-08-26 2021-11-05 北京中星天视科技有限公司 Infrared target significance detection method and device, electronic equipment and medium
CN116228618B (en) * 2023-05-04 2023-07-14 中科三清科技有限公司 Meteorological cloud image processing system and method based on image recognition
CN116228618A (en) * 2023-05-04 2023-06-06 中科三清科技有限公司 Meteorological cloud image processing system and method based on image recognition
CN117173601A (en) * 2023-11-03 2023-12-05 中铁建设集团有限公司 Photovoltaic power station array hot spot identification method and system
CN117173601B (en) * 2023-11-03 2024-03-01 中铁建设集团有限公司 Photovoltaic power station array hot spot identification method and system
CN117191816A (en) * 2023-11-08 2023-12-08 广东工业大学 Method and device for detecting surface defects of electronic component based on multispectral fusion
CN117191816B (en) * 2023-11-08 2024-02-20 广东工业大学 Method and device for detecting surface defects of electronic component based on multispectral fusion

Similar Documents

Publication Publication Date Title
CN108364272A (en) A kind of high-performance Infrared-Visible fusion detection method
Wen et al. Deep color guided coarse-to-fine convolutional network cascade for depth image super-resolution
Park et al. Look wider to match image patches with convolutional neural networks
CN107680054B (en) Multi-source image fusion method in haze environment
Xiao et al. Making of night vision: Object detection under low-illumination
Shi et al. High-accuracy stereo matching based on adaptive ground control points
CN104574347B Satellite in-orbit image geometric positioning accuracy evaluation method based on multi-source remote sensing data
Jiao et al. Local stereo matching with improved matching cost and disparity refinement
CN105869178B (en) A kind of complex target dynamic scene non-formaldehyde finishing method based on the convex optimization of Multiscale combination feature
CN111667506B (en) Motion estimation method based on ORB feature points
CN110766024B (en) Deep learning-based visual odometer feature point extraction method and visual odometer
CN109993052B (en) Scale-adaptive target tracking method and system under complex scene
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN109754440A (en) A kind of shadow region detection method based on full convolutional network and average drifting
CN109544694A (en) A kind of augmented reality system actual situation hybrid modeling method based on deep learning
CN111899295A (en) Monocular scene depth prediction method based on deep learning
CN110009670A (en) The heterologous method for registering images described based on FAST feature extraction and PIIFD feature
CN114782628A (en) Indoor real-time three-dimensional reconstruction method based on depth camera
Zhou et al. Position-aware relation learning for RGB-thermal salient object detection
KR101753360B1 (en) A feature matching method which is robust to the viewpoint change
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
JP2019185787A (en) Remote determination of containers in geographical region
Pan et al. An adaptive multifeature method for semiautomatic road extraction from high-resolution stereo mapping satellite images
Lecca et al. Comprehensive evaluation of image enhancement for unsupervised image description and matching
CN114241372A (en) Target identification method applied to sector-scan splicing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180803