CN109509164A - Multi-sensor image fusion method and system based on GDGF - Google Patents

Multi-sensor image fusion method and system based on GDGF

Info

Publication number
CN109509164A
CN109509164A (application number CN201811194834.5A)
Authority
CN
China
Prior art keywords
source image
image
GDGF
detail layer
base layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811194834.5A
Other languages
Chinese (zh)
Other versions
CN109509164B (en)
Inventor
张永新
王莉
马友忠
贾世杰
蒋琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luoyang Normal University
Original Assignee
Luoyang Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luoyang Normal University filed Critical Luoyang Normal University
Priority to CN201811194834.5A priority Critical patent/CN109509164B/en
Publication of CN109509164A publication Critical patent/CN109509164A/en
Application granted granted Critical
Publication of CN109509164B publication Critical patent/CN109509164B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Abstract

The invention belongs to the technical field of optical-sensor image processing and discloses a multi-sensor image fusion method and system based on gradient domain guided filtering (GDGF). Each source image is smoothed with a mean filter to remove its small-scale structures and is decomposed into a base layer and a detail layer. A Laplacian high-pass filter and a Gaussian low-pass filter are applied to the source images in turn to obtain their saliency maps. The weight map of each source image is obtained by comparing the pixel values of the source-image saliency maps. Using the source images as guidance images, the weight maps are decomposed with GDGF to obtain the base layer and detail layer of each weight map. The corresponding pixels of the source-image base layers and detail layers are fused under the optimized base-layer and detail-layer weight maps according to a certain fusion rule, and the fused base layer and detail layer are combined to obtain the fused image. The invention effectively preserves the detail information of the source images, greatly improves the subjective and objective quality of the fused image, and is robust with respect to image registration.

Description

Multi-sensor image fusion method and system based on GDGF
Technical field
The invention belongs to the technical field of optical-sensor image processing and relates to a multi-sensor image fusion method, and in particular to a multi-sensor image fusion method and system based on gradient domain guided filtering (Gradient Domain Guided Filtering, GDGF).
Background technique
With the rapid development of high-performance sensing equipment, the ways and types of images available to humans keep increasing. Because of the limited depth of field, an optical imaging system can only render in-focus targets sharply on the image plane, while out-of-focus targets are blurred, so it is impossible to image all targets in a scene sharply at once. At present, fields such as remote sensing, medical imaging and military reconnaissance require an understanding of the whole scene, yet a large number of images of different targets under the same scene must be analyzed, which wastes time, storage and energy. By reasonably organizing and using the observed information, multiple images can be merged according to certain criteria based on their complementary information in space or time, so as to obtain a consistent interpretation or description of the scene. The fused image reflects the scene targets more comprehensively and faithfully, which is of great significance for accurate analysis and understanding of the scene.
Multi-sensor image fusion detects and extracts the clear regions of each registered sensor image by means of activity measures, and then merges these regions according to a certain fusion rule into one scene image in which all targets are clear. Multi-sensor image fusion can accurately characterize scene target information, laying a foundation for follow-up work such as feature extraction, target recognition and target tracking, thereby effectively improving the utilization of image information, improving the reliability and stability of target detection and recognition, and extending the spatio-temporal coverage of the system.
Accurately characterizing focused-region properties and accurately locating and extracting the in-focus regions or pixels is the key to multi-sensor image fusion algorithms, and is one of the problems in multi-sensor image fusion that has not yet been well solved. Image fusion has now been studied for more than 30 years; with the continuous progress of low-cost, high-performance imaging technology, the development of signal processing and analysis theory, and the ever-increasing ways and types of image acquisition, image fusion technology has developed enormously. Researchers at home and abroad have proposed a large number of well-performing fusion algorithms for the problem of determining and extracting focused regions in multi-sensor image fusion. These algorithms mainly fall into two classes: spatial-domain fusion algorithms and transform-domain fusion algorithms. Spatial-domain fusion algorithms judge the focus characteristics of the source images according to pixel gray values or regional features, extract the focused pixels or regions using focused-region evaluation criteria, and merge the extracted pixels or regions according to a fusion rule to obtain the fused image. Such algorithms are simple, easy to implement and of low computational complexity, but they are susceptible to noise, and "blocking artifacts" easily appear, especially when pixels or regions lie at the boundary between focused and blurred areas. Transform-domain fusion algorithms transform the source images, judge the focus characteristics from the magnitudes of the transform coefficients at each decomposition scale, process the coefficients according to a fusion rule, and then inverse-transform the processed coefficients to obtain the fused image. Their main shortcomings are that the decomposition is complex and time-consuming, the high-frequency coefficients occupy a large amount of space, and information is easily lost during fusion. If one transform coefficient of the fused image is changed, the spatial gray values of the whole image change, so that unwanted artificial interference is introduced while enhancing some image regions. The more common multi-sensor image fusion algorithms include the following:
(1) Image fusion based on the Laplacian pyramid (Laplacian Pyramid, LAP). The source images are decomposed with the Laplacian pyramid, a suitable fusion rule is applied to fuse the high- and low-frequency coefficients, and the fused pyramid coefficients are inverse-transformed to obtain the fused image. The method has good time-frequency localization and achieves good results, but the data of the decomposition layers are redundant and the dependence between the decomposition layers cannot be determined. Its ability to extract detail information is poor, high-frequency information is lost severely during decomposition, and the quality of the fused image is directly affected.
(2) Image fusion based on the wavelet transform (Discrete Wavelet Transform, DWT). The source images are decomposed with the wavelet transform, a suitable fusion rule is applied to fuse the high- and low-frequency coefficients, and the fused wavelet coefficients are inverse-transformed to obtain the fused image. The method has good time-frequency localization and achieves good results, but the two-dimensional wavelet basis is built from one-dimensional wavelet bases by tensor products: it is optimal for representing point singularities in an image, but it cannot sparsely represent line and surface singularities. In addition, the DWT is a down-sampled transform and lacks shift invariance, so information is easily lost during fusion and the fused image becomes distorted.
(3) Image fusion based on the non-subsampled contourlet transform (Non-subsampled Contourlet Transform, NSCT). The source images are decomposed with the NSCT, a suitable fusion rule is applied to fuse the high- and low-frequency coefficients, and the fused coefficients are inverse-transformed with the NSCT to obtain the fused image. The method achieves a good fusion effect, but it runs slowly and the decomposition coefficients occupy a large amount of storage space.
(4) Image fusion based on cartoon-texture decomposition (Cartoon-Texture Decomposition, CTD). Each source image is decomposed into a cartoon component and a texture component, the cartoon components and texture components of the source images are fused separately, and the fused cartoon and texture components are combined to obtain the fused image. Since the fusion rule is designed from the focus characteristics of the cartoon and texture components rather than directly from the focus characteristics of the source images, the method is robust to noise and scratches.
(5) Image fusion based on multi-scale weighted gradients (Multi-scale Weighted Gradient-based Fusion, MWGF). The gradients of the source images are computed first, and the focused regions of the source images are detected with a structure-based large-scale focus detection method; then, by merging the multi-scale information of the source images, the gradient weights at focused-region boundaries are determined with a small-scale focus detection method, the fusion weight of each source image is derived accordingly, and the focused regions are fused by weighting. The method can identify focused regions accurately, thereby avoiding the anisotropic blurring or mis-registration artifacts that arise when fusing at focused-region boundaries, and it obtains a good fusion effect with simple computation and high fusion quality; however, because large-scale and small-scale focus detection are carried out separately, the time and space performance of the algorithm is poor.
(6) Image fusion based on multiple visual features (Multiple Visual Features Measurement-based Fusion, MVFMF). Each source image is first decomposed into a base layer and a detail layer with a mean filter; the saliency of the source images is then measured with three visual features; the saliency maps are decomposed with gradient domain filtering into corresponding base layers and detail layers; the base layers and detail layers of the source images are fused according to a fusion rule in combination with the base layers and detail layers of the saliency maps; and finally the fused base layer and detail layer are combined to obtain the final fused image. The method effectively improves the quality of the fused image, but because the saliency of the source images must be computed separately for three visual features, it is time-consuming and its time and space performance is poor.
(7) Image fusion based on guided filtering (Guided Filter Fusion, GFF). Each image is decomposed with a guided filter into a base layer containing large-scale intensity variations and a detail layer containing small-scale details; fusion weight maps are then constructed from the saliency and spatial consistency of the base and detail layers; on this basis the base layers and detail layers of the source images are fused separately; and finally the fused base layer and detail layer are combined to obtain the final fused image. The method achieves a good fusion effect but lacks robustness to noise.
The above seven methods are the more common multi-sensor image fusion methods, but each has drawbacks. The wavelet transform (DWT) cannot fully exploit the geometric characteristics of the image data and cannot represent the image optimally or most "sparsely", so the fused image is prone to offset and information loss. The non-subsampled contourlet transform (NSCT) has a complex decomposition process, runs slowly, and its decomposition coefficients occupy a large amount of storage space. Cartoon-texture decomposition (CTD), multi-scale weighted gradients (MWGF), multiple visual features (MVFMF) and guided filtering (GFF) are all methods proposed in recent years and all achieve good fusion effects. Among them, guided filtering (GFF) performs edge-preserving and shift-invariant operations based on a local linear model and is computationally efficient, while MWGF and MVFMF significantly improve the accuracy of focused-region judgment and the quality of the fused image, but their multi-angle saliency and visual-feature computations consume a great deal of time and space, which limits their popularization and application. The first four common fusion methods all have their own shortcomings and find it hard to balance speed and fusion quality, which restricts their application and popularization. The seventh method, guided filtering (GFF), is currently the fusion algorithm with the best fusion performance, but it does not filter the source images directly, so part of the source image information is easily lost, and its average-weight fusion strategy also affects the fusion performance to a certain extent.
In conclusion problem of the existing technology is:
(1) Traditional spatial-domain methods mainly rely on region partitioning. If the partitioned regions are too large, out-of-focus areas fall into the same region as in-focus areas and the quality of the fused image declines; if the partitioned regions are too small, the sub-region features cannot adequately reflect the regional characteristics, the judgment of focused-region pixels becomes inaccurate and false decisions occur, so the consistency between adjacent regions is poor, obvious detail differences appear at their junctions, and "blocking artifacts" are produced. (2) In traditional multi-sensor fusion methods based on multi-scale decomposition, the whole multi-sensor image is always treated as a single entity, detail information is extracted incompletely, and detail information such as the edges and textures of the source images cannot be well represented in the fused image, which affects the completeness with which the fused image describes the latent information of the source images and in turn degrades the quality of the fused image.
Summary of the invention
In view of the problems existing in the prior art, the present invention provides a multi-sensor image fusion method and system based on GDGF that can effectively eliminate "blocking artifacts", extend the depth of field of optical sensors, and markedly improve the subjective and objective quality of the fused image. It overcomes many problems in multi-sensor image fusion, such as inaccurate determination of focused regions, failure to effectively extract edge and texture information from the source images, incomplete characterization of detail features in the fused image, partial loss of detail, "blocking artifacts" and reduced contrast.
The invention is realized in this way: the source images are first decomposed into base layers and detail layers with a mean filter; saliency detection is then performed on the source images with Laplacian filtering and Gaussian low-pass filtering, and the weight map of each source image is obtained by comparing saliency values; the weight maps are decomposed and optimized with GDGF using the source images as guidance images, yielding the base layer and detail layer of each weight map; then, in combination with the base layers and detail layers of the source-image weight maps, the corresponding pixels of the source-image base layers and detail layers are fused according to a certain fusion rule; finally, the fused base layer and detail layer are combined to obtain the fused image.
Further, the multi-sensor image fusion method based on GDGF is characterized in that it fuses the registered sensor images {I1, I2, ..., In}, where In ∈ R^(M×N), R^(M×N) is the space of real matrices of size M×N, and M and N are positive integers. It specifically includes:
(1) Smooth each of the multi-sensor images {I1, I2, ..., In} with a mean filter AF to remove the small-scale structures in the source images {I1, I2, ..., In}, obtaining the source-image base layers {B1, B2, ..., Bn} and detail layers {D1, D2, ..., Dn}. Wherein: Bn = AF(In), Dn = In - Bn; color images are processed per channel in the RGB color space, c ∈ {R_band, G_band, B_band}.
(2) Filter the source images successively with a Laplacian high-pass filter L and a Gaussian low-pass filter G to obtain the source-image saliency maps Sn. Wherein: Sn = In * L * G; color images are processed per channel in the RGB color space, c ∈ {R_band, G_band, B_band}.
(3) Construct the weight matrices {P1, P2, ..., Pn} of the source images according to the sizes of the corresponding pixels in the saliency maps {S1, S2, ..., Sn}. Wherein: Pn(i, j) = 1 if Sn(i, j) = max{S1(i, j), S2(i, j), ..., Sn(i, j)}, and Pn(i, j) = 0 otherwise;
Sn(i, j) is pixel (i, j) of the saliency map of source image In;
Pn(i, j) is element (i, j) of the weight matrix of source image In;
i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N;
S(i, j) is the element in row i and column j of the saliency map S; P(i, j) is the element in row i and column j of the weight map P.
(4) Using the source images {I1, I2, ..., In} as guidance images, decompose the weight matrices {P1, P2, ..., Pn} with GDGF to obtain the weight matrices {W1^B, W2^B, ..., Wn^B} and {W1^D, W2^D, ..., Wn^D}. Wherein: (Wn^B, Wn^D) = GDGF(Pn, In).
(5) Based on the source-image base layers Bn and detail layers Dn, construct the fused base layer FB and the fused detail layer FD according to the weight matrices {W1^B, W2^B, ..., Wn^B} and {W1^D, W2^D, ..., Wn^D}, obtaining the fused base layer FB and detail layer FD. Wherein: FB = Σn Wn^B · Bn and FD = Σn Wn^D · Dn.
(6) Construct the fused image F to obtain the fused gray-scale image, in which: F = FB + FD.
Another object of the present invention is to provide a multi-sensor image fusion system based on GDGF.
Another object of the present invention is to provide a smart-city sensor image fusion system using the above multi-sensor image fusion method based on GDGF.
Another object of the present invention is to provide a medical-imaging sensor image fusion system using the above multi-sensor image fusion method based on GDGF.
Another object of the present invention is to provide a security-monitoring sensor image fusion system using the above multi-sensor image fusion method based on GDGF.
The advantages and positive effects of the present invention are as follows:
(1) The invention first decomposes the source images into base layers and detail layers with a mean filter, then performs saliency detection on the source images with Laplacian high-pass filtering and Gaussian low-pass filtering, and obtains the weight map of each source image by comparing the pixel values of the source-image saliency maps; the weight maps are decomposed with GDGF using the source images as guidance images to obtain the base layers and detail layers of the weight maps; the base layers and detail layers of the source images are then fused under the guidance of the base layers and detail layers of the weight maps, and finally the fused base layer and detail layer are combined to obtain the fused image of the source images. Determining the focus characteristics through saliency detection improves the accuracy with which the focused regions of the source images are identified, which benefits the extraction of clear-region targets, while guiding the fusion of the source-image base layers and detail layers with the base layers and detail layers of the weight maps effectively improves the subjective and objective quality of the fused image.
(2) The fusion framework of the invention is flexible, easy to implement and has good real-time performance.
(3) The fusion algorithm guides the fusion of the source-image base layers and detail layers with the base layers and detail layers of the weight maps, improving the spatial consistency between the source images and the fused image.
The image fusion framework of the invention is flexible, has good real-time performance, can accurately extract the target details of the focused regions, represents image detail features accurately, and effectively improves the subjective and objective quality of the fused image.
Brief description of the drawings
Fig. 1 is a flow chart of the multi-sensor image fusion method based on GDGF provided by an embodiment of the present invention.
Fig. 2 shows the source images 'Balloon' to be fused provided by embodiment 1 of the present invention.
Fig. 3 shows the fusion results of eight image fusion methods, namely Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), cartoon-texture decomposition (CTD), multi-scale weighted gradients (MWGF), multiple visual features (MVFMF), guided filtering (GFF) and the present invention (Proposed), for the multi-sensor images 'Balloon' of Fig. 2(a) and (b), together with the difference images between the fused images and the source images.
Fig. 4 shows the color source images 'Book' to be fused provided by embodiment 2 of the present invention.
Fig. 5 shows the fusion results of the eight fusion methods, namely LAP, DWT, NSCT, CTD, MWGF, MVFMF, GFF and the present invention (Proposed), for the multi-sensor images 'Book' of Fig. 4(a) and (b).
Fig. 6 shows the color source images 'Island' to be fused provided by embodiment 3 of the present invention.
Fig. 7 shows the fusion results of the eight fusion methods, namely LAP, DWT, NSCT, CTD, MWGF, MVFMF, GFF and the present invention (Proposed), for the multi-sensor images 'Island' of Fig. 6(a) and (b).
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described below in conjunction with embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it.
In the prior art, fusion algorithms in the field of multi-sensor image fusion determine the focused regions of the source images inaccurately and extract detail information incompletely, so that detail information such as the edges and textures of the source images cannot be well represented in the fused image and the fusion effect is poor.
The application principle of the invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the multi-sensor image fusion method based on GDGF provided by an embodiment of the present invention comprises:
S101: first decompose the source images into base layers and detail layers with a mean filter.
S102: then perform saliency detection on the source images with Laplacian high-pass filtering and Gaussian low-pass filtering, and construct the focus-characteristic weight map of each source image by comparing saliency values.
S103: decompose the weight maps with GDGF using the source images as guidance images to obtain the base layers and detail layers of the focus-characteristic weight maps, and use the base layers and detail layers of the focus-characteristic weight maps to guide the fusion of the source-image base layers and detail layers.
S104: finally combine the fused base layer and detail layer to obtain the fused image.
The invention is further described below with reference to the detailed process.
The detailed process of the multi-sensor image fusion method based on GDGF provided by an embodiment of the present invention comprises:
(1) Smooth each of the multi-sensor images {I1, I2, ..., In} with a mean filter AF to remove the small-scale structures in the source images {I1, I2, ..., In}, obtaining the source-image base layers {B1, B2, ..., Bn} and detail layers {D1, D2, ..., Dn}. Wherein: Bn = AF(In), Dn = In - Bn; color images are processed per channel in the RGB color space, c ∈ {R_band, G_band, B_band}.
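Purely as an illustration, the two-scale decomposition of step (1) may be sketched in Python with OpenCV as below; the 31x31 averaging window is an assumed parameter, not a value prescribed by the patent.

```python
import cv2
import numpy as np

def two_scale_decompose(src, ksize=31):
    """Two-scale decomposition: base layer B = AF(I) via mean filtering,
    detail layer D = I - B."""
    img = src.astype(np.float64)
    base = cv2.blur(img, (ksize, ksize))   # mean filter AF
    detail = img - base                    # detail layer D = I - B
    return base, detail

# Example (hypothetical file name); the same call works per channel for
# color images, since cv2.blur filters each channel independently.
# base_a, detail_a = two_scale_decompose(cv2.imread('balloon_a.png'))
```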
(2) Filter the source images successively with a Laplacian high-pass filter L and a Gaussian low-pass filter G to obtain the source-image saliency maps Sn. Wherein: Sn = In * L * G; color images are processed per channel in the RGB color space, c ∈ {R_band, G_band, B_band}.
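A hedged sketch of the saliency computation of step (2) follows; taking the absolute value of the Laplacian response before Gaussian smoothing, as well as the kernel size and standard deviation used below, are assumptions rather than values fixed by the patent.

```python
import cv2
import numpy as np

def saliency_map(src, gauss_size=11, gauss_sigma=5):
    """Saliency map Sn: Laplacian high-pass response followed by
    Gaussian low-pass smoothing of its magnitude."""
    img = src.astype(np.float64)
    if img.ndim == 3:                      # reduce color images to a single plane
        img = img.mean(axis=2)
    lap = cv2.Laplacian(img, cv2.CV_64F)   # high-pass filtering I * L
    sal = cv2.GaussianBlur(np.abs(lap), (gauss_size, gauss_size), gauss_sigma)
    return sal
```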
(3) Construct the weight matrices {P1, P2, ..., Pn} of the source images according to the sizes of the corresponding pixels in the saliency maps {S1, S2, ..., Sn}. Wherein: Pn(i, j) = 1 if Sn(i, j) = max{S1(i, j), S2(i, j), ..., Sn(i, j)}, and Pn(i, j) = 0 otherwise;
Sn(i, j) is pixel (i, j) of the saliency map of source image In;
Pn(i, j) is element (i, j) of the weight matrix of source image In;
i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N;
S(i, j) is the element in row i and column j of the saliency map S; P(i, j) is the element in row i and column j of the weight map P.
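The per-pixel maximum rule of step (3) can be sketched as below: a pixel of the weight matrix Pn is set to 1 exactly where source image In attains the largest saliency value among all source images.

```python
import numpy as np

def build_weight_maps(saliency_maps):
    """Binary weight matrices: Pn(i, j) = 1 iff Sn(i, j) is the maximum of
    S1(i, j), ..., SN(i, j) over all source images, otherwise 0."""
    stack = np.stack(saliency_maps, axis=0)      # shape (N, rows, cols)
    winner = np.argmax(stack, axis=0)            # index of the most salient image per pixel
    return [(winner == n).astype(np.float64) for n in range(stack.shape[0])]
```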
(4) Using the source images {I1, I2, ..., In} as guidance images, decompose the weight matrices {P1, P2, ..., Pn} with GDGF to obtain the weight matrices {W1^B, W2^B, ..., Wn^B} and {W1^D, W2^D, ..., Wn^D}. Wherein: (Wn^B, Wn^D) = GDGF(Pn, In).
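Step (4) smooths each binary weight map under the guidance of its source image. The sketch below uses the classic box-filter guided filter as a simplified stand-in for GDGF; the gradient domain variant additionally weights the regularization term with an edge-aware factor, which is omitted here for brevity. The two parameter pairs (r1, eps1) for the base-layer weight map and (r2, eps2) for the detail-layer weight map are assumptions modeled on common guided-filter fusion practice, not values taken from the patent.

```python
import cv2
import numpy as np

def guided_filter(guide, src, radius, eps):
    """Classic guided filter (local linear model), used here as a simplified
    stand-in for the gradient domain guided filter (GDGF)."""
    I = guide.astype(np.float64)
    p = src.astype(np.float64)
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, cv2.CV_64F, ksize)
    mean_I, mean_p = mean(I), mean(p)
    var_I = mean(I * I) - mean_I * mean_I
    cov_Ip = mean(I * p) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)     # GDGF would replace eps by an edge-aware eps/Gamma
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)

def refine_weights(weight, guide, r1=45, eps1=0.3, r2=7, eps2=1e-6):
    """(Wn_B, Wn_D) = GDGF(Pn, In): large-radius filtering gives the base-layer
    weight map, small-radius filtering gives the detail-layer weight map."""
    if guide.ndim == 3:
        guide = guide.mean(axis=2)     # use the luminance of a color guide
    guide = guide / 255.0
    w_base = guided_filter(guide, weight, r1, eps1)
    w_detail = guided_filter(guide, weight, r2, eps2)
    return w_base, w_detail
```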
(5) Based on the source-image base layers Bn and detail layers Dn, construct the fused base layer FB and the fused detail layer FD according to the weight matrices {W1^B, W2^B, ..., Wn^B} and {W1^D, W2^D, ..., Wn^D}, obtaining the fused base layer FB and detail layer FD. Wherein: FB = Σn Wn^B · Bn and FD = Σn Wn^D · Dn.
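A sketch of the weighted fusion of step (5) follows; the per-pixel normalization of the refined weight maps is an added safeguard, since the filtered binary maps need not sum exactly to one, and is not spelled out in the patent text.

```python
import numpy as np

def fuse_layers(layers, weights, eps=1e-12):
    """FB = sum_n Wn_B * Bn (and analogously FD = sum_n Wn_D * Dn),
    with the refined weight maps normalized at every pixel."""
    w = np.stack(weights, axis=0)
    w = w / (w.sum(axis=0, keepdims=True) + eps)   # per-pixel normalization
    fused = np.zeros_like(np.asarray(layers[0], dtype=np.float64))
    for layer, wn in zip(layers, w):
        if np.ndim(layer) == 3:                    # broadcast weights over color channels
            wn = wn[..., None]
        fused += wn * np.asarray(layer, dtype=np.float64)
    return fused
```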
(6) Construct the fused image F to obtain the fused gray-scale image, in which: F = FB + FD.
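Putting the six steps together, an end-to-end sketch built on the helper functions assumed above (two_scale_decompose, saliency_map, build_weight_maps, refine_weights and fuse_layers) might read as follows.

```python
import numpy as np

def gdgf_fuse(sources):
    """Fuse a list of registered source images: F = FB + FD."""
    bases, details, saliencies = [], [], []
    for img in sources:
        b, d = two_scale_decompose(img)               # step (1)
        bases.append(b)
        details.append(d)
        saliencies.append(saliency_map(img))          # step (2)
    weights = build_weight_maps(saliencies)           # step (3)
    refined = [refine_weights(w, img)                 # step (4)
               for w, img in zip(weights, sources)]
    w_base = [wb for wb, _ in refined]
    w_detail = [wd for _, wd in refined]
    fb = fuse_layers(bases, w_base)                   # step (5): FB
    fd = fuse_layers(details, w_detail)               #           FD
    return np.clip(fb + fd, 0, 255).astype(np.uint8)  # step (6): F = FB + FD
```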
The invention is further described below with reference to specific embodiments.
Fig. 2 shows the source images 'Balloon' to be fused provided by embodiment 1 of the present invention.
Fig. 3 shows the fusion results of embodiment 1 of the present invention for the source images 'Balloon' and the difference images between the fused images and the source images.
Embodiment 1
Following the solution of the present invention, this embodiment performs fusion on the two source images shown in Fig. 2(a) and (b); the result is shown as 'Proposed' in Fig. 3. At the same time, the seven image fusion methods Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), cartoon-texture decomposition (CTD), multi-scale weighted gradients (MWGF), multiple visual features (MVFMF) and guided filtering (GFF) are applied to the two source images shown in Fig. 2(a) and (b), the fused images of the different fusion methods are evaluated, and the results shown in Table 1 are obtained.
Table 1. Objective quality evaluation of the fused images for the multi-sensor image 'Balloon'
Embodiment 2:
Following the solution of the present invention, this embodiment performs fusion on the two source images shown in Fig. 4(a) and (b); the result is shown as 'Proposed' in Fig. 5.
At the same time, the eight image fusion methods Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), cartoon-texture decomposition (CTD), multi-scale weighted gradients (MWGF), multiple visual features (MVFMF), guided filtering (GFF) and the present invention are applied to the two source images shown in Fig. 4(a) and (b), the fused images of the different fusion methods in Fig. 5 are evaluated, and the results shown in Table 2 are obtained.
Table 2. Objective quality evaluation of the fused images for the multi-sensor image 'Book'
Embodiment 3:
The eight image fusion methods Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), cartoon-texture decomposition (CTD), multi-scale weighted gradients (MWGF), multiple visual features (MVFMF), guided filtering (GFF) and the present invention are applied to the two source images shown in Fig. 6(a) and (b), the fused images of the different fusion methods in Fig. 7 are evaluated, and the results shown in Table 3 are obtained.
Table 3. Objective quality evaluation of the fused images for the multi-sensor image 'Island'
In Table 1, Table 2 and Table 3: Method denotes the fusion method; the eight fusion methods are Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), cartoon-texture decomposition (CTD), multi-scale weighted gradients (MWGF), multiple visual features (MVFMF), guided filtering (GFF) and the present invention; Running Time denotes the running time in seconds; MI denotes mutual information, an objective quality index for the fused image based on mutual information; QY denotes the total amount of structural information transferred from the source images; QAB/F denotes the total amount of edge information transferred from the source images.
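For illustration only, the mutual-information index MI used in Tables 1 to 3 can be estimated as sketched below, summing the mutual information between the fused image and each source image computed from a joint gray-level histogram; the 256-bin histogram is an assumed implementation detail.

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """Mutual information between two gray-level images, estimated from
    their joint gray-level histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                       # ignore empty histogram cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def fusion_mi(fused, sources):
    """MI index: sum of the mutual information between the fused image
    and each source image."""
    return sum(mutual_information(fused, s) for s in sources)
```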
It can be seen from Fig. 3, Fig. 5 and Fig. 7 that the fused images of the frequency-domain methods, including the Laplacian pyramid (LAP), the wavelet transform (DWT) and the non-subsampled contourlet transform (NSCT), all suffer from artifacts, blurring and poor contrast; the fusion quality of cartoon-texture decomposition (CTD), multi-scale weighted gradients (MWGF), multiple visual features (MVFMF) and guided filtering (GFF) is relatively good, but small blurred portions still exist. The subjective visual quality of the fused images obtained by the method of the present invention for the multi-sensor images 'Balloon' (Fig. 3), the multi-sensor color images 'Book' (Fig. 5) and the multi-sensor images 'Island' (Fig. 7) is clearly better than that of the other fusion methods.
It can also be seen from the fused images that the ability of the method of the present invention to extract the edges and textures of the objects in the focused regions of the source images is clearly better than that of the other methods; the target information of the focused regions of the source images is transferred well into the fused image, and detail information such as edges and textures in the source images is preserved. The target detail information of the focused regions is captured effectively and the image fusion quality is improved. The method of the present invention therefore has good subjective performance.
It can be seen from Table 1, Table 2 and Table 3 that the objective image-quality index MI of the fused images of the method of the present invention is on average 1.1 higher than the corresponding index of the fused images of the other methods, the objective quality index QY of the fused image is 0.05 higher, and QAB/F is 0.03 higher than the corresponding indexes of the fused images of the other methods. This shows that the fused images obtained by this method have good objective quality.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (7)

1. A multi-sensor image fusion method based on GDGF, characterized in that the multi-sensor image fusion method based on GDGF comprises the following steps:
(1) decomposing the source images with a mean filter (Average Filtering, AF) to obtain the base layer and detail layer of each source image;
(2) filtering the source images successively with a Laplacian high-pass filter (Laplacian Filter, LF) and a Gaussian low-pass filter (Gaussian Low-Pass Filter, GLF) to obtain the saliency map of each source image;
(3) constructing the weight map of each source image according to the sizes of the corresponding pixels of the saliency maps, and decomposing the weight maps with GDGF using the source images as guidance images, to obtain the base layer and detail layer of each weight map;
(4) fusing the corresponding pixels of the source-image base layers and detail layers respectively according to a certain fusion rule, based on the base layers and detail layers of the weight maps, and combining the fused base layer and detail layer to obtain the fused image.
2. The multi-sensor image fusion method based on GDGF according to claim 1, characterized in that the multi-sensor image fusion method based on GDGF fuses the registered sensor images {I1, I2, ..., In}, where In ∈ R^(M×N), R^(M×N) is the space of real matrices of size M×N, and M and N are positive integers; it specifically includes:
(1) smoothing each of the multi-sensor images {I1, I2, ..., In} with a mean filter AF to remove the small-scale structures in the source images {I1, I2, ..., In}, obtaining the source-image base layers {B1, B2, ..., Bn} and detail layers {D1, D2, ..., Dn}; wherein Bn = AF(In), Dn = In - Bn, and color images are processed per channel in the RGB color space;
(2) filtering the source images with a Laplacian high-pass filter L and a Gaussian low-pass filter G to obtain the source-image saliency maps Sn; wherein Sn = In * L * G, and color images are processed per channel in the RGB color space;
(3) constructing the weight matrices {P1, P2, ..., Pn} of the source images according to the sizes of the corresponding pixels in the saliency maps {S1, S2, ..., Sn}; wherein Pn(i, j) = 1 if Sn(i, j) = max{S1(i, j), S2(i, j), ..., Sn(i, j)}, and Pn(i, j) = 0 otherwise;
Sn(i, j) is pixel (i, j) of the saliency map of source image In;
Pn(i, j) is element (i, j) of the weight matrix of source image In;
i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N;
S(i, j) is the element in row i and column j of the saliency map S; P(i, j) is the element in row i and column j of the weight map P;
(4) using the source images {I1, I2, ..., In} as guidance images, decomposing the weight matrices {P1, P2, ..., Pn} with GDGF to obtain the weight matrices {W1^B, W2^B, ..., Wn^B} and {W1^D, W2^D, ..., Wn^D}; wherein (Wn^B, Wn^D) = GDGF(Pn, In);
(5) based on the source-image base layers Bn and detail layers Dn, constructing the fused base layer FB and the fused detail layer FD according to the weight matrices {W1^B, W2^B, ..., Wn^B} and {W1^D, W2^D, ..., Wn^D}, obtaining the fused base layer FB and detail layer FD; wherein FB = Σn Wn^B · Bn and FD = Σn Wn^D · Dn;
(6) constructing the fused image F to obtain the fused gray-scale image; wherein F = FB + FD.
3. The multi-sensor image fusion method based on GDGF according to claim 2, characterized in that the weight matrices constructed in step (4) are subjected to an optimizing decomposition, and the base layers and detail layers of the source images are fused with the processed weight matrices so as to construct the fused image.
4. A multi-sensor image fusion system based on GDGF implementing the multi-sensor image fusion method based on GDGF according to claim 1.
5. A smart-city sensor image fusion system using the multi-sensor image fusion method based on GDGF according to claim 1.
6. A medical-imaging sensor image fusion system using the multi-sensor image fusion method based on GDGF according to claim 1.
7. A security-monitoring sensor image fusion system using the multi-sensor image fusion method based on GDGF according to claim 1.
CN201811194834.5A 2018-09-28 2018-09-28 Multi-sensor image fusion method and system based on GDGF Active CN109509164B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811194834.5A CN109509164B (en) 2018-09-28 2018-09-28 Multi-sensor image fusion method and system based on GDGF

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811194834.5A CN109509164B (en) 2018-09-28 2018-09-28 Multi-sensor image fusion method and system based on GDGF

Publications (2)

Publication Number Publication Date
CN109509164A true CN109509164A (en) 2019-03-22
CN109509164B CN109509164B (en) 2023-03-28

Family

ID=65746510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811194834.5A Active CN109509164B (en) 2018-09-28 2018-09-28 Multi-sensor image fusion method and system based on GDGF

Country Status (1)

Country Link
CN (1) CN109509164B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648302A (en) * 2019-10-08 2020-01-03 太原科技大学 Light field full-focus image fusion method based on edge enhancement guide filtering
CN111192229A (en) * 2020-01-02 2020-05-22 中国航空工业集团公司西安航空计算技术研究所 Airborne multi-mode video image enhancement display method and system
CN111223069A (en) * 2020-01-14 2020-06-02 天津工业大学 Image fusion method and system
CN111598819A (en) * 2020-05-14 2020-08-28 易思维(杭州)科技有限公司 Self-adaptive image preprocessing method and application thereof
CN112184646A (en) * 2020-09-22 2021-01-05 西北工业大学 Image fusion method based on gradient domain oriented filtering and improved PCNN
CN112837253A (en) * 2021-02-05 2021-05-25 中国人民解放军火箭军工程大学 Night infrared medium-long wave image fusion method and system
CN112884690A (en) * 2021-02-26 2021-06-01 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN113421452A (en) * 2021-06-03 2021-09-21 上海大学 Open parking lot recommendation system based on visual analysis
CN114757912A (en) * 2022-04-15 2022-07-15 电子科技大学 Material damage detection method, system, terminal and medium based on image fusion
CN115115554A (en) * 2022-08-30 2022-09-27 腾讯科技(深圳)有限公司 Image processing method and device based on enhanced image and computer equipment


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015192368A1 (en) * 2014-06-20 2015-12-23 深圳市大疆创新科技有限公司 Hdri generating method and apparatus
CN107194904A (en) * 2017-05-09 2017-09-22 西北工业大学 NSCT area image fusion methods based on supplement mechanism and PCNN
CN107909560A (en) * 2017-09-22 2018-04-13 洛阳师范学院 A kind of multi-focus image fusing method and system based on SiR
CN108230282A (en) * 2017-11-24 2018-06-29 洛阳师范学院 A kind of multi-focus image fusing method and system based on AGF
CN107977950A (en) * 2017-12-06 2018-05-01 上海交通大学 Based on the multiple dimensioned fast and effective video image fusion method for instructing filtering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘先红等 (Liu Xianhong et al.): "结合引导滤波和卷积稀疏表示的红外与可见光图像融合" (Infrared and visible image fusion combining guided filtering and convolutional sparse representation), 《光学精密工程》 (Optics and Precision Engineering) *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648302B (en) * 2019-10-08 2022-04-12 太原科技大学 Light field full-focus image fusion method based on edge enhancement guide filtering
CN110648302A (en) * 2019-10-08 2020-01-03 太原科技大学 Light field full-focus image fusion method based on edge enhancement guide filtering
CN111192229A (en) * 2020-01-02 2020-05-22 中国航空工业集团公司西安航空计算技术研究所 Airborne multi-mode video image enhancement display method and system
CN111192229B (en) * 2020-01-02 2023-10-13 中国航空工业集团公司西安航空计算技术研究所 Airborne multi-mode video picture enhancement display method and system
CN111223069A (en) * 2020-01-14 2020-06-02 天津工业大学 Image fusion method and system
CN111223069B (en) * 2020-01-14 2023-06-02 天津工业大学 Image fusion method and system
CN111598819A (en) * 2020-05-14 2020-08-28 易思维(杭州)科技有限公司 Self-adaptive image preprocessing method and application thereof
CN112184646A (en) * 2020-09-22 2021-01-05 西北工业大学 Image fusion method based on gradient domain oriented filtering and improved PCNN
CN112184646B (en) * 2020-09-22 2022-07-29 西北工业大学 Image fusion method based on gradient domain oriented filtering and improved PCNN
CN112837253A (en) * 2021-02-05 2021-05-25 中国人民解放军火箭军工程大学 Night infrared medium-long wave image fusion method and system
CN112884690B (en) * 2021-02-26 2023-01-06 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN112884690A (en) * 2021-02-26 2021-06-01 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN113421452A (en) * 2021-06-03 2021-09-21 上海大学 Open parking lot recommendation system based on visual analysis
CN114757912A (en) * 2022-04-15 2022-07-15 电子科技大学 Material damage detection method, system, terminal and medium based on image fusion
CN115115554A (en) * 2022-08-30 2022-09-27 腾讯科技(深圳)有限公司 Image processing method and device based on enhanced image and computer equipment
CN115115554B (en) * 2022-08-30 2022-11-04 腾讯科技(深圳)有限公司 Image processing method and device based on enhanced image and computer equipment

Also Published As

Publication number Publication date
CN109509164B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN109509164A (en) A kind of Multisensor Image Fusion Scheme and system based on GDGF
CN105550678B (en) Human action feature extracting method based on global prominent edge region
CN103455991B (en) A kind of multi-focus image fusing method
Zhang et al. Multi-focus image fusion algorithm based on focused region extraction
CN105957054B (en) A kind of image change detection method
CN107909560A (en) A kind of multi-focus image fusing method and system based on SiR
CN104036479B (en) Multi-focus image fusion method based on non-negative matrix factorization
CN109509163A (en) A kind of multi-focus image fusing method and system based on FGF
CN105321172A (en) SAR, infrared and visible light image fusion method
CN107392885A (en) A kind of method for detecting infrared puniness target of view-based access control model contrast mechanism
CN107452022A (en) A kind of video target tracking method
CN104616274A (en) Algorithm for fusing multi-focusing image based on salient region extraction
CN104915672B (en) A kind of Rectangle building extracting method and system based on high-resolution remote sensing image
CN105894513B (en) Take the remote sensing image variation detection method and system of imaged object change in time and space into account
CN104123554B (en) SIFT image characteristic extracting methods based on MMTD
CN101777181A (en) Ridgelet bi-frame system-based SAR image airfield runway extraction method
CN106023245A (en) Static background moving object detection method based on neutrosophy set similarity measurement
CN109919960A (en) A kind of image continuous boundary detection method based on Multiscale Gabor Filters device
CN108230282A (en) A kind of multi-focus image fusing method and system based on AGF
CN105512622B (en) A kind of visible remote sensing image sea land dividing method based on figure segmentation and supervised learning
CN104268833A (en) New image fusion method based on shift invariance shearlet transformation
CN101334834A (en) Bottom-up caution information extraction method
CN109063643A (en) A kind of facial expression pain degree recognition methods under the hidden conditional for facial information part
CN106682678A (en) Image angle point detection and classification method based on support domain
CN116403121A (en) Remote sensing image water area segmentation method, system and equipment for multi-path fusion of water index and polarization information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant