CN106803242A - Multi-focus image fusing method based on quaternion wavelet conversion - Google Patents

Multi-focus image fusing method based on quaternion wavelet conversion

Info

Publication number
CN106803242A
Authority
CN
China
Prior art keywords
coefficient
band
quaternion
subband
frequency sub-band
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611216386.5A
Other languages
Chinese (zh)
Inventor
罗晓清
张战成
郑雪妮
席新星
檀华廷
王骏
董静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University
Priority to CN201611216386.5A
Publication of CN106803242A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/10 - Image enhancement or restoration by non-spatial domain filtering
    • G06T 5/90
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20048 - Transform domain processing
    • G06T 2207/20064 - Wavelet transform [DWT]
    • G06T 2207/20172 - Image enhancement details
    • G06T 2207/20192 - Edge enhancement; Edge preservation
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a multi-focus image fusion method based on the quaternion wavelet transform, whose main purpose is to incorporate the information of the source images into the fused image more accurately. The implementation steps are: 1) perform a quaternion wavelet transform on the two multi-focus images to be fused, decomposing each image into one low-frequency subband (LL) and three high-frequency subbands (LH, HL, HH) per level, where each subband corresponds to four quaternion coefficient subbands that can be converted, by quaternion algebra, into one magnitude subband and three phase subbands; 2) fuse the low-frequency subbands with a choose-max rule based on an improved phase-gradient feature, and fuse the high-frequency subbands with a rule that combines multiple features through a quaternion matrix; 3) apply the inverse quaternion wavelet transform to the fused low- and high-frequency subband coefficients to obtain the fused image. The invention fully extracts the features of the source images and represents them accurately, thereby effectively preventing erroneous information from entering the fused image, improving the visual effect, and greatly improving the quality of the fused image compared with traditional fusion methods.

Description

Multi-focus image fusing method based on quaternion wavelet conversion
Technical field
The present invention relates to a multi-focus image fusion method based on the quaternion wavelet transform, belonging to the technical field of multi-focus image processing, and has wide application in fields such as machine vision, target recognition and digital cameras.
Background technology
Multi-focus image fusion is one of the research topics of image fusion. When a camera photographs two targets A and B located at different distances from the lens, the two targets usually cannot both be in focus at the same time. A common approach is to take two shots, one focused on target A and the other focused on target B, and then fuse the two images to obtain a single image in which both targets are sharp, which facilitates human observation and subsequent computer processing; this process is called multi-focus image fusion. Multi-focus image fusion can effectively improve the utilization of image information and the reliability of target detection and recognition. These advantages have made multi-focus image fusion technology increasingly widely applied in fields such as machine vision, digital cameras and target recognition.
In recent years, multi-scale transform tools for image fusion have been widely used. Classical multi-scale transforms such as the wavelet transform, the curvelet transform, the contourlet transform and the shearlet transform have all been successfully applied to image fusion. However, these transforms cannot capture the phase information of an image, which motivated the quaternion wavelet transform. The quaternion wavelet transform overcomes the lack of shift invariance of the real wavelet transform and the insufficient phase information of the complex wavelet transform: unlike real wavelets, quaternion wavelets provide approximate shift invariance together with rich phase information. The present invention therefore adopts the quaternion wavelet transform as its multi-scale transform tool.
The coefficients obtained by quaternion wavelet decomposition exhibit strong correlations across scales, between directions and within a spatial neighbourhood. The hidden Markov model (HMM) can accurately model the coefficients after multi-scale decomposition and describe these correlations. The contextual hidden Markov model (CHMM) uses context information to fully and effectively capture the persistence of the high-frequency directional subband coefficients across scales, their directional selectivity within a scale, and the energy clustering within a spatial neighbourhood. The present invention therefore uses a CHMM to statistically model the coefficients obtained by quaternion wavelet decomposition.
The quality of image fusion depends not only on the multi-scale decomposition tool; the fusion rule is equally important. Common low-frequency fusion rules include averaging, weighted averaging and choosing the coefficient with the larger absolute value. The present invention adopts a choose-max rule based on improved phase-gradient information, which better identifies the sharpness of the image. For the high-frequency part, choose-max rules based on the absolute value or on a single regional feature have often been used in the past, but such schemes cannot achieve a satisfactory fusion result. In order to fully capture the complementary information of the images to be fused and avoid introducing erroneous information, the present invention extracts multiple features of the coefficients, including phase information, contextual statistical information and the subband coefficient values, and combines them into one comprehensive feature that effectively and accurately characterizes the image. The proposed fusion rule based on combining multiple features with a quaternion matrix can thus improve fusion quality.
Content of the invention
The purpose of the invention is to solve the problem that existing multi-focus image fusion methods consider only a single feature and therefore easily introduce erroneous fusion information. A multi-focus image fusion method based on the quaternion wavelet transform is proposed, which makes full use of the phase information of the coefficients, effectively protects image details, enhances image contrast and sharpness, improves the visual effect, and improves the quality of the fused image.
The technical solution adopted by the present invention to solve this problem is as follows:
A multi-focus image fusion method based on the quaternion wavelet transform, characterized by comprising the following steps:
1) Perform a quaternion wavelet transform on the two multi-focus images to be fused. The decomposition yields low- and high-frequency subbands at different scales and in different directions; each subband corresponds to four coefficient subbands, which can be converted into one magnitude subband and three phase subbands;
2) Fuse the low-frequency subband coefficients and the high-frequency subband coefficients separately;
2.1) Fuse the low-frequency subband coefficients with a choose-max rule based on the improved phase-gradient feature;
2.2) Fuse the high-frequency subband coefficients with a fusion rule based on combining multiple features through a quaternion matrix;
a) Build the CHMM statistical model of the quaternion high-frequency subband coefficients, estimate the model parameters with the expectation-maximization (EM) algorithm to obtain the quaternion subband statistical model, and compute its marginal probability density function (marginal PDF), from which the local energy based on the marginal PDF is obtained as one feature. Then compute, for the high-frequency subbands, the phase-gradient feature, the directional standard deviation feature and the subband coefficient variance feature;
b) Combine the four features of step a) into one comprehensive feature using a quaternion matrix, and finally obtain the high-frequency fusion coefficients with a choose-max rule on the comprehensive feature value;
3) Apply the inverse quaternion wavelet transform to the fused low- and high-frequency subband coefficients obtained in step 2) to obtain the fused image; a minimal end-to-end sketch of these three steps is given below.
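For orientation, the following Python sketch outlines the three steps above. It is only a scaffold: qwt_decompose, fuse_lowpass_zml, fuse_highpass_subband and qwt_reconstruct are hypothetical helper names (no standard library provides them) standing in for the operations detailed in the rest of this description.

```python
def fuse_multifocus(img_a, img_b, levels=2):
    """Sketch of the three-step pipeline; all helpers are hypothetical
    placeholders for the operations defined later in this description."""
    # 1) Quaternion wavelet decomposition of both source images:
    #    `low` holds the LL coefficient subbands, `high[j][d]` those of
    #    direction d (LH, HL, HH) at level j.
    low_a, high_a = qwt_decompose(img_a, levels)
    low_b, high_b = qwt_decompose(img_b, levels)

    # 2.1) Low-frequency fusion: choose-max on the improved phase-gradient
    #      feature ZML (see the sketch after step 2.1).
    low_f = fuse_lowpass_zml(low_a, low_b)

    # 2.2) High-frequency fusion: choose-max on the comprehensive feature
    #      built from four features via a quaternion matrix.
    high_f = [[fuse_highpass_subband(sa, sb) for sa, sb in zip(la, lb)]
              for la, lb in zip(high_a, high_b)]

    # 3) Inverse quaternion wavelet transform of the fused coefficients.
    return qwt_reconstruct(low_f, high_f)
```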
As a preferred scheme of the described multi-focus image fusion method based on the quaternion wavelet transform, in step 1) the quaternion wavelet transform of the two multi-focus images to be fused yields one low-frequency subband (LL) and three high-frequency subbands (LH, HL, HH) per level, where each subband corresponds to four quaternion coefficient subbands that can be converted by quaternion algebra into one magnitude subband and three phase subbands (φ, θ, ψ). In the low-frequency subband, the three phase subbands carry the overall edge and texture information of the image, while the magnitude subband reflects its general appearance; in a high-frequency subband, the phases reflect the variation of the image along edge and texture directions, while the magnitude subband reflects the overall profile of the image in one direction. As a preferred scheme, step 2.1), fusing the low-frequency subband with a choose-max rule based on the improved phase-gradient feature (ZML), is as follows:
τ1(x, y) = ZML_A(x, y) > ZML_B(x, y)
where the low-frequency subband coefficients of the source images A and B and of the fused image F at position (x, y) enter the rule, and τ1(x, y) denotes the coefficient selection weight: the fused low-frequency coefficient is taken from the source image with the larger ZML value.
Because the φ phase and the θ phase of the low-frequency subband represent the texture of the image in the vertical and horizontal directions respectively, and gradient information reflects the texture variation of the image, we designed a new index ZML built from the average gradients of the two low-frequency phase subbands. Here k denotes source image A or B, and newAG_s^k(x, y) denotes the average gradient of phase s of the low-frequency subband of image k at position (x, y), computed from the coefficients of that phase subband over a window of size W1×W2 centred at (x, y), here W1 = W2 = 9.
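A minimal Python/NumPy sketch of this low-frequency rule follows. Because the formula images are not reproduced in this text, the ZML index is taken, as an assumption, to be the sum of the window-averaged gradient magnitudes of the φ and θ phase subbands of the LL band; the dictionary layout of the subbands is likewise assumed.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def average_gradient(phase, win=9):
    """Window-averaged gradient magnitude newAG of one phase subband."""
    gy, gx = np.gradient(phase.astype(float))
    return uniform_filter(np.sqrt(gx**2 + gy**2), size=win)

def fuse_lowpass_zml(low_a, low_b, win=9):
    """Choose-max low-frequency fusion on an assumed ZML index.

    low_a / low_b: dicts holding the LL magnitude subband 'mag' and the
    phase subbands 'phi' and 'theta' (assumed layout)."""
    zml_a = average_gradient(low_a['phi'], win) + average_gradient(low_a['theta'], win)
    zml_b = average_gradient(low_b['phi'], win) + average_gradient(low_b['theta'], win)
    tau = zml_a > zml_b                     # selection weight tau1(x, y)
    # Take every LL component from the image with the larger ZML value.
    return {k: np.where(tau, low_a[k], low_b[k]) for k in low_a}
```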
As a preferred scheme, step a) of step 2.2) is specifically as follows:
1) Feature based on the phase gradient
The high-frequency subbands mainly contain the texture information of the image. In the quaternion wavelet transform, the φ phase subband of the HL direction and the θ phase subband of the LH direction represent the vertical and horizontal texture information respectively. Gradient features can contrast texture information in different directions well and reflect the sharpness of the current pixel. To highlight the texture features comprehensively, we propose a feature based on the gradients of these phase subbands, combining the following two quantities:
newAG_h1^k, the gradient value at position (x, y) of the φ phase of the HL direction, and newAG_h2^k, the gradient value at position (x, y) of the θ phase of the LH direction; both are computed in the same way as the average gradient of the low-frequency subband.
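A short sketch of this high-frequency gradient feature, assuming (the combining formula is not reproduced in this text) that the two directional gradient maps are simply summed:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def phase_gradient_feature(phi_hl, theta_lh, win=9):
    """Combine the window-averaged gradients of the HL phi-phase and the
    LH theta-phase subbands into one texture/sharpness feature P1."""
    def new_ag(phase):
        gy, gx = np.gradient(phase.astype(float))
        return uniform_filter(np.sqrt(gx**2 + gy**2), size=win)
    return new_ag(phi_hl) + new_ag(theta_lh)   # assumed combination
```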
2) Region energy
a) First, three important relations between quaternion wavelet decomposition coefficients are defined: (i) the current quaternion wavelet coefficient X and the 8 quaternion wavelet coefficients at adjacent positions in the same directional subband at the same scale are neighbour coefficients (NX), representing spatial correlation; (ii) the current coefficient X and the quaternion wavelet coefficient at the corresponding spatial position in the corresponding directional subband at the adjacent coarser scale are in a parent-child relation (PX), representing correlation across scales; (iii) the current coefficient X and the group of quaternion wavelet coefficients at the same spatial position in the other directional subbands at the same scale are cousin coefficients (CX), representing correlation between directions.
Secondly, the context variable value is computed from the correlations between the quaternion wavelet coefficients. A two-state, zero-mean Gaussian mixture model (GMM) characterizes the non-Gaussian distribution of the high-frequency directional subband coefficients; each coefficient is then associated with a context variable and a hidden state, CHMM statistical modelling is carried out, and the marginal probability density function (marginal PDF) can be computed. In the model, x, y are the spatial location indices, V_{x,y} is the context variable, S_{x,y} is the hidden state variable, p(S_{x,y} = m) is the probability that the state is m, p(S_{x,y} = m | V_{x,y} = v) is the probability that the state is m given the context value v, and the Gaussian conditional probability density has zero mean and standard deviation σ_m. Finally, the model parameters are estimated with an optimized expectation-maximization (EM) algorithm in two stages, initialization and iterative training.
In the computation of the context value, ω_h (h = 0, 1, 2, 3) are the weights of the information related to the current coefficient, NA_t denotes the directly adjacent neighbour coefficients, NB_t the diagonal neighbour coefficients, PX the parent coefficient, and CX_1 and CX_2 the left and right cousin coefficients respectively.
The corresponding context variable is obtained from the average energies E_N, E_P, E_{CX1} and E_{CX2} of the current subband coefficients, the parent subband coefficients and the left and right cousin subband coefficients respectively; each average energy is computed over the subband, where M and N denote the number of rows and columns of the subband.
b) The correlation of the coefficients is obtained by computing the marginal probability density function, which improves the reliability of feature extraction. The local energy is computed from the marginal PDF of the coefficient subband at scale j and direction r over a local region of size W1×W2, here W1 = W2 = 3.
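The sketch below illustrates the context computation and a local energy derived from the marginal PDF. The neighbour weights ω_h, the parent upsampling and the PDF-weighted form of the local energy are assumptions made for illustration; the state probabilities and standard deviations are taken as given (see the EM sketch in the embodiment below).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def context_value(coef, parent, cousin1, cousin2, w=(0.4, 0.2, 0.2, 0.2)):
    """Assumed weighted combination of squared related coefficients:
    direct neighbours NA, diagonal neighbours NB, parent PX, cousins CX."""
    c2 = coef.astype(float) ** 2
    pad = np.pad(c2, 1, mode='edge')
    na = pad[1:-1, :-2] + pad[1:-1, 2:] + pad[:-2, 1:-1] + pad[2:, 1:-1]   # direct neighbours
    nb = pad[:-2, :-2] + pad[:-2, 2:] + pad[2:, :-2] + pad[2:, 2:]         # diagonal neighbours
    px = np.kron(parent.astype(float) ** 2,
                 np.ones((2, 2)))[:c2.shape[0], :c2.shape[1]]              # parent, upsampled
    cx = cousin1.astype(float) ** 2 + cousin2.astype(float) ** 2           # cousins
    return w[0] * na + w[1] * nb + w[2] * px + w[3] * cx

def marginal_pdf(coef, p_state_given_v, sigmas):
    """Marginal PDF of a two-state zero-mean GMM:
    sum_m p(S=m | V=v) * g(c; 0, sigma_m)."""
    c = coef.astype(float)
    pdf = np.zeros_like(c)
    for m, s in enumerate(sigmas):
        g = np.exp(-c**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
        pdf += p_state_given_v[m] * g
    return pdf

def local_energy(coef, pdf, win=3):
    """Assumed PDF-weighted local energy over a win x win region."""
    return uniform_filter(pdf * coef.astype(float) ** 2, size=win)
```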
3) Directional standard deviation
In the quaternion wavelet transform domain, both structural information and noise correspond to coefficients of large magnitude, but the structural information of the image is distributed along only a few directions, whereas the energy of noise is spread over all directions. To avoid bringing noise into the fused image, we propose a feature based on the standard deviation of the directional subband coefficients.
In its computation, M(x, y) denotes the mean of the coefficients at position (x, y) over the directional subbands at the same scale, Dir = 3, r runs from 1 to 3 over the three directions HL, LH and HH, and C_{j,r}(x, y) denotes the coefficient at position (x, y) of the subband at scale j and direction r.
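A direct reading of these definitions gives the following sketch of the directional standard deviation:

```python
import numpy as np

def directional_std(hl, lh, hh):
    """Standard deviation of the coefficients at each (x, y) across the
    Dir = 3 directional subbands HL, LH, HH of one scale."""
    stack = np.stack([hl, lh, hh]).astype(float)   # shape (3, H, W)
    m = stack.mean(axis=0)                         # M(x, y)
    return np.sqrt(((stack - m) ** 2).mean(axis=0))
```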
4) Subband coefficient variance
The variance of the subband coefficients reflects their distribution well and therefore characterizes the image features more accurately.
In its computation, C_{j,r} denotes the high-frequency coefficient subband at scale j and direction r, the local mean is taken over the current region, and W1×W2 is the region size, here W1 = W2 = 3.
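A sketch of the local subband coefficient variance over a W1×W2 = 3×3 window:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(coef, win=3):
    """Var(x, y): variance of the subband coefficients in a win x win
    window centred at (x, y)."""
    c = coef.astype(float)
    mean = uniform_filter(c, size=win)              # local mean
    return uniform_filter(c ** 2, size=win) - mean ** 2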
As a preferred scheme, step b) of step 2.2) is specifically as follows:
Effectively combining multiple features helps prevent erroneous information from being introduced into the fused image. The multiple features are combined into one comprehensive feature through a quaternion matrix, which can be expressed as Qq = P1 + iP2 + jP3 + kP4 and rewritten as Qq = (P1 + iP2) + (P3 + iP4)j, where P1, P2, P3 and P4 correspond respectively to the phase-gradient feature, the region energy feature, the directional standard deviation feature and the subband coefficient variance feature; the quaternion matrix is thus expressed as Qq = A^(c) + B^(c)j, and the comprehensive feature ZF is computed from it.
Finally, the high-frequency subband coefficients are fused with a choose-max rule on the comprehensive feature value:
τ1(x, y) = ZF_A(x, y) > ZF_B(x, y)
where the high-frequency subband coefficients of source images A and B at position (x, y) in direction r of level j enter the rule, the fused high-frequency subband coefficient is taken from the source image with the larger comprehensive feature value, and τ1(x, y) denotes the selection weight of the comprehensive feature value.
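The comprehensive-feature formula itself is not reproduced in this text; the sketch below assumes, for illustration, that ZF is the element-wise modulus of Qq = P1 + iP2 + jP3 + kP4, and applies the choose-max rule to one high-frequency coefficient subband.

```python
import numpy as np

def comprehensive_feature(p1, p2, p3, p4):
    """Assumed comprehensive feature: modulus of Qq = P1 + iP2 + jP3 + kP4."""
    return np.sqrt(p1**2 + p2**2 + p3**2 + p4**2)

def fuse_highpass(coef_a, coef_b, feats_a, feats_b):
    """Choose-max on the comprehensive feature value ZF.
    feats_a / feats_b are the four feature maps (P1..P4) of each image."""
    zf_a = comprehensive_feature(*feats_a)
    zf_b = comprehensive_feature(*feats_b)
    tau = zf_a > zf_b                               # tau1(x, y)
    return np.where(tau, coef_a, coef_b)
```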
Compared with existing multi-focus image fusion methods, the present invention has the following advantages:
1. The invention uses an image fusion method based on the quaternion wavelet transform. Compared with the wavelet transform, the quaternion wavelet transform can capture the phase information of the coefficients, compensates for the lack of shift invariance of the real wavelet, and overcomes the lack of phase information of the complex wavelet transform. Unlike real wavelets, quaternion wavelets possess approximate shift invariance together with rich phase information, and are therefore better suited to handling the singularities of an image, yielding a fused image with richer information, higher sharpness and better quality.
2. For the low-frequency subband coefficients, the multi-focus image fusion method of the invention uses a choose-max rule based on the improved phase-gradient feature, which effectively identifies the sharpness of the coefficients and improves the overall visual effect. For the high-frequency directional subband coefficients, a contextual hidden Markov model fully captures the statistical correlations of the quaternion wavelet coefficients across scales, between directions and within a spatial neighbourhood, and the local energy based on the marginal probability density function (marginal PDF) is taken as one feature of the high-frequency subband coefficients; the phase-gradient feature, the directional standard deviation feature and the subband coefficient variance feature are then obtained, the four features are combined with a quaternion matrix, and a choose-max rule on the comprehensive feature is applied. This approach effectively extracts the information of the source images and avoids introducing erroneous information, thereby increasing the fusion quality and the visual effect of the image.
Brief description of the drawings
Fig. 1 is the flow chart of the multi-focus image fusion method based on the quaternion wavelet transform according to the present invention.
Fig. 2 is a schematic diagram of the parent-child, nearest-neighbour and cousin relations of the quaternion wavelet domain coefficients according to the present invention.
Fig. 3 is the structure diagram of the quaternion wavelet domain contextual hidden Markov model (Q-CHMM) according to the present invention.
Fig. 4(a) is the left-focused image of one embodiment of the invention.
Fig. 4(b) is the right-focused image of one embodiment of the invention.
Figs. 4(c)-(j) are schematic diagrams of the fusion results of one embodiment of the invention.
Figs. 5(a)-(i) are detail enlargements of Figs. 4(b)-(j).
In the figures: (c) is the fused image based on weighted averaging; (d) is the fused image based on the gradient pyramid transform; (e) is the fused image based on the stationary wavelet and the non-subsampled contourlet transform; (f) is the fused image based on tensor unfolding; (g) is the fused image based on the structure tensor and the wavelet transform; (h) is the fused image based on the spatial-frequency-driven pulse coupled neural network; (i) is the fused image based on the spatial-frequency-driven pulse coupled neural network and the shearlet transform; (j) is the fused image of the method of the present invention.
Specific embodiment
An embodiment of the present invention (the "barbara" multi-focus image pair) is described in detail below with reference to the accompanying drawings. This embodiment is carried out on the premise of the technical solution of the present invention; as shown in Fig. 1, the detailed implementation and the specific operating steps are as follows:
Step 1: perform a 2-scale quaternion wavelet transform on the two multi-focus images to be fused. The decomposition yields one low-frequency subband (LL) and three directional high-frequency subbands (LH, HL, HH) per level; each subband corresponds to four coefficient subbands, which can be converted into one magnitude subband and three phase subbands, as in the sketch below.
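The conversion formulas are not reproduced in this text; the sketch below computes the quaternion magnitude and three phase angles (φ, θ, ψ) using one common parameterization from the quaternion wavelet literature, which is an assumption and should be checked against the convention actually intended here.

```python
import numpy as np

def quaternion_to_magnitude_phase(q0, q1, q2, q3, eps=1e-12):
    """Convert the four coefficient subbands of one QWT subband into one
    magnitude subband and three phase subbands (phi, theta, psi).
    The phase formulas follow one common QWT convention and are an
    assumption, not a quotation of the patent's formulas."""
    mag = np.sqrt(q0**2 + q1**2 + q2**2 + q3**2)
    n = mag + eps
    a, b, c, d = q0 / n, q1 / n, q2 / n, q3 / n    # unit quaternion parts
    phi = 0.5 * np.arctan2(2 * (a * b + c * d), a**2 - b**2 + c**2 - d**2)
    theta = 0.5 * np.arctan2(2 * (a * c + b * d), a**2 + b**2 - c**2 - d**2)
    psi = -0.5 * np.arcsin(np.clip(2 * (b * c - a * d), -1.0, 1.0))
    return mag, phi, theta, psi
```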
Step 2: fuse the high-frequency subband coefficients and the low-frequency subband coefficients with different fusion rules.
1) Fuse the low-frequency subband coefficients with a choose-max rule based on the improved gradient feature (ZML):
τ1(x, y) = ZML_A(x, y) > ZML_B(x, y)
where the low-frequency subband coefficients of the source images A and B and of the fused image F at position (x, y) enter the rule, and τ1(x, y) denotes the coefficient selection weight: the fused low-frequency coefficient is taken from the source image with the larger ZML value.
Because the φ phase and the θ phase of the low-frequency subband represent the texture of the image in the vertical and horizontal directions respectively, and gradient information reflects the texture variation of the image, the new index ZML combines the average gradients of the two low-frequency phase subbands. Here k denotes source image A or B, and newAG_s^k(x, y) is the average gradient at position (x, y) of phase s of the low-frequency subband, computed from the coefficients of that phase subband of source image k over a window of size W1×W2, here W1 = W2 = 9.
For the high-frequency directional subbands, the fusion method based on combining multiple features through a quaternion matrix is used.
2.1) Build four features from the high-frequency subband coefficients (the phase-gradient feature, the region energy, the directional standard deviation and the subband coefficient variance).
a) Feature based on the phase gradient
The high-frequency subbands mainly contain the texture information of the image. In the quaternion wavelet transform, the φ phase of the HL direction and the θ phase of the LH direction represent the vertical and horizontal texture information respectively. The gradient can contrast texture information in different directions well and reflect the sharpness of the current pixel. To highlight the texture features comprehensively, a module based on the phase gradient is proposed.
In its computation, newAG_h1^k(x, y) denotes the gradient value at point (x, y) of the φ phase of the HL direction, and newAG_h2^k(x, y) denotes the average gradient value at point (x, y) of the θ phase of the LH direction.
b) Region energy
First, three important relations between quaternion wavelet decomposition coefficients are defined: (i) the current quaternion wavelet coefficient X and the 8 quaternion wavelet coefficients at adjacent positions in the same directional subband at the same scale are neighbour coefficients (NX), representing spatial correlation; (ii) the current coefficient X and the quaternion wavelet coefficient at the corresponding spatial position in the corresponding directional subband at the adjacent coarser scale are in a parent-child relation (PX), representing correlation across scales; (iii) the current coefficient X and the group of quaternion wavelet coefficients at the same spatial position in the other directional subbands at the same scale are cousin coefficients (CX), representing correlation between directions.
Secondly, the context variable value is computed from the correlations between the quaternion wavelet coefficients. A two-state, zero-mean Gaussian mixture model (GMM) characterizes the non-Gaussian distribution of the high-frequency directional subband coefficients; each coefficient is associated with a context variable and a hidden state, CHMM statistical modelling is carried out, and the marginal probability density function (marginal PDF) is computed. Here x, y are the spatial location indices, V_{x,y} is the context variable, S_{x,y} is the hidden state variable, p(S_{x,y} = m) is the probability that the state is m, p(S_{x,y} = m | V_{x,y} = v) is the probability that the state is m given the context value v, and the Gaussian conditional probability density has zero mean and standard deviation σ_m. The model parameters are estimated with an optimized expectation-maximization (EM) algorithm in two stages, initialization and iterative training (a simplified sketch of this estimation is given after this region-energy description).
In the computation of the context value, ω_h (h = 0, 1, 2, 3) are the weights of the information related to the current coefficient, NA_t denotes the directly adjacent neighbour coefficients, NB_t the diagonal neighbour coefficients, PX the parent coefficient, and CX_1 and CX_2 the left and right cousin coefficients respectively.
The corresponding context variable is obtained from the average energies E_N, E_P, E_{CX1} and E_{CX2} of the current subband coefficients, the parent subband coefficients and the left and right cousin subband coefficients, where each average energy is computed over the subband and M and N denote the number of rows and columns of the subband.
Finally, the correlation of the coefficients is obtained by computing the marginal probability density function, which improves the reliability of feature extraction; the local energy is computed over a region of size W1×W2, here W1 = W2 = 3.
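As a simplified stand-in for the context-conditioned EM training mentioned above, the following sketch fits a plain two-state zero-mean Gaussian mixture to the coefficients of one high-frequency subband; the context conditioning of the CHMM is omitted for brevity.

```python
import numpy as np

def fit_two_state_gmm(coef, iters=30):
    """EM for a two-state zero-mean GMM on subband coefficients.
    Returns state probabilities p(S=m) and standard deviations sigma_m."""
    c = coef.astype(float).ravel()
    p = np.array([0.5, 0.5])                          # initial state probabilities
    sigma = np.array([0.5, 2.0]) * (c.std() + 1e-12)  # small/large-variance states
    for _ in range(iters):
        # E-step: responsibility of each state for each coefficient.
        lik = np.stack([p[m] * np.exp(-c**2 / (2 * sigma[m]**2)) /
                        (np.sqrt(2 * np.pi) * sigma[m]) for m in range(2)])
        resp = lik / (lik.sum(axis=0, keepdims=True) + 1e-300)
        # M-step: update state probabilities and zero-mean variances.
        p = resp.mean(axis=1)
        sigma = np.maximum(np.sqrt((resp * c**2).sum(axis=1) /
                                   (resp.sum(axis=1) + 1e-300)), 1e-12)
    return p, sigma
```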
c) Directional standard deviation
In the quaternion wavelet transform domain, both structural information and noise correspond to coefficients of large magnitude, but the structural information of the image is distributed along only a few directions, whereas the energy of noise is spread over all directions. To avoid bringing noise into the fused image, the invention proposes a module based on the directional standard deviation.
In its computation, M(x, y) denotes the mean of the coefficients at position (x, y) over the directional subbands at the same scale, Dir = 3, r runs from 1 to 3 over the three high-frequency directions HL, LH and HH, and C_{j,r}(x, y) denotes the coefficient at position (x, y) in direction r of level j.
d) Subband coefficient variance
The variance of the subband coefficients reflects their distribution well and therefore characterizes the image features more accurately.
In its computation, C_j denotes the high-frequency subband at scale j, the local mean is taken over the current region, W1×W2 is the region size, here W1 = W2 = 3, and Var(x, y) denotes the variance of the subband coefficients at position (x, y).
2.2) Combine the four features of 2.1) into one feature using a quaternion matrix, and obtain the high-frequency fusion coefficients with a choose-max rule on the feature value.
Effectively combining multiple features helps prevent erroneous information from being introduced into the fused image. The multiple features are combined into one comprehensive feature through a quaternion matrix, expressed as Qq = P1 + iP2 + jP3 + kP4, which can also be written as Qq = (P1 + iP2) + (P3 + iP4)j, where P1, P2, P3 and P4 correspond respectively to the gradient-based feature, the region energy feature, the directional standard deviation feature and the subband coefficient variance feature of 2.1); the quaternion matrix is expressed as Qq = A^(c) + B^(c)j, and the comprehensive feature ZF is computed from it.
Finally, the high-frequency fusion rule is:
τ1(x, y) = ZF_A(x, y) > ZF_B(x, y)
where the high-frequency subband coefficients of source images A and B at position (x, y) in direction r of level j enter the rule, the fused high-frequency subband coefficient is taken from the source image with the larger comprehensive feature value, and τ1(x, y) denotes the selection weight of the comprehensive feature value.
Step 3: apply the inverse quaternion wavelet transform to the fused low- and high-frequency subband coefficients to obtain the fused image.
Emulation experiment
To verify the feasibility and effectiveness of the invention, a fusion experiment was carried out with the method of the invention on the "barbara" multi-focus image pair of size 256 × 256, shown in Figs. 4(a) and 4(b).
In summary, comparing the fusion results of Fig. 4 and their enlargements in Fig. 5 shows that: the fused images obtained by the gradient pyramid method and by the method based on the stationary wavelet and the non-subsampled contourlet transform have low contrast; the fused images obtained by the weighted averaging method, the pulse-coupled method, the shearlet with pulse-coupling method and the tensor unfolding method are all relatively blurred; and the enlargement of the fused image obtained by the structure-tensor method clearly shows that erroneous information is introduced in its right-hand part. The fused image obtained by the method of the invention effectively fuses the sharp parts of the two images to be fused, preserves features such as edges and details of the images, and effectively prevents the introduction of erroneous information; its contrast and sharpness are therefore higher, its details are more prominent and its subjective visual effect is better, i.e. the fusion result is better.
Table 1 gives the objective evaluation indices of the fusion results obtained with the various fusion methods; bold data indicate that the corresponding multi-focus image fusion method achieves the best value of that index.
Table 1. Fusion performance comparison of the various fusion methods (Figs. 4(a), 4(b))
Table 1 measures the quality of the fused images by the standard deviation, entropy, clarity, average gradient, edge intensity and mutual information, thereby verifying the feasibility and effectiveness of the fusion method of the invention. In the table, fusion method 1 is the weighted averaging method, fusion method 2 is based on the gradient pyramid transform, fusion method 3 is based on the stationary wavelet and the non-subsampled contourlet transform, fusion method 4 is based on tensor unfolding, fusion method 5 is based on the structure tensor and the wavelet transform, fusion method 6 is based on the spatial-frequency-driven pulse coupled neural network, and fusion method 7 is based on the spatial-frequency-driven pulse coupled neural network and the shearlet transform.
The data in Table 1 show that the fused image obtained by the method of the invention is superior to the other fusion methods in the objective evaluation indices of standard deviation, entropy, clarity, average gradient, edge intensity and mutual information. The standard deviation reflects the dispersion of the image grey levels relative to the grey mean: the larger its value, the more dispersed the grey levels, the higher the image contrast and the more information can be seen. The entropy reflects the amount of information carried by the image: a larger entropy means more information and a better fusion result. The clarity reflects the ability of the image to express the contrast of small details: the higher the clarity, the better the fusion result. The edge intensity measures the richness of the edge details of the image: the larger its value, the sharper the edges of the fused image and the better the result. The average gradient reflects the legibility of the image: the larger its value, the better the fusion result. The mutual information measures the correlation of the grey-level distributions of two grey-scale images: the larger its value, the better the fusion result.
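The patent does not spell out the formulas of these indices; the sketch below computes the standard deviation, entropy, average gradient and mutual information for 8-bit grey images using their common definitions, which is an assumption.

```python
import numpy as np

def std_entropy_gradient(img):
    """Standard deviation, entropy and average gradient of a grey image."""
    f = img.astype(float)
    std = f.std()
    hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    entropy = -(hist * np.log2(hist)).sum()
    gy, gx = np.gradient(f)
    avg_grad = np.sqrt((gx**2 + gy**2) / 2).mean()
    return std, entropy, avg_grad

def mutual_information(src, fused, bins=256):
    """Mutual information between a source image and the fused image."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return (pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum()
```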

Claims (6)

1. A multi-focus image fusion method based on the quaternion wavelet transform, characterized in that: a quaternion wavelet transform is first performed on the multi-focus images to be fused, yielding one low-frequency subband (LL) and three high-frequency subbands (LH, HL, HH) per level, where each subband corresponds to four quaternion coefficient subbands that can be converted by quaternion algebra into one magnitude subband and three phase subbands; the low-frequency subband coefficients are fused with a choose-max rule based on an improved phase-gradient feature; the high-frequency subband coefficients are fused with a method based on combining multiple features through a quaternion matrix; and finally the inverse quaternion wavelet transform is applied to the fused high- and low-frequency subband coefficients to obtain the final fused image.
2. The multi-focus image fusion method based on the quaternion wavelet transform according to claim 1, characterized by comprising the following steps:
1) Perform a quaternion wavelet transform on the two multi-focus images to be fused. The decomposition yields low- and high-frequency subbands at different scales and in different directions; each subband corresponds to four coefficient subbands, which can be converted into one magnitude subband and three phase subbands;
2) Fuse the low-frequency subband coefficients and the high-frequency subband coefficients separately;
2.1) Fuse the low-frequency subband coefficients with a choose-max rule based on the improved phase-gradient feature;
2.2) Fuse the high-frequency subband coefficients with a fusion rule based on combining multiple features through a quaternion matrix;
a) Build the CHMM statistical model of the quaternion high-frequency subband coefficients, estimate the model parameters with the expectation-maximization (EM) algorithm to obtain the quaternion subband statistical model, and compute its marginal probability density function (marginal PDF), from which the local energy based on the marginal PDF is obtained as one feature; then compute, for the high-frequency subbands, the phase-gradient feature, the directional standard deviation feature and the subband coefficient variance feature;
b) Combine the four features of step a) into one comprehensive feature using a quaternion matrix, and finally obtain the high-frequency fusion coefficients with a choose-max rule on the comprehensive feature value;
3) Apply the inverse quaternion wavelet transform to the fused low- and high-frequency subband coefficients obtained in step 2) to obtain the fused image.
3. The multi-focus image fusion method based on the quaternion wavelet transform according to claim 2, characterized in that: in step 1), the quaternion wavelet transform of the two multi-focus images to be fused yields one low-frequency subband (LL) and three high-frequency subbands (LH, HL, HH) per level, where each subband corresponds to four quaternion coefficient subbands that can be converted by quaternion algebra into one magnitude subband and three phase subbands; in the low-frequency subband, the three phase subbands carry the overall edge and texture information of the image and the magnitude subband reflects its general appearance; in a high-frequency subband, the phases reflect the variation of the image along edge and texture directions and the magnitude subband reflects the overall profile of the image in one direction.
4. The multi-focus image fusion method based on the quaternion wavelet transform according to claim 2, characterized in that: in step 2.1) the low-frequency subband is fused with a choose-max rule based on the improved phase-gradient feature (ZML), specifically as follows:
τ1(x, y) = ZML_A(x, y) > ZML_B(x, y)
where the low-frequency subband coefficients of the source images A and B and of the fused image F at position (x, y) enter the rule and τ1(x, y) denotes the coefficient selection weight: the fused low-frequency coefficient is taken from the source image with the larger ZML value;
because the φ phase and the θ phase of the low-frequency subband represent the texture of the image in the vertical and horizontal directions respectively, and gradient information reflects the texture variation of the image, the new index ZML combines the average gradients of the two low-frequency phase subbands, where k denotes source image A or B and newAG_s^k(x, y) denotes the average gradient of phase s of the low-frequency subband at position (x, y), computed from the coefficients of phase s of the low-frequency subband of source image k over a window of size W1×W2, here W1 = W2 = 9.
5. The multi-focus image fusion method based on the quaternion wavelet transform according to claim 2, characterized in that step a) of step 2.2) is specifically as follows:
1) Feature based on the phase gradient
The high-frequency subbands mainly contain the texture information of the image; in the quaternion wavelet transform, the φ phase subband of the HL direction and the θ phase subband of the LH direction represent the vertical and horizontal texture information respectively; gradient features can contrast texture information in different directions well and reflect the sharpness of the current pixel; to highlight the texture features comprehensively, a coefficient feature based on the phase gradient is proposed, in which newAG_h1^k(x, y) denotes the gradient value at position (x, y) of the φ phase subband of the HL direction and newAG_h2^k denotes the gradient value at position (x, y) of the θ phase subband of the LH direction.
2) Region energy
a) First, three important relations between quaternion wavelet decomposition coefficients are defined: (i) the current quaternion wavelet coefficient X and the 8 quaternion wavelet coefficients at adjacent positions in the same directional subband at the same scale are neighbour coefficients (NX), representing spatial correlation; (ii) the current coefficient X and the quaternion wavelet coefficient at the corresponding spatial position in the corresponding directional subband at the adjacent coarser scale are in a parent-child relation (PX), representing correlation across scales; (iii) the current coefficient X and the group of quaternion wavelet coefficients at the same spatial position in the other directional subbands at the same scale are cousin coefficients (CX), representing correlation between directions;
secondly, the context variable value is computed from the correlations between the quaternion wavelet coefficients; a two-state, zero-mean Gaussian mixture model (GMM) characterizes the non-Gaussian distribution of the high-frequency directional subband coefficients; each coefficient is associated with a context variable and a hidden state, CHMM statistical modelling is carried out, and the marginal probability density function (marginal PDF) is computed, where x, y are the spatial location indices, V_{x,y} is the context variable, S_{x,y} is the hidden state variable, p(S_{x,y} = m) is the probability that the state is m, p(S_{x,y} = m | V_{x,y} = v) is the probability that the state is m given the context value v, and the Gaussian conditional probability density has zero mean and standard deviation σ_m; finally, the model parameters are estimated with an optimized expectation-maximization (EM) algorithm in two stages, initialization and iterative training.
In the computation of the context value, ω_h (h = 0, 1, 2, 3) are the weights of the information related to the current coefficient, NA_t denotes the directly adjacent neighbour coefficients, NB_t the diagonal neighbour coefficients, PX the parent coefficient, and CX_1 and CX_2 the left and right cousin coefficients respectively.
The corresponding context variable is obtained from the average energies E_N, E_P, E_{CX1} and E_{CX2} of the current subband coefficients, the parent subband coefficients and the left and right cousin subband coefficients, where each average energy is computed over the subband and M and N denote the number of rows and columns of the subband.
b) The correlation of the coefficients is obtained by computing the marginal probability density function, which improves the reliability of feature extraction; the local energy is computed from the marginal probability density function of the coefficient subband at scale j and direction r over a region of size W1×W2, here W1 = W2 = 3.
3) Directional standard deviation
In the quaternion wavelet transform domain, both structural information and noise correspond to coefficients of large magnitude, but the structural information of the image is distributed along only a few directions, whereas the energy of noise is spread over all directions; to avoid bringing noise into the fused image, a coefficient feature based on the directional standard deviation is proposed, in which M(x, y) denotes the mean of the coefficients at position (x, y) over the directional subbands at the same scale, Dir = 3, r runs from 1 to 3 over the three high-frequency directions HL, LH and HH, and C_{j,r}(x, y) denotes the coefficient at position (x, y) in direction r of level j.
4) Subband coefficient variance
The variance of the subband coefficients reflects their distribution well and therefore characterizes the image features more accurately; in its computation, C_{j,r} denotes the high-frequency coefficient subband at scale j and direction r, the local mean is taken over the current region, W1×W2 is the region size, here W1 = W2 = 3, and Var(x, y) denotes the variance of the subband coefficients at position (x, y).
6. The multi-focus image fusion method based on the quaternion wavelet transform according to claim 1 or 2, characterized in that step b) of step 2.2) is specifically as follows:
Effectively combining multiple features helps prevent erroneous information from being introduced into the fused image; the multiple features are combined into one comprehensive feature through a quaternion matrix, which can be expressed as Qq = P1 + iP2 + jP3 + kP4 and rewritten as Qq = (P1 + iP2) + (P3 + iP4)j, where P1, P2, P3 and P4 correspond respectively to the phase-gradient feature, the region energy feature, the directional standard deviation feature and the subband coefficient variance feature; the quaternion matrix is expressed as Qq = A^(c) + B^(c)j, and the comprehensive feature ZF is computed from it;
finally, the high-frequency subband coefficients are fused with a choose-max rule based on the combined features:
τ1(x, y) = ZF_A(x, y) > ZF_B(x, y)
where the high-frequency subband coefficients of source images A and B at position (x, y) in direction r of level j enter the rule, the fused high-frequency subband coefficient is taken from the source image with the larger comprehensive feature value, and τ1(x, y) denotes the selection weight based on the comprehensive feature value.
CN201611216386.5A 2016-12-26 2016-12-26 Multi-focus image fusing method based on quaternion wavelet conversion Pending CN106803242A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611216386.5A CN106803242A (en) 2016-12-26 2016-12-26 Multi-focus image fusing method based on quaternion wavelet conversion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611216386.5A CN106803242A (en) 2016-12-26 2016-12-26 Multi-focus image fusing method based on quaternion wavelet conversion

Publications (1)

Publication Number Publication Date
CN106803242A true CN106803242A (en) 2017-06-06

Family

ID=58985038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611216386.5A Pending CN106803242A (en) 2016-12-26 2016-12-26 Multi-focus image fusing method based on quaternion wavelet conversion

Country Status (1)

Country Link
CN (1) CN106803242A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106647A (en) * 2013-03-06 2013-05-15 哈尔滨工业大学 Multi-focal-point image fusion method based on quaternion wavelet and region segmentation
CN103106647B (en) * 2013-03-06 2015-08-19 哈尔滨工业大学 Based on the Multi-focal-point image fusion method of quaternion wavelet and region segmentation
CN104008537A (en) * 2013-11-04 2014-08-27 无锡金帆钻凿设备股份有限公司 Novel noise image fusion method based on CS-CT-CHMM
CN103985105A (en) * 2014-02-20 2014-08-13 江南大学 Contourlet domain multi-modal medical image fusion method based on statistical modeling
CN105118057A (en) * 2015-08-18 2015-12-02 江南大学 Image sharpness evaluation method based on quaternion wavelet transform amplitudes and phase positions

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHANG Lihong (常莉红): "An image fusion method based on quaternion wavelet transform", Journal of Baoji University of Arts and Sciences (Natural Science Edition) *
YIN Ming (殷明) et al.: "Multi-focus image fusion based on quaternion wavelet transform", The 6th National Conference on Geometric Design and Computing *
WANG Zhiwen (王治文) et al.: "Sharpness evaluation based on quaternion wavelet transform", Journal of Computer Applications *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133938B (en) * 2016-08-31 2019-08-13 电子科技大学 Robust image fusion method based on wavelet coefficient statistical model
CN107133938A (en) * 2016-08-31 2017-09-05 电子科技大学 Robust image fusion method based on wavelet coefficient statistical model
CN108564559A (en) * 2018-03-14 2018-09-21 北京理工大学 A kind of multi-focus image fusing method based on two scale focused views
CN108564559B (en) * 2018-03-14 2021-07-20 北京理工大学 Multi-focus image fusion method based on two-scale focus image
CN108960041B (en) * 2018-05-17 2020-11-27 首都师范大学 Image feature extraction method and device
CN108960041A (en) * 2018-05-17 2018-12-07 首都师范大学 Image characteristic extracting method and device
CN109345493A (en) * 2018-09-05 2019-02-15 上海工程技术大学 A kind of method of non-woven cloth multi-focal-plane image co-registration
CN109446924A (en) * 2018-10-10 2019-03-08 南京信息工程大学 A kind of RGB-D target identification method based on quaternary number generalized discriminant analysis
CN109446924B (en) * 2018-10-10 2021-07-13 南京信息工程大学 Quaternion generalized discriminant analysis-based RGB-D target identification method
CN110570387A (en) * 2019-09-16 2019-12-13 江南大学 image fusion method based on feature level Copula model similarity
CN110570387B (en) * 2019-09-16 2023-04-07 江南大学 Image fusion method based on feature level Copula model similarity
CN110895325A (en) * 2019-11-28 2020-03-20 宁波大学 Arrival angle estimation method based on enhanced quaternion multiple signal classification
CN110895325B (en) * 2019-11-28 2024-01-05 绍兴市上虞区舜兴电力有限公司 Arrival angle estimation method based on enhanced quaternion multiple signal classification
CN111402183A (en) * 2020-01-10 2020-07-10 北京理工大学 Multi-focus image fusion method based on octave pyramid framework
CN111402183B (en) * 2020-01-10 2023-08-11 北京理工大学 Multi-focus image fusion method based on octave pyramid frame
CN111292284A (en) * 2020-02-04 2020-06-16 淮阴师范学院 Color image fusion method based on dual-tree-quaternion wavelet transform
CN111292284B (en) * 2020-02-04 2024-03-01 淮阴师范学院 Color image fusion method based on dual-tree-quaternion wavelet transformation
CN113947554A (en) * 2020-07-17 2022-01-18 四川大学 Multi-focus image fusion method based on NSST and significant information extraction
CN113947554B (en) * 2020-07-17 2023-07-14 四川大学 Multi-focus image fusion method based on NSST and significant information extraction
CN114079771A (en) * 2020-08-14 2022-02-22 华为技术有限公司 Image coding and decoding method and device based on wavelet transformation
CN112200887A (en) * 2020-10-10 2021-01-08 北京科技大学 Multi-focus image fusion method based on gradient perception
CN112200887B (en) * 2020-10-10 2023-08-01 北京科技大学 Multi-focus image fusion method based on gradient sensing
CN116071520A (en) * 2023-03-31 2023-05-05 湖南省水务规划设计院有限公司 Digital twin water affair simulation test method

Similar Documents

Publication Publication Date Title
CN106803242A (en) Multi-focus image fusing method based on quaternion wavelet conversion
CN111145131B (en) Infrared and visible light image fusion method based on multiscale generation type countermeasure network
Rao et al. TGFuse: An infrared and visible image fusion approach based on transformer and generative adversarial network
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN105869178B (en) A kind of complex target dynamic scene non-formaldehyde finishing method based on the convex optimization of Multiscale combination feature
Wang et al. UNFusion: A unified multi-scale densely connected network for infrared and visible image fusion
CN110135375A (en) More people's Attitude estimation methods based on global information integration
CN102098440B (en) Electronic image stabilizing method and electronic image stabilizing system aiming at moving object detection under camera shake
CN112784736B (en) Character interaction behavior recognition method based on multi-modal feature fusion
CN111563915B (en) KCF target tracking method integrating motion information detection and Radon transformation
CN107253485A (en) Foreign matter invades detection method and foreign matter intrusion detection means
CN110826389B (en) Gait recognition method based on attention 3D frequency convolution neural network
CN112362072B (en) High-precision point cloud map creation system and method in complex urban environment
CN107481279A (en) A kind of monocular video depth map computational methods
CN108021889A (en) A kind of binary channels infrared behavior recognition methods based on posture shape and movable information
CN113283525B (en) Image matching method based on deep learning
CN112288008A (en) Mosaic multispectral image disguised target detection method based on deep learning
CN115496928A (en) Multi-modal image feature matching method based on multi-feature matching
CN109376641A (en) A kind of moving vehicle detection method based on unmanned plane video
CN109191416A (en) Image interfusion method based on sparse dictionary study and shearing wave
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN114170537A (en) Multi-mode three-dimensional visual attention prediction method and application thereof
Heather et al. Multimodal image registration with applications to image fusion
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
Yang et al. FG-GAN: a fine-grained generative adversarial network for unsupervised SAR-to-optical image translation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170606