CN109300096A - Multi-focus image fusion method and device - Google Patents

Multi-focus image fusion method and device

Info

Publication number
CN109300096A
CN109300096A (application CN201810889769.1A)
Authority
CN
China
Prior art keywords
pixel
low frequency
frequency subgraph
high frequency
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810889769.1A
Other languages
Chinese (zh)
Inventor
方沛宇 (Fang Peiyu)
杨婷婷 (Yang Tingting)
Current Assignee
Beijing Zhimai Recognition Technology Co Ltd
Original Assignee
Beijing Zhimai Recognition Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zhimai Recognition Technology Co Ltd
Priority to CN201810889769.1A
Publication of CN109300096A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/10 - Image enhancement or restoration by non-spatial domain filtering
    • G06T 5/20 - Image enhancement or restoration by the use of local operators
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/37 - Image registration using transform domain methods
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20024 - Filtering details
    • G06T 2207/20048 - Transform domain processing
    • G06T 2207/20052 - Discrete cosine transform [DCT]
    • G06T 2207/20064 - Wavelet transform [DWT]

Abstract

The present invention provides a multi-focus image fusion method and device, relating to the technical field of image processing. The method comprises: applying a shift-invariant discrete wavelet transform to two registered multi-focus images of the same scene to perform multi-level filtering, decomposing the two images into corresponding high-frequency and low-frequency subimages; fusing the high-frequency components according to the high-frequency subimages of the two multi-focus images, forming high-frequency fusion coefficients; fusing the low-frequency components according to the low-frequency subimages of the two multi-focus images, forming low-frequency fusion coefficients; and applying the inverse shift-invariant discrete wavelet transform to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency components given by the low-frequency fusion coefficients to generate the fused image. The invention addresses the problem that prior-art multi-focus image fusion methods cannot simultaneously eliminate false contours in the fused image and optimally preserve image detail.

Description

Multi-focus image fusion method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to a multi-focus image fusion method and device, especially a multi-focus image fusion method and device based on region-energy consistency and similarity weighting.
Background technique
Multi-focus image fusion processes several images of the same scene, each focused on a different target, and makes full use of the redundant and complementary information contained in the images to be fused, obtaining a more comprehensive and accurate description of the scene. The multi-focus images are usually first decomposed into multiple scales and then fused at multiple resolutions. Common multi-resolution decomposition methods are pyramid decomposition and the wavelet transform. Compared with pyramid decomposition, the wavelet transform is directional and non-redundant, and it is more widely used in image processing. An N-level wavelet decomposition of a source image yields 3N high-frequency subimages and 1 low-frequency subimage; the high-frequency subimages retain the detail and edge information of the image at each resolution, while the low-frequency subimage contains the background information of the image. Image fusion can be carried out separately at each scale, and the fusion rule at each scale determines the quality of the fused image.
Prior-art multi-resolution image fusion methods fall into three classes. The first class is averaging methods, which directly take the mean of the two filtered images as the fusion coefficients at each scale. Because such methods do not distinguish focused from defocused regions, they introduce considerable defocus information: the fused image retains the essential features of the multi-focus images, but the overall visual effect is mediocre and details and edges are relatively blurred. The second class is region-consistency detection methods, which compute a matrix of regional maxima centered on each pixel and determine the fusion rule from the number of maxima falling in each of the two images within the region and from the relationship between the region matching degree and a critical value. Describing the image by its regional maxima highlights regional features and weakens local features, largely eliminating false contours in the fused image, but some local detail information is lost. The third class is similarity weighting methods, which determine the fusion rule by comparing the similarity of the two images against a set threshold. The algorithm is simple and cheap to compute, and the fused image largely retains the edges and details of the multi-focus images, but false contours are obvious.
In conclusion, although many existing methods fuse multi-focus images from a multi-resolution perspective, none of them can simultaneously achieve the optimum in eliminating false contours in the fused image and in preserving image detail.
Summary of the invention
The embodiments of the present invention provide a multi-focus image fusion method and device to solve the problem that prior-art multi-focus image fusion methods cannot simultaneously eliminate false contours in the fused image and optimally preserve image detail. The invention can fuse two rigidly registered images of the same scene focused respectively on the left and on the right, and can be applied to fields such as digital imaging, computer vision, and automatic target recognition.
To achieve the above objectives, the present invention adopts the following technical scheme:
A multi-focus image fusion method, comprising:
applying a shift-invariant discrete wavelet transform to two registered multi-focus images of the same scene to perform multi-level filtering, decomposing the two images into corresponding high-frequency and low-frequency subimages;
fusing the high-frequency components according to the high-frequency subimages of the two multi-focus images, forming high-frequency fusion coefficients;
fusing the low-frequency components according to the low-frequency subimages of the two multi-focus images, forming low-frequency fusion coefficients;
applying the inverse shift-invariant discrete wavelet transform to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency components given by the low-frequency fusion coefficients, generating the fused image.
Specifically, applying the shift-invariant discrete wavelet transform to the two registered multi-focus images of the same scene to perform multi-level filtering and decompose them into corresponding high-frequency and low-frequency subimages comprises:
applying N levels of shift-invariant discrete wavelet filtering to each of the first and second registered multi-focus images of the same scene, decomposing the first multi-focus image into 3N corresponding first high-frequency subimages and 1 first low-frequency subimage, and decomposing the second multi-focus image into 3N corresponding second high-frequency subimages and 1 second low-frequency subimage. The first multi-focus image, the second multi-focus image, each first high-frequency subimage, each second high-frequency subimage, the first low-frequency subimage, and the second low-frequency subimage all have the same image size.
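The decomposition above can be sketched in NumPy. The patent does not specify the wavelet, so the sketch below uses an undecimated ("à trous") Haar transform as an illustrative stand-in for the SIDWT: filters are dilated by 2 at each level and nothing is downsampled, so an N-level decomposition yields 3N high-frequency subimages and 1 low-frequency subimage, all the same size as the input. The circular boundary handling via `np.roll` is also a simplification.

```python
import numpy as np

def sidwt_haar(img, levels):
    """Undecimated Haar decomposition ('a trous' scheme): filters dilated
    by 2**level, no downsampling, so every subimage keeps the input size.
    Returns (highs, low): 3*levels high-frequency subimages plus one
    low-frequency subimage."""
    a = np.asarray(img, dtype=float)
    highs = []
    for lev in range(levels):
        s = 2 ** lev  # filter dilation at this level
        lo = (a + np.roll(a, -s, axis=0)) / 2.0   # Haar low-pass along rows
        hi = (a - np.roll(a, -s, axis=0)) / 2.0   # Haar high-pass along rows
        ll = (lo + np.roll(lo, -s, axis=1)) / 2.0  # approximation (low-low)
        lh = (lo - np.roll(lo, -s, axis=1)) / 2.0  # horizontal detail
        hl = (hi + np.roll(hi, -s, axis=1)) / 2.0  # vertical detail
        hh = (hi - np.roll(hi, -s, axis=1)) / 2.0  # diagonal detail
        highs += [lh, hl, hh]
        a = ll
    return highs, a

def isidwt_haar(highs, low, levels):
    """Inverse of sidwt_haar; with this normalization the synthesis step
    reduces to summing the four subbands at each level."""
    a = low
    for lev in reversed(range(levels)):
        lh, hl, hh = highs[3 * lev:3 * lev + 3]
        a = a + lh + hl + hh
    return a
```

With N = 3 this produces the 9 high-frequency subimages and 1 low-frequency subimage mentioned later in the embodiment, and reconstruction is exact; because no downsampling occurs, shifting the input simply shifts every subimage, which is the shift-invariance property the patent relies on.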
Specifically, fusing the high-frequency components according to the high-frequency subimages of the two multi-focus images to form high-frequency fusion coefficients comprises:
pairing each first high-frequency subimage one-to-one with a second high-frequency subimage, and computing the region energy of each high-frequency subimage:
E_A(i, j) = Σ_{(p, q) ∈ ω1} I_A(p, q)²,  E_B(i, j) = Σ_{(p, q) ∈ ω1} I_B(p, q)²
where ω1 is the region of n*n pixels centered on pixel (i, j); E_A(i, j) is the region energy of a first high-frequency subimage over ω1; I_A(i, j) is the value of that first high-frequency subimage at pixel (i, j); E_B(i, j) is the region energy of the paired second high-frequency subimage over ω1; and I_B(i, j) is the value of that second high-frequency subimage at pixel (i, j);
comparing, pixel by pixel, the region energy of the first high-frequency subimage with the region energy of the second high-frequency subimage, and setting the preset matrices of the first and second high-frequency subimages according to the comparison result, the preset matrices being matrices of the same size as the first and second high-frequency subimages with initial value 0:
A_h(i, j) = 1 and B_h(i, j) = 0 if E_A(i, j) ≥ E_B(i, j); otherwise A_h(i, j) = 0 and B_h(i, j) = 1
where A_h(i, j) is the preset matrix of the first high-frequency subimage at pixel (i, j), and B_h(i, j) is the preset matrix of the second high-frequency subimage at pixel (i, j);
counting the energy maxima within the region of (2n-1)*(2n-1) pixels centered on each pixel:
Ca(i, j) = Σ_{(p, q) ∈ ω2} A_h(p, q),  Cb(i, j) = Σ_{(p, q) ∈ ω2} B_h(p, q)
where ω2 is the region of (2n-1)*(2n-1) pixels centered on pixel (i, j); Ca(i, j) is the energy-maximum count of the first high-frequency subimage over ω2; and Cb(i, j) is the energy-maximum count of the second high-frequency subimage over ω2;
according to the energy-maximum count of the first high-frequency subimage over ω2 and the energy-maximum count of the second high-frequency subimage over ω2 at pixel (i, j), taking the value at pixel (i, j) of the subimage with the larger energy-maximum count as the high-frequency fusion coefficient at pixel (i, j).
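The region-energy-consistency rule above can be sketched for one pair of high-frequency subimages. The `box_sum` helper, the zero padding at image borders, and the tie-breaking toward the first image when region energies or counts are equal are all assumptions of this sketch, since the patent text does not reproduce those details.

```python
import numpy as np

def box_sum(x, k):
    """Sum of x over the k*k window centered at each pixel (zero padding)."""
    p = k // 2
    xp = np.pad(x, p)
    out = np.zeros(x.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out

def fuse_high(IA, IB, n=3):
    """Region-energy-consistency rule for one pair of high-frequency
    subimages: compare n*n region energies, mark the per-pixel winners,
    then let each pixel follow the majority winner inside the larger
    (2n-1)*(2n-1) window."""
    EA = box_sum(IA ** 2, n)           # region energy of A over omega-1
    EB = box_sum(IB ** 2, n)
    Ah = (EA >= EB).astype(float)      # 1 where A's region energy wins (ties -> A, an assumption)
    Bh = 1.0 - Ah                      # complementary preset matrix for B
    Ca = box_sum(Ah, 2 * n - 1)        # A-wins counted over omega-2
    Cb = box_sum(Bh, 2 * n - 1)        # B-wins counted over omega-2
    return np.where(Ca >= Cb, IA, IB)  # pick the coefficient of the majority side
```

Counting wins over the larger ω2 window, rather than comparing E_A and E_B directly per pixel, is what suppresses isolated misclassified pixels and hence false contours in the fused high-frequency band.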
Specifically, fusing the low-frequency components according to the low-frequency subimages of the two multi-focus images to form low-frequency fusion coefficients comprises:
determining, by the spectral residual method, the first saliency map Sa corresponding to the first multi-focus image and the second saliency map Sb corresponding to the second multi-focus image;
according to the formula
S_{i,j}(Sa) = Σ_{q ∈ Q} w(q) · SF(Sa, q)
determining the first region significance S_{i,j}(Sa) at each pixel (i, j) of the first saliency map Sa, and likewise determining the second region significance S_{i,j}(Sb) at each pixel (i, j) of the second saliency map Sb; where Q denotes the region of m*m pixels centered on pixel (i, j); q is each pixel in that region; w(q) is an m*m weight matrix whose values are all 1; SF(Sa, q) is the value of each pixel q in the m*m region of the first saliency map Sa centered on pixel (i, j); and SF(Sb, q) is the value of each pixel q in the m*m region of the second saliency map Sb centered on pixel (i, j);
according to the formula
R_{i,j} = 2 · Σ_{q ∈ Q} SF(Sa, q) · SF(Sb, q) / (Σ_{q ∈ Q} SF(Sa, q)² + Σ_{q ∈ Q} SF(Sb, q)²)
determining the correlation coefficient R_{i,j} of the first saliency map Sa and the second saliency map Sb at each pixel (i, j);
if the coefficient R_{i,j} is less than or equal to a pre-set coefficient threshold, determining that the first and second low-frequency subimages are uncorrelated at pixel (i, j), and taking the low-frequency coefficient at pixel (i, j) of the low-frequency subimage corresponding to the larger of the first region significance S_{i,j}(Sa) and the second region significance S_{i,j}(Sb) as the low-frequency fusion coefficient at pixel (i, j);
if the coefficient R_{i,j} is greater than the pre-set coefficient threshold, determining that the first and second low-frequency subimages are correlated at pixel (i, j); comparing the first region significance S_{i,j}(Sa) with the second region significance S_{i,j}(Sb), determining the weight corresponding to the smaller of the two as
w_min = 1/2 - 1/2 · (1 - R_{i,j}) / (1 - T)
and the weight corresponding to the larger of the two as w_max = 1 - w_min; and computing the low-frequency fusion coefficient at pixel (i, j) as the weighted sum of the low-frequency coefficient of the first low-frequency subimage at pixel (i, j) and the low-frequency coefficient of the second low-frequency subimage at pixel (i, j); where T is the pre-set coefficient threshold.
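The low-frequency rule above can be sketched as follows. The original text does not reproduce the correlation and weighting formulas, so this sketch assumes the classic Burt-Kolczynski match measure and weighting; the `box_sum` helper, the zero padding, the tie-break toward the first image, and the choice T = 0.7 are likewise assumptions.

```python
import numpy as np

def box_sum(x, k):
    """Sum of x over the k*k window centered at each pixel (zero padding)."""
    p = k // 2
    xp = np.pad(x, p)
    out = np.zeros(x.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out

def fuse_low(LA, LB, Sa, Sb, m=3, T=0.7):
    """Low-frequency fusion driven by saliency maps Sa, Sb: where the
    saliency maps are weakly correlated, select the coefficient of the
    more salient image; where they are strongly correlated, blend the
    two coefficients with similarity-dependent weights."""
    SA = box_sum(Sa, m)  # region significance (all-ones weight matrix)
    SB = box_sum(Sb, m)
    # Per-pixel correlation of the two saliency maps (assumed match measure)
    R = 2.0 * box_sum(Sa * Sb, m) / (box_sum(Sa ** 2, m) + box_sum(Sb ** 2, m) + 1e-12)
    a_wins = SA >= SB
    selected = np.where(a_wins, LA, LB)        # uncorrelated: take the more salient side
    wmin = 0.5 - 0.5 * (1.0 - R) / (1.0 - T)   # assumed weighting form
    wmax = 1.0 - wmin
    weighted = np.where(a_wins, wmax * LA + wmin * LB, wmax * LB + wmin * LA)
    return np.where(R > T, weighted, selected)
```

Selection in uncorrelated regions keeps the sharper of the two backgrounds, while weighted averaging in correlated regions avoids the hard seams that pure selection would introduce.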
A multi-focus image fusion device, comprising:
a multi-level filtering unit, configured to apply a shift-invariant discrete wavelet transform to two registered multi-focus images of the same scene to perform multi-level filtering, decomposing the two images into corresponding high-frequency and low-frequency subimages;
a high-frequency component fusion unit, configured to fuse the high-frequency components according to the high-frequency subimages of the two multi-focus images, forming high-frequency fusion coefficients;
a low-frequency component fusion unit, configured to fuse the low-frequency components according to the low-frequency subimages of the two multi-focus images, forming low-frequency fusion coefficients;
a fused-image generation unit, configured to apply the inverse shift-invariant discrete wavelet transform to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency components given by the low-frequency fusion coefficients, generating the fused image.
In addition, the multi-level filtering unit is specifically configured to:
apply N levels of shift-invariant discrete wavelet filtering to each of the first and second registered multi-focus images of the same scene, decomposing the first multi-focus image into 3N corresponding first high-frequency subimages and 1 first low-frequency subimage, and decomposing the second multi-focus image into 3N corresponding second high-frequency subimages and 1 second low-frequency subimage; the first multi-focus image, the second multi-focus image, each first high-frequency subimage, each second high-frequency subimage, the first low-frequency subimage, and the second low-frequency subimage all have the same image size.
In addition, the high-frequency component fusion unit comprises:
a region energy determination module, configured to pair each first high-frequency subimage one-to-one with a second high-frequency subimage and compute the region energy of each high-frequency subimage:
E_A(i, j) = Σ_{(p, q) ∈ ω1} I_A(p, q)²,  E_B(i, j) = Σ_{(p, q) ∈ ω1} I_B(p, q)²
where ω1 is the region of n*n pixels centered on pixel (i, j); E_A(i, j) is the region energy of a first high-frequency subimage over ω1; I_A(i, j) is the value of that first high-frequency subimage at pixel (i, j); E_B(i, j) is the region energy of the paired second high-frequency subimage over ω1; and I_B(i, j) is the value of that second high-frequency subimage at pixel (i, j);
a region energy comparison module, configured to compare, pixel by pixel, the region energy of the first high-frequency subimage with the region energy of the second high-frequency subimage and set the preset matrices of the first and second high-frequency subimages according to the comparison result, the preset matrices being matrices of the same size as the first and second high-frequency subimages with initial value 0:
A_h(i, j) = 1 and B_h(i, j) = 0 if E_A(i, j) ≥ E_B(i, j); otherwise A_h(i, j) = 0 and B_h(i, j) = 1
where A_h(i, j) is the preset matrix of the first high-frequency subimage at pixel (i, j), and B_h(i, j) is the preset matrix of the second high-frequency subimage at pixel (i, j);
an energy-maximum counting module, configured to count the energy maxima within the region of (2n-1)*(2n-1) pixels centered on each pixel:
Ca(i, j) = Σ_{(p, q) ∈ ω2} A_h(p, q),  Cb(i, j) = Σ_{(p, q) ∈ ω2} B_h(p, q)
where ω2 is the region of (2n-1)*(2n-1) pixels centered on pixel (i, j); Ca(i, j) is the energy-maximum count of the first high-frequency subimage over ω2; and Cb(i, j) is the energy-maximum count of the second high-frequency subimage over ω2;
a high-frequency fusion coefficient determination module, configured to take, according to the energy-maximum counts of the first and second high-frequency subimages over ω2 at pixel (i, j), the value at pixel (i, j) of the subimage with the larger energy-maximum count as the high-frequency fusion coefficient at pixel (i, j).
In addition, the low-frequency component fusion unit comprises:
a saliency map determination module, configured to determine, by the spectral residual method, the first saliency map Sa corresponding to the first multi-focus image and the second saliency map Sb corresponding to the second multi-focus image;
a region significance determination module, configured to determine, according to the formula
S_{i,j}(Sa) = Σ_{q ∈ Q} w(q) · SF(Sa, q)
the first region significance S_{i,j}(Sa) at each pixel (i, j) of the first saliency map Sa and likewise the second region significance S_{i,j}(Sb) at each pixel (i, j) of the second saliency map Sb; where Q denotes the region of m*m pixels centered on pixel (i, j); q is each pixel in that region; w(q) is an m*m weight matrix whose values are all 1; SF(Sa, q) is the value of each pixel q in the m*m region of the first saliency map Sa centered on pixel (i, j); and SF(Sb, q) is the value of each pixel q in the m*m region of the second saliency map Sb centered on pixel (i, j);
a correlation coefficient determination module, configured to determine, according to the formula
R_{i,j} = 2 · Σ_{q ∈ Q} SF(Sa, q) · SF(Sb, q) / (Σ_{q ∈ Q} SF(Sa, q)² + Σ_{q ∈ Q} SF(Sb, q)²)
the correlation coefficient R_{i,j} of the first saliency map Sa and the second saliency map Sb at each pixel (i, j);
a low-frequency fusion coefficient determination module, configured to: when the coefficient R_{i,j} is less than or equal to a pre-set coefficient threshold, determine that the first and second low-frequency subimages are uncorrelated at pixel (i, j) and take the low-frequency coefficient at pixel (i, j) of the low-frequency subimage corresponding to the larger of the first region significance S_{i,j}(Sa) and the second region significance S_{i,j}(Sb) as the low-frequency fusion coefficient at pixel (i, j); and, when the coefficient R_{i,j} is greater than the pre-set coefficient threshold, determine that the first and second low-frequency subimages are correlated at pixel (i, j), compare the first region significance S_{i,j}(Sa) with the second region significance S_{i,j}(Sb), determine the weight corresponding to the smaller of the two as w_min = 1/2 - 1/2 · (1 - R_{i,j}) / (1 - T) and the weight corresponding to the larger of the two as w_max = 1 - w_min, and compute the low-frequency fusion coefficient at pixel (i, j) as the weighted sum of the low-frequency coefficients of the first and second low-frequency subimages at pixel (i, j); where T is the pre-set coefficient threshold.
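The "spectrum residual method" used by the saliency map determination module is the spectral residual saliency detector of Hou and Zhang: the log-amplitude spectrum minus its local average isolates the "innovative" part of the spectrum, and transforming back with the original phase yields a saliency map. The filter size k = 3, the zero padding at the spectrum borders, and the omission of the usual final Gaussian smoothing are simplifications of this sketch.

```python
import numpy as np

def spectral_residual_saliency(img, k=3):
    """Spectral residual saliency: subtract a locally averaged
    log-amplitude spectrum from the log-amplitude spectrum, keep the
    phase, and transform back to image space."""
    F = np.fft.fft2(np.asarray(img, dtype=float))
    log_amp = np.log(np.abs(F) + 1e-12)   # log-amplitude spectrum
    phase = np.angle(F)
    # k*k mean filter on the log-amplitude spectrum (zero padding at edges)
    p = k // 2
    padded = np.pad(log_amp, p)
    avg = np.zeros(log_amp.shape)
    for dy in range(k):
        for dx in range(k):
            avg += padded[dy:dy + log_amp.shape[0], dx:dx + log_amp.shape[1]]
    avg /= k * k
    residual = log_amp - avg              # the spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal
```

Computing the saliency maps from the source images rather than from the low-frequency subimages matches the motivation given later in the text: the sharper source images yield more significance information than the blurred low-frequency component would.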
A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the following steps:
applying a shift-invariant discrete wavelet transform to two registered multi-focus images of the same scene to perform multi-level filtering, decomposing the two images into corresponding high-frequency and low-frequency subimages;
fusing the high-frequency components according to the high-frequency subimages of the two multi-focus images, forming high-frequency fusion coefficients;
fusing the low-frequency components according to the low-frequency subimages of the two multi-focus images, forming low-frequency fusion coefficients;
applying the inverse shift-invariant discrete wavelet transform to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency components given by the low-frequency fusion coefficients, generating the fused image.
A computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor implementing the following steps when executing the program:
applying a shift-invariant discrete wavelet transform to two registered multi-focus images of the same scene to perform multi-level filtering, decomposing the two images into corresponding high-frequency and low-frequency subimages;
fusing the high-frequency components according to the high-frequency subimages of the two multi-focus images, forming high-frequency fusion coefficients;
fusing the low-frequency components according to the low-frequency subimages of the two multi-focus images, forming low-frequency fusion coefficients;
applying the inverse shift-invariant discrete wavelet transform to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency components given by the low-frequency fusion coefficients, generating the fused image.
With the multi-focus image fusion method and device provided by the embodiments of the present invention, two registered multi-focus images of the same scene are filtered over multiple levels by a shift-invariant discrete wavelet transform and decomposed into corresponding high-frequency and low-frequency subimages. The high-frequency components are then fused according to the high-frequency subimages of the two images, forming high-frequency fusion coefficients that retain the high-frequency information of the multi-focus images. The low-frequency components are fused according to the low-frequency subimages of the two images by computing significance information, forming low-frequency fusion coefficients; this is because the size of a focused region in the low-frequency component is consistent with the multi-focus image before wavelet decomposition, and the source images are clearer, so more significance information can be obtained than by computing a saliency map directly from the low-frequency component. Finally, the inverse shift-invariant discrete wavelet transform is applied to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency components given by the low-frequency fusion coefficients to generate the fused image. The invention thus solves the problem that prior-art multi-focus image fusion methods cannot simultaneously eliminate false contours in the fused image and optimally preserve image detail; it can fuse two rigidly registered images of the same scene focused respectively on the left and on the right, and can be applied to fields such as digital imaging, computer vision, and automatic target recognition.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without any creative labor.
Fig. 1 is a first flowchart of a multi-focus image fusion method provided by an embodiment of the present invention;
Fig. 2 is a second flowchart of a multi-focus image fusion method provided by an embodiment of the present invention;
Fig. 3 shows two multi-focus images and their respective saliency maps;
Fig. 4 shows the images obtained from Fig. 3(a) and Fig. 3(c) after 3-level SIDWT filtering, using different fusion methods at each decomposition level;
Fig. 5 is an enlarged view of the same region in Fig. 4(b), (c), and (d);
Fig. 6 is a first structural schematic diagram of a multi-focus image fusion device provided by an embodiment of the present invention;
Fig. 7 is a second structural schematic diagram of a multi-focus image fusion device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below in combination with the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Although many existing methods fuse multi-focus images from a multi-resolution perspective, none can simultaneously achieve the optimum in eliminating false contours in the fused image and in preserving image detail. The inventors observe that the filtered high-frequency and low-frequency subimages emphasize different aspects of the source images and should therefore be processed with different fusion rules.
Therefore, to realize the technical effect of the present invention, as shown in Fig. 1, an embodiment of the present invention provides a multi-focus image fusion method, comprising:
Step 101: applying a shift-invariant discrete wavelet transform to two registered multi-focus images of the same scene to perform multi-level filtering, decomposing the two images into corresponding high-frequency and low-frequency subimages.
Step 102: fusing the high-frequency components according to the high-frequency subimages of the two multi-focus images, forming high-frequency fusion coefficients.
Step 103: fusing the low-frequency components according to the low-frequency subimages of the two multi-focus images, forming low-frequency fusion coefficients.
Step 104: applying the inverse shift-invariant discrete wavelet transform to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency components given by the low-frequency fusion coefficients, generating the fused image.
The multi-focus image fusion method provided by this embodiment of the present invention solves the problem that prior-art multi-focus image fusion methods cannot simultaneously eliminate false contours in the fused image and optimally preserve image detail; it can fuse two rigidly registered images of the same scene focused respectively on the left and on the right, and can be applied to fields such as digital imaging, computer vision, and automatic target recognition.
To help those skilled in the art better understand the present invention, a more detailed embodiment is set forth below. As shown in Fig. 2, an embodiment of the present invention provides a multi-focus image fusion method, specifically comprising:
Step 201 obtains two width for the multiple focussing image after the registration of Same Scene.
Such as by the camera installations such as digital camera obtain two width for Same Scene registration after multiple focussing image, In, this two width refers to that the two images are the Same Scenes of shooting for the multiple focussing image after the registration of Same Scene, and schemes As boundary is identical.
Step 202, the first multiple focussing image and the second multiple focussing image point being directed to two width after the registration of Same Scene It Cai Yong not shift-invariant spaces (Shift Invariance Discrete Wavelet Transform, abbreviation SIDWT N layers of filtering) are carried out, corresponding 3N the first high frequency subgraph of the first multiple focussing image and 1 the first low frequency are decomposed to form Subgraph, and it is decomposed to form corresponding 3N the second high frequency subgraph of the second multiple focussing image and 1 the second low frequency subgraph picture.
Wherein, first multiple focussing image, the second multiple focussing image, each the first high frequency subgraph, each second high Frequency subgraph, the first low frequency subgraph picture are identical with the picture size size of the second low frequency subgraph picture.
N layer filtering herein for example can be using 3 layers of filtering, then available 9 the first high frequency subgraphs and 1 first Low frequency subgraph picture, and obtain 9 the second high frequency subgraphs and 1 the second low frequency subgraph picture.
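The N-layer decomposition can be sketched with a simplified undecimated (shift-invariant) Haar transform. This is an illustrative assumption, not the patent's exact filter bank: no subsampling is performed, so every sub-image keeps the source image size, and each level yields three detail sub-images plus one approximation, giving 3N + 1 sub-images overall.

```python
import numpy as np

def sidwt_level(img, step=1):
    """One level of an undecimated Haar decomposition (a-trous style).

    No downsampling, so all sub-images keep the input size; `step` is
    the filter dilation, 2**(level-1). Circular boundaries via np.roll.
    """
    shift = lambda a, dy, dx: np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    lo_r = 0.5 * (img + shift(img, 0, -step))   # row low-pass
    hi_r = 0.5 * (img - shift(img, 0, -step))   # row high-pass
    ll = 0.5 * (lo_r + shift(lo_r, -step, 0))   # approximation
    lh = 0.5 * (lo_r - shift(lo_r, -step, 0))   # horizontal detail
    hl = 0.5 * (hi_r + shift(hi_r, -step, 0))   # vertical detail
    hh = 0.5 * (hi_r - shift(hi_r, -step, 0))   # diagonal detail
    return ll, (lh, hl, hh)

def sidwt(img, levels=3):
    """N-level decomposition: 3N detail sub-images + 1 low-frequency image."""
    details, approx = [], img.astype(float)
    for lev in range(levels):
        approx, (lh, hl, hh) = sidwt_level(approx, step=2 ** lev)
        details += [lh, hl, hh]
    return details, approx
```

With this Haar choice the four sub-images of one level sum back to the input exactly, which makes the later inverse step trivial to check.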
After step 202, step 203 or step 207 is executed.
Step 203: pair each first high-frequency sub-image one-to-one with the corresponding second high-frequency sub-image, and determine the region energy of each high-frequency sub-image:
Here ω1 is a region of n*n pixels centered on pixel (i, j); n is preferably odd, e.g. n = 3. EA(i, j) is the region energy of a first high-frequency sub-image over ω1; IA(i, j) is the pixel value of that first high-frequency sub-image at pixel (i, j); EB(i, j) is the region energy of a second high-frequency sub-image over ω1; IB(i, j) is the pixel value of that second high-frequency sub-image at pixel (i, j).
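A minimal sketch of the ω1 region energy follows. The formula image is not reproduced in the text, so the standard choice of summing the squared coefficients over the n*n window is assumed here:

```python
import numpy as np

def region_energy(sub, n=3):
    """Region energy over the n*n window ω1 centred on each pixel.

    Assumes E(i, j) = sum of squared coefficients over ω1 (a common
    definition); circular boundary handling via np.roll.
    """
    r = n // 2
    sq = sub.astype(float) ** 2
    E = np.zeros_like(sq)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            E += np.roll(np.roll(sq, dy, axis=0), dx, axis=1)
    return E
```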
"One-to-one correspondence" here means that, in the SIDWT decomposition, the first high-frequency sub-image of the first layer of the first image corresponds to the first high-frequency sub-image of the first layer of the second image, and so on.
Step 204: compare, pixel by pixel, the region energy of the first high-frequency sub-image with that of the corresponding second high-frequency sub-image, and determine the preset matrices corresponding to the first and second high-frequency sub-images according to the comparison result.
The preset matrix is a matrix of the same size as the first and second high-frequency sub-images, with all values initialized to 0. Specifically:
Ah(i, j) is the preset matrix entry corresponding to the first high-frequency sub-image at pixel (i, j); Bh(i, j) is the preset matrix entry corresponding to the second high-frequency sub-image at pixel (i, j).
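The comparison of step 204 can be sketched as follows. The exact update rule is an assumption (the formula image is not reproduced), taken as: the matrix of the sub-image with the larger region energy at a pixel is set to 1 there, the other stays 0, with ties assumed to favour the first sub-image:

```python
import numpy as np

def decision_matrices(EA, EB):
    """Preset matrices of step 204: zero-initialised, then Ah(i,j) = 1
    where the first sub-image wins the region-energy comparison and
    Bh(i,j) = 1 otherwise (ties assumed to favour the first)."""
    Ah = np.zeros_like(EA, dtype=float)
    Bh = np.zeros_like(EB, dtype=float)
    Ah[EA >= EB] = 1.0
    Bh[EA < EB] = 1.0
    return Ah, Bh
```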
Step 205: for each pixel, count within the (2n-1)*(2n-1) region centered on it the number of pixels at which each sub-image has the larger region energy (the energy win count):
Here ω2 is a region of (2n-1)*(2n-1) pixels centered on pixel (i, j); Ca(i, j) is the energy win count of the first high-frequency sub-image over ω2; Cb(i, j) is the energy win count of the second high-frequency sub-image over ω2.
Step 206: for each pixel (i, j), compare the energy win counts Ca(i, j) and Cb(i, j) of the corresponding first and second high-frequency sub-images over ω2, and take the pixel value at (i, j) of the sub-image with the larger count as the high-frequency fusion coefficient of that pixel.
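Steps 205 and 206 together amount to a consistency check on the per-pixel energy decision. A sketch under the same assumptions as above (counts as window sums of the preset matrices, ties favouring the first sub-image):

```python
import numpy as np

def window_sum(mat, k):
    """Sum of mat over the k*k window centred on each pixel (circular)."""
    r = k // 2
    out = np.zeros_like(mat, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(mat, dy, axis=0), dx, axis=1)
    return out

def fuse_high(IA, IB, Ah, Bh, n=3):
    """Steps 205-206: count energy wins over the (2n-1)*(2n-1) window
    ω2 and keep, per pixel, the coefficient of the sub-image with the
    larger count."""
    k = 2 * n - 1
    Ca = window_sum(Ah, k)   # wins of the first high-frequency sub-image
    Cb = window_sum(Bh, k)   # wins of the second
    return np.where(Ca >= Cb, IA, IB)
```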
That is, each high-frequency fusion coefficient can be regarded as the corresponding pixel value, so that the image formed by the high-frequency fusion coefficients is the high-frequency fusion result.
Step 207: determine, using the spectral residual method, the first saliency map Sa corresponding to the first multi-focus image and the second saliency map Sb corresponding to the second multi-focus image.
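The spectral residual method named in step 207 can be sketched as below, following the usual formulation (Hou and Zhang): smooth the log-amplitude spectrum, keep the residual, and transform back with the original phase. The mean-filter size and the omission of the final Gaussian smoothing are simplifying assumptions:

```python
import numpy as np

def spectral_residual_saliency(img, n=3):
    """Spectral residual saliency sketch. The log-amplitude spectrum is
    smoothed with an n*n mean filter; the residual is recombined with
    the original phase and inverse-transformed."""
    F = np.fft.fft2(img.astype(float))
    log_amp = np.log(np.abs(F) + 1e-12)
    phase = np.angle(F)
    # n*n mean filter on the log-amplitude spectrum (circular)
    r = n // 2
    smooth = np.zeros_like(log_amp)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            smooth += np.roll(np.roll(log_amp, dy, axis=0), dx, axis=1)
    smooth /= n * n
    residual = log_amp - smooth
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal
```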
Step 208: according to the formula, determine the first region significance Si,j(Sa) of each pixel (i, j) of the first saliency map Sa, and the second region significance Si,j(Sb) of each pixel (i, j) of the second saliency map Sb.
Here Q denotes the region of m*m pixels centered on pixel (i, j); m is preferably odd, e.g. m = 3. q ranges over the pixels of that region; W(q) is an m*m weight matrix with all values equal to 1; SF(Sa, q) is the pixel value of each pixel in the m*m region of the first saliency map Sa centered on pixel (i, j); SF(Sb, q) is the pixel value of each pixel in the corresponding region of the second saliency map Sb.
Step 209: according to the formula, determine the coefficient Ri,j of the first saliency map Sa and the second saliency map Sb at each pixel (i, j).
The coefficient Ri,j varies between 0 and 1; a larger value indicates a higher correlation between the two saliency maps at pixel (i, j).
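Steps 208 and 209 can be sketched as follows. The region significance is the all-ones-weighted window sum described above; the formula image for Ri,j is not reproduced in the text, so a standard normalised match measure in [0, 1] over the same m*m window is assumed here, which matches the stated properties (range 0 to 1, larger means more correlated):

```python
import numpy as np

def window_sum(mat, k):
    """Sum of mat over the k*k window centred on each pixel (circular)."""
    r = k // 2
    out = np.zeros_like(mat, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(mat, dy, axis=0), dx, axis=1)
    return out

def region_significance(sal, m=3):
    """Step 208: S_{i,j} as the sum of saliency values over the m*m
    region Q centred on (i, j), with all weights W(q) = 1."""
    return window_sum(sal, m)

def correlation(Sa, Sb, m=3):
    """Step 209 (assumed form): R = 2*sum(Sa*Sb) / (sum(Sa^2) +
    sum(Sb^2)) over the m*m window, which is 1 for identical maps."""
    num = 2.0 * window_sum(Sa * Sb, m)
    den = window_sum(Sa ** 2, m) + window_sum(Sb ** 2, m)
    return num / np.maximum(den, 1e-12)
```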
After step 209, step 210 or step 211 is executed.
Step 210: if the coefficient Ri,j is less than or equal to a preset coefficient threshold, the first and second low-frequency sub-images are determined to be uncorrelated at pixel (i, j); the low-frequency coefficient at pixel (i, j) of the low-frequency sub-image corresponding to the larger of the first region significance Si,j(Sa) and the second region significance Si,j(Sb) is taken as the low-frequency fusion coefficient of pixel (i, j).
Step 211: if the coefficient Ri,j is greater than the preset coefficient threshold, the first and second low-frequency sub-images are determined to be correlated at pixel (i, j). The first region significance Si,j(Sa) and the second region significance Si,j(Sb) are compared; the weight wmin is assigned to the smaller of the two, and the weight wmax = 1 - wmin to the larger. The low-frequency coefficient of the first low-frequency sub-image at pixel (i, j) and that of the second low-frequency sub-image at pixel (i, j) are then combined by weighted summation to obtain the low-frequency fusion coefficient of pixel (i, j).
Here T is the preset coefficient threshold, e.g. T = 0.75.
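The two-branch low-frequency rule of steps 210 and 211 can be sketched per pixel as below. The wmin formula image is not reproduced in the text, so a common choice that satisfies the description (wmin approaches 0.5 as Ri,j approaches 1 and 0 as it approaches T) is assumed:

```python
import numpy as np

def fuse_low(LA, LB, Sa_sig, Sb_sig, R, T=0.75):
    """Steps 210-211 per pixel.  R <= T: take the low-frequency
    coefficient of the sub-image with the larger region significance.
    R > T: weighted sum with the ASSUMED weight
        w_min = 0.5 - 0.5 * (1 - R) / (1 - T),
    since the patent's w_min formula image is not reproduced here."""
    a_wins = Sa_sig >= Sb_sig
    # uncorrelated branch: hard selection by region significance
    select = np.where(a_wins, LA, LB)
    # correlated branch: weighted summation, larger significance gets w_max
    wmin = 0.5 - 0.5 * (1.0 - R) / (1.0 - T)
    wmax = 1.0 - wmin
    weighted = np.where(a_wins, wmax * LA + wmin * LB,
                                wmin * LA + wmax * LB)
    return np.where(R > T, weighted, select)
```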
Step 212 is executed after steps 206, 210, and 211.
Step 212: apply the inverse shift-invariant discrete wavelet transform to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency component given by the low-frequency fusion coefficients, generating the fused image.
The specific inverse SIDWT process (taking 3 layers as an example) is generally as follows: convolve the top-layer low-frequency component and the three high-frequency components with the corresponding reconstruction filters, and sum the convolution results to obtain the low-frequency component of the next layer; repeating this convolution-and-summation process twice more yields the final fused image. Since the fundamentals of the SIDWT transform and its inverse are well studied, they are not elaborated further here.
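With the simplified Haar analysis used earlier, the reconstruction-filter convolution of each inverse level reduces to a plain sum of the four sub-images, so a 3-layer round trip can be checked directly. This is an illustrative sketch under that assumed filter choice, not the patent's exact filter bank:

```python
import numpy as np

def sidwt_level(img, step=1):
    """One undecimated Haar analysis level (dilation `step`)."""
    shift = lambda a, dy, dx: np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    lo_r = 0.5 * (img + shift(img, 0, -step))
    hi_r = 0.5 * (img - shift(img, 0, -step))
    return (0.5 * (lo_r + shift(lo_r, -step, 0)),
            (0.5 * (lo_r - shift(lo_r, -step, 0)),
             0.5 * (hi_r + shift(hi_r, -step, 0)),
             0.5 * (hi_r - shift(hi_r, -step, 0))))

def isidwt_level(ll, bands):
    """One inverse level: for this Haar choice, summing the low-frequency
    component and the three detail sub-images reconstructs the layer
    below exactly (the general SIDWT inverse convolves with
    reconstruction filters first; the Haar case reduces to a sum)."""
    lh, hl, hh = bands
    return ll + lh + hl + hh
```

Running three analysis levels and then three inverse levels reproduces the input, mirroring the 3-layer process described above.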
The technical solution and technical effect of the present invention are illustrated below with a concrete image example. As shown in Fig. 3, (a), (b), (c), and (d) of Fig. 3 are, respectively, the first multi-focus image (the left-focused image), the saliency map of the first multi-focus image, the second multi-focus image (the right-focused image), and the saliency map of the second multi-focus image. Fig. 4 shows the images obtained at each decomposition layer by different fusion methods after Fig. 3(a) and Fig. 3(c) undergo 3 layers of SIDWT filtering: Fig. 4(a) is the fused image obtained by applying the averaging method to both the high and low frequencies; Fig. 4(b), the similarity weighting method; Fig. 4(c), the consistency verification method; and Fig. 4(d), the method of the present invention. In addition, Fig. 5 shows enlarged views of the same region of Fig. 4(b), (c), and (d): Fig. 5(a) is a partial enlargement of Fig. 4(b), Fig. 5(b) of Fig. 4(c), and Fig. 5(c) of Fig. 4(d).
To compare the prior-art image fusion methods with the method of this embodiment more effectively, a quantitative analysis of the fused image produced by each method was performed, separately calculating the average gradient (AG), information entropy (EN), standard deviation (SD), mutual information (MI), and structural similarity (SSIM). The larger these evaluation indices, the higher the quality of the fused image, as shown in Table 1:
Table 1
Image fusion method AG EN SD MI SSIM
Averaging method 4.5082 6.8408 44.1810 1.9984 0.9083
Similarity weighting method 6.4173 7.2328 45.4676 1.6010 0.8881
Consistency verification method 6.3149 7.1759 45.6327 1.5830 0.8811
Method of the present invention 6.4125 7.1899 45.6753 1.6103 0.8844
As shown in Table 1, the SD and MI values of the fused image obtained by the method of this embodiment are the largest, indicating that the fused image obtained by this method has higher clarity and a high degree of correlation with the source multi-focus images.
According to the multi-focus image fusion method provided by the present invention, the multi-focus images are decomposed by SIDWT into high-frequency and low-frequency components of the same size as the source images. The region energy of the high-frequency coefficients is calculated, and the high-frequency coefficient of whichever image has more large-energy values within the region is selected as the fusion coefficient, retaining the high-frequency information of the image and yielding a better overall effect. The saliency map of the source image is taken as the low-frequency component significance, because the low-frequency component has the same size as the focused region and as the multi-focus image before wavelet decomposition, and the source images are clearer, so more salient information can be obtained than by computing the saliency map of the low-frequency component directly. The purpose of the present invention is thus achieved by a fusion method that applies region-energy consistency verification to the high-frequency components and saliency-map-similarity weighting to the low-frequency components.
The multi-focus image fusion method provided by this embodiment of the present invention applies shift-invariant discrete wavelet transform multi-layer filtering to two registered multi-focus images of the same scene, decomposing them into their corresponding high-frequency sub-images and low-frequency sub-images. The high-frequency components are then fused according to the corresponding high-frequency sub-images of the two images, forming high-frequency fusion coefficients that retain the high-frequency information of the multi-focus images. The low-frequency components are fused according to the corresponding low-frequency sub-images, computing salient information to form low-frequency fusion coefficients; this works because the low-frequency component has the same size as the focused region and as the multi-focus image before wavelet decomposition, and the source images are clearer, so more salient information can be obtained than by computing the low-frequency saliency map directly. Finally, the inverse shift-invariant discrete wavelet transform is applied to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency component given by the low-frequency fusion coefficients, generating the fused image. The present invention addresses the inability of prior-art multi-focus fusion methods to simultaneously eliminate false contours in the fused image and optimally retain image detail, can fuse two rigidly registered left- and right-focused images of the same scene, and can be applied to fields such as digital imaging, computer vision, and automatic target recognition.
Corresponding to the method embodiments described in Fig. 1 and Fig. 2, as shown in Fig. 6, an embodiment of the present invention further provides a multi-focus image fusion device, comprising:
a multi-layer filtering unit 31, configured to apply shift-invariant discrete wavelet transform multi-layer filtering to two registered multi-focus images of the same scene, decomposing them into their corresponding high-frequency sub-images and low-frequency sub-images;
a high-frequency component fusion unit 32, configured to fuse the high-frequency components according to the corresponding high-frequency sub-images of the two multi-focus images, forming high-frequency fusion coefficients;
a low-frequency component fusion unit 33, configured to fuse the low-frequency components according to the corresponding low-frequency sub-images of the two multi-focus images, forming low-frequency fusion coefficients;
a fused image generation unit 34, configured to apply the inverse shift-invariant discrete wavelet transform to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency component given by the low-frequency fusion coefficients, generating the fused image.
In addition, the multi-layer filtering unit 31 is specifically configured to:
apply N layers of shift-invariant discrete wavelet transform filtering to the first multi-focus image and the second multi-focus image of the two registered multi-focus images of the same scene, respectively, decomposing the first multi-focus image into 3N corresponding first high-frequency sub-images and one first low-frequency sub-image, and the second multi-focus image into 3N corresponding second high-frequency sub-images and one second low-frequency sub-image; the first multi-focus image, the second multi-focus image, each first high-frequency sub-image, each second high-frequency sub-image, the first low-frequency sub-image, and the second low-frequency sub-image all have the same image size.
In addition, as shown in Fig. 7, the high-frequency component fusion unit 32 comprises:
a region energy determination module 321, configured to pair each first high-frequency sub-image one-to-one with the corresponding second high-frequency sub-image, and to determine the region energy of each high-frequency sub-image:
wherein ω1 is a region of n*n pixels centered on pixel (i, j); EA(i, j) is the region energy of a first high-frequency sub-image over ω1; IA(i, j) is the pixel value of that first high-frequency sub-image at pixel (i, j); EB(i, j) is the region energy of a second high-frequency sub-image over ω1; IB(i, j) is the pixel value of that second high-frequency sub-image at pixel (i, j);
a region energy comparison module 322, configured to compare, pixel by pixel, the region energy of the first high-frequency sub-image with that of the corresponding second high-frequency sub-image, and to determine the preset matrices corresponding to the first and second high-frequency sub-images according to the comparison result; the preset matrix being a matrix of the same size as the first and second high-frequency sub-images with all values initialized to 0, wherein:
Ah(i, j) is the preset matrix entry corresponding to the first high-frequency sub-image at pixel (i, j); Bh(i, j) is the preset matrix entry corresponding to the second high-frequency sub-image at pixel (i, j);
an energy win counting module 323, configured to count the region-energy wins within the (2n-1)*(2n-1) pixel region centered on each pixel:
wherein ω2 is a region of (2n-1)*(2n-1) pixels centered on pixel (i, j); Ca(i, j) is the energy win count of the first high-frequency sub-image over ω2; Cb(i, j) is the energy win count of the second high-frequency sub-image over ω2;
a high-frequency fusion coefficient determination module 324, configured to compare, for each pixel (i, j), the energy win counts of the corresponding first and second high-frequency sub-images over ω2, and to take the pixel value at (i, j) of the sub-image with the larger count as the high-frequency fusion coefficient of that pixel.
In addition, as shown in Fig. 7, the low-frequency component fusion unit 33 comprises:
a saliency map determination module 331, configured to determine, using the spectral residual method, the first saliency map Sa corresponding to the first multi-focus image and the second saliency map Sb corresponding to the second multi-focus image;
a region significance determination module 332, configured to determine, according to the formula, the first region significance Si,j(Sa) of each pixel (i, j) of the first saliency map Sa and the second region significance Si,j(Sb) of each pixel (i, j) of the second saliency map Sb; wherein Q denotes the region of m*m pixels centered on pixel (i, j); q ranges over the pixels of that region; W(q) is an m*m weight matrix with all values equal to 1; SF(Sa, q) is the pixel value of each pixel in the m*m region of the first saliency map Sa centered on pixel (i, j); SF(Sb, q) is the pixel value of each pixel in the corresponding region of the second saliency map Sb;
a correlation coefficient determination module 333, configured to determine, according to the formula, the coefficient Ri,j of the first saliency map Sa and the second saliency map Sb at each pixel (i, j);
a low-frequency fusion coefficient determination module 334, configured to: when the coefficient Ri,j is less than or equal to a preset coefficient threshold, determine that the first and second low-frequency sub-images are uncorrelated at pixel (i, j), and take the low-frequency coefficient at pixel (i, j) of the low-frequency sub-image corresponding to the larger of the first region significance Si,j(Sa) and the second region significance Si,j(Sb) as the low-frequency fusion coefficient of pixel (i, j); and, when the coefficient Ri,j is greater than the preset coefficient threshold, determine that the first and second low-frequency sub-images are correlated at pixel (i, j), compare the first region significance Si,j(Sa) with the second region significance Si,j(Sb), assign the weight wmin to the smaller of the two and the weight wmax = 1 - wmin to the larger, and combine the low-frequency coefficient of the first low-frequency sub-image at pixel (i, j) with that of the second low-frequency sub-image at pixel (i, j) by weighted summation to obtain the low-frequency fusion coefficient of pixel (i, j); wherein T is the preset coefficient threshold.
The multi-focus image fusion device provided by this embodiment of the present invention applies shift-invariant discrete wavelet transform multi-layer filtering to two registered multi-focus images of the same scene, decomposing them into their corresponding high-frequency sub-images and low-frequency sub-images. The high-frequency components are then fused according to the corresponding high-frequency sub-images of the two images, forming high-frequency fusion coefficients that retain the high-frequency information of the multi-focus images. The low-frequency components are fused according to the corresponding low-frequency sub-images, computing salient information to form low-frequency fusion coefficients; this works because the low-frequency component has the same size as the focused region and as the multi-focus image before wavelet decomposition, and the source images are clearer, so more salient information can be obtained than by computing the low-frequency saliency map directly. Finally, the inverse shift-invariant discrete wavelet transform is applied to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency component given by the low-frequency fusion coefficients, generating the fused image. The present invention addresses the inability of prior-art multi-focus fusion methods to simultaneously eliminate false contours in the fused image and optimally retain image detail, can fuse two rigidly registered left- and right-focused images of the same scene, and can be applied to fields such as digital imaging, computer vision, and automatic target recognition.
In addition, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs the following steps:
applying shift-invariant discrete wavelet transform multi-layer filtering to two registered multi-focus images of the same scene, decomposing them into their corresponding high-frequency sub-images and low-frequency sub-images;
fusing the high-frequency components according to the corresponding high-frequency sub-images of the two multi-focus images, forming high-frequency fusion coefficients;
fusing the low-frequency components according to the corresponding low-frequency sub-images of the two multi-focus images, forming low-frequency fusion coefficients;
applying the inverse shift-invariant discrete wavelet transform to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency component given by the low-frequency fusion coefficients, generating the fused image.
In addition, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the program, the processor performs the following steps:
applying shift-invariant discrete wavelet transform multi-layer filtering to two registered multi-focus images of the same scene, decomposing them into their corresponding high-frequency sub-images and low-frequency sub-images;
fusing the high-frequency components according to the corresponding high-frequency sub-images of the two multi-focus images, forming high-frequency fusion coefficients;
fusing the low-frequency components according to the corresponding low-frequency sub-images of the two multi-focus images, forming low-frequency fusion coefficients;
applying the inverse shift-invariant discrete wavelet transform to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency component given by the low-frequency fusion coefficients, generating the fused image.
It should be understood by those skilled in the art that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps is executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Specific embodiments have been applied herein to describe the principles and implementations of the present invention; the above descriptions of the embodiments are merely intended to help understand the method of the invention and its core concept. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and application scope according to the idea of the invention. In summary, the content of this specification should not be construed as limiting the invention.

Claims (10)

1. A multi-focus image fusion method, characterized by comprising:
applying shift-invariant discrete wavelet transform multi-layer filtering to two registered multi-focus images of the same scene, decomposing them into their corresponding high-frequency sub-images and low-frequency sub-images;
fusing the high-frequency components according to the corresponding high-frequency sub-images of the two multi-focus images, forming high-frequency fusion coefficients;
fusing the low-frequency components according to the corresponding low-frequency sub-images of the two multi-focus images, forming low-frequency fusion coefficients;
applying the inverse shift-invariant discrete wavelet transform to the high-frequency components given by the high-frequency fusion coefficients and the low-frequency component given by the low-frequency fusion coefficients, generating the fused image.
2. The multi-focus image fusion method according to claim 1, characterized in that applying shift-invariant discrete wavelet transform multi-layer filtering to two registered multi-focus images of the same scene and decomposing them into their corresponding high-frequency sub-images and low-frequency sub-images comprises:
applying N layers of shift-invariant discrete wavelet transform filtering to the first multi-focus image and the second multi-focus image of the two registered multi-focus images of the same scene, respectively, decomposing the first multi-focus image into 3N corresponding first high-frequency sub-images and one first low-frequency sub-image, and the second multi-focus image into 3N corresponding second high-frequency sub-images and one second low-frequency sub-image; the first multi-focus image, the second multi-focus image, each first high-frequency sub-image, each second high-frequency sub-image, the first low-frequency sub-image, and the second low-frequency sub-image all having the same image size.
3. multi-focus image fusing method according to claim 2, which is characterized in that described according to two width multiple focussing images Corresponding high frequency subgraph carries out high fdrequency component fusion, forms high fdrequency component fusion coefficients, comprising:
Each the first high frequency subgraph and each the second high frequency subgraph are corresponded respectively, and determine each high frequency subgraph respectively The region energy of picture:
Wherein, ω 1 is centered on pixel (i, j), and size is the region of n*n pixel;EA(i, j) is the first high frequency Region energy of the image in ω 1;IA(i, j) is pixel value of the first high frequency subgraph of this in pixel (i, j);EB(i, j) is Region energy of one the second high frequency subgraph in ω 1;IB(i, j) is the second high frequency subgraph of this in pixel (i, j) Pixel value;
Individual element compares the corresponding region energy of the first high frequency subgraph and the corresponding region energy of the second high frequency subgraph, and The first high frequency subgraph and the corresponding default matrix of the second high frequency subgraph are determined according to comparison result;The default matrix be with The matrix that first high frequency subgraph and the identical initial value of the second high frequency subgraph size are 0;Wherein:
Ah(i, j) is the first high frequency subgraph at pixel (i, j) pair The default matrix answered;BhFor the second high frequency subgraph at pixel (i, j) corresponding default matrix;
calculating, for each pixel, the energy-maximum count within the region of (2n-1)*(2n-1) pixels centered on that pixel;
wherein ω2 is a region of (2n-1)*(2n-1) pixels centered on pixel (i, j); Ca(i, j) is the energy-maximum count of the first high-frequency sub-image over ω2; Cb(i, j) is the energy-maximum count of the second high-frequency sub-image over ω2;
according to the energy-maximum count of the first high-frequency sub-image over ω2 and the energy-maximum count of the second high-frequency sub-image over ω2 at pixel (i, j), taking the pixel value at pixel (i, j) of the sub-image with the larger energy-maximum count as the high-frequency component fusion coefficient of pixel (i, j).
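The selection rule of this claim can be sketched as follows (illustration only): compute the n*n region energy of each sub-image, mark pixel by pixel which sub-image wins, then let each fused coefficient come from the sub-image that wins more often in the surrounding (2n-1)*(2n-1) window. Since the claim's formulas are not reproduced in the text, the sum-of-squares energy and the `>=` tie-break are assumptions:

```python
import numpy as np

def box_sum(a, r):
    """Sum of a over a (2r+1)x(2r+1) window centered at each pixel."""
    p = np.pad(a, r, mode="edge")
    k = 2 * r + 1
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(k) for j in range(k))

def fuse_high(IA, IB, n=3):
    """Fuse two high-frequency sub-images by region-energy majority vote
    (n odd). Energy = sum of squares over the n*n window (assumption)."""
    r1 = n // 2                            # n*n window w1 -> radius n//2
    EA = box_sum(IA.astype(float) ** 2, r1)
    EB = box_sum(IB.astype(float) ** 2, r1)
    Ah = (EA >= EB).astype(float)          # preset matrices: 1 where that
    Bh = 1.0 - Ah                          # sub-image has the larger energy
    r2 = n - 1                             # (2n-1)*(2n-1) window w2
    Ca = box_sum(Ah, r2)                   # energy-maximum counts over w2
    Cb = box_sum(Bh, r2)
    return np.where(Ca >= Cb, IA, IB)      # majority vote picks the coefficient
```

The second, larger window acts as a consistency check: an isolated energy win is overruled if its neighborhood mostly favors the other sub-image.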
4. The multi-focus image fusion method according to claim 2, wherein the performing of low-frequency component fusion according to the low-frequency sub-images corresponding to the two multi-focus images, forming low-frequency component fusion coefficients, comprises:
determining, using the spectral residual method, the first saliency map Sa corresponding to the first multi-focus image and the second saliency map Sb corresponding to the second multi-focus image;
determining the first region saliency S_{i,j}(Sa) at each pixel (i, j) of the first saliency map Sa and the second region saliency S_{i,j}(Sb) at each pixel (i, j) of the second saliency map Sb;
wherein Q denotes the region of m*m pixels centered on pixel (i, j); q is each pixel within that region; W(q) is a weight matrix of size m*m whose values are all 1; SF(Sa, q) is the pixel value of each pixel in the m*m region of the first saliency map Sa centered on pixel (i, j); SF(Sb, q) is the pixel value of each pixel in the m*m region of the second saliency map Sb centered on pixel (i, j);
determining the correlation coefficient R_{i,j} of the first saliency map Sa and the second saliency map Sb at each pixel (i, j);
if the correlation coefficient R_{i,j} is less than or equal to a preset coefficient threshold, determining that the first low-frequency sub-image and the second low-frequency sub-image are uncorrelated at pixel (i, j), and taking, as the low-frequency component fusion coefficient of pixel (i, j), the low-frequency coefficient at pixel (i, j) of the low-frequency sub-image corresponding to the larger of the first region saliency S_{i,j}(Sa) and the second region saliency S_{i,j}(Sb);
if the correlation coefficient R_{i,j} is greater than the preset coefficient threshold, determining that the first low-frequency sub-image and the second low-frequency sub-image are correlated at pixel (i, j); comparing the first region saliency S_{i,j}(Sa) with the second region saliency S_{i,j}(Sb), determining the weight w_min corresponding to the smaller of the two and the weight w_max = 1 - w_min corresponding to the larger; and weighting and summing the low-frequency coefficient of the first low-frequency sub-image at pixel (i, j) and the low-frequency coefficient of the second low-frequency sub-image at pixel (i, j) to obtain the low-frequency component fusion coefficient of pixel (i, j); wherein T is the preset coefficient threshold.
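Claim 4's low-frequency rule can be sketched in three steps (illustration only): a spectral-residual saliency map per source, an m*m region saliency with all-ones weights W(q), and a correlation-gated choice between selection and weighted averaging. The spectral-residual step follows the standard Hou-Zhang recipe; the exact correlation formula and the w_min formula are not reproduced in the claim text, so a local Pearson correlation and a saliency-proportional weight are used here as stated assumptions:

```python
import numpy as np

def spectral_residual(img):
    """Spectral-residual saliency, Hou-Zhang style: suppress the smooth
    part of the log-amplitude spectrum while keeping the phase."""
    F = np.fft.fft2(img.astype(float))
    logamp = np.log(np.abs(F) + 1e-12)
    p = np.pad(logamp, 1, mode="wrap")        # 3x3 average of log-amplitude
    avg = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    return np.abs(np.fft.ifft2(np.exp(logamp - avg + 1j * np.angle(F)))) ** 2

def fuse_low(LA, LB, Sa, Sb, m=3, T=0.6):
    """Select by m*m region saliency where the maps are weakly correlated
    (R <= T); otherwise weighted-average, the larger saliency getting the
    larger weight. Only w_max = 1 - w_min is fixed by the claim; the
    correlation and weight formulas below are assumptions."""
    r = m // 2
    def region(s):                            # W(q) = 1 over the window Q
        p = np.pad(np.asarray(s, float), r, mode="edge")
        return sum(p[i:i + s.shape[0], j:j + s.shape[1]]
                   for i in range(m) for j in range(m))
    Ra, Rb = region(Sa), region(Sb)           # region saliencies S(Sa), S(Sb)
    mean = lambda s: region(s) / m ** 2
    cov = mean(Sa * Sb) - mean(Sa) * mean(Sb)
    var = (mean(Sa ** 2) - mean(Sa) ** 2) * (mean(Sb ** 2) - mean(Sb) ** 2)
    R = cov / np.sqrt(np.maximum(var, 1e-12))  # assumed local correlation
    wmax = np.clip(np.maximum(Ra, Rb) / np.maximum(Ra + Rb, 1e-12), 0.5, 1.0)
    wA = np.where(Ra >= Rb, wmax, 1.0 - wmax)  # w_max goes with the more salient
    return np.where(R <= T, np.where(Ra >= Rb, LA, LB),
                    wA * LA + (1.0 - wA) * LB)
```

The gate matters because the low-frequency bands carry the overall brightness: hard selection preserves contrast where the sources disagree, while weighting avoids seams where they agree.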
5. A multi-focus image fusion device, comprising:
a multi-level filtering unit, configured to perform multi-level filtering on two multi-focus images registered to the same scene using the shift-invariant discrete wavelet transform, decomposing them into the high-frequency sub-images and low-frequency sub-images corresponding to the two multi-focus images;
a high-frequency component fusion unit, configured to perform high-frequency component fusion according to the high-frequency sub-images corresponding to the two multi-focus images, forming high-frequency component fusion coefficients;
a low-frequency component fusion unit, configured to perform low-frequency component fusion according to the low-frequency sub-images corresponding to the two multi-focus images, forming low-frequency component fusion coefficients;
a fused-image generation unit, configured to perform the inverse shift-invariant discrete wavelet transform on the high-frequency components corresponding to the high-frequency component fusion coefficients and the low-frequency components corresponding to the low-frequency component fusion coefficients, generating the fused image.
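The four units of claim 5 form a pipeline: decompose both inputs, fuse the high- and low-frequency coefficients, and invert the transform. A minimal single-level sketch, with a box low-pass as the assumed filter, absolute value as the assumed high-frequency activity measure, and a plain average standing in for the saliency-based low-frequency rule of claim 8:

```python
import numpy as np

def box(a):                        # 3x3 box low-pass, no downsampling
    p = np.pad(a, 1, mode="edge")
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def fuse(imgA, imgB):
    A, B = imgA.astype(float), imgB.astype(float)
    lowA, lowB = box(A), box(B)                # multi-level filtering unit
    highA, highB = A - lowA, B - lowB
    # high-frequency fusion unit: keep the coefficient with more activity
    high = np.where(np.abs(highA) >= np.abs(highB), highA, highB)
    # low-frequency fusion unit: plain average (stand-in for claim 8's rule)
    low = (lowA + lowB) / 2.0
    return low + high                          # fused-image generation unit
```

Because the transform is undecimated, the inverse step is a simple sum, and feeding two identical images through the pipeline returns the input unchanged.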
6. The multi-focus image fusion device according to claim 5, wherein the multi-level filtering unit is specifically configured to:
apply, to the first multi-focus image and the second multi-focus image registered to the same scene, N levels of shift-invariant discrete wavelet filtering respectively, decomposing the first multi-focus image into 3N corresponding first high-frequency sub-images and 1 first low-frequency sub-image, and the second multi-focus image into 3N corresponding second high-frequency sub-images and 1 second low-frequency sub-image; the first multi-focus image, the second multi-focus image, each first high-frequency sub-image, each second high-frequency sub-image, the first low-frequency sub-image and the second low-frequency sub-image all have the same image size.
7. The multi-focus image fusion device according to claim 6, wherein the high-frequency component fusion unit comprises:
a region energy determination module, configured to pair each first high-frequency sub-image one to one with the corresponding second high-frequency sub-image and to determine the region energy of each high-frequency sub-image separately;
wherein ω1 is a region of n*n pixels centered on pixel (i, j); E_A(i, j) is the region energy of the first high-frequency sub-image over ω1; I_A(i, j) is the pixel value of that first high-frequency sub-image at pixel (i, j); E_B(i, j) is the region energy of the second high-frequency sub-image over ω1; I_B(i, j) is the pixel value of that second high-frequency sub-image at pixel (i, j);
a region energy comparison module, configured to compare, pixel by pixel, the region energy corresponding to the first high-frequency sub-image with the region energy corresponding to the second high-frequency sub-image, and to determine, from the comparison result, the preset matrices corresponding to the first and second high-frequency sub-images; a preset matrix is a matrix of the same size as the first and second high-frequency sub-images whose initial values are 0;
wherein A_h(i, j) is the preset matrix of the first high-frequency sub-image at pixel (i, j); B_h(i, j) is the preset matrix of the second high-frequency sub-image at pixel (i, j);
an energy-maximum count calculation module, configured to calculate, for each pixel, the energy-maximum count within the region of (2n-1)*(2n-1) pixels centered on that pixel;
wherein ω2 is a region of (2n-1)*(2n-1) pixels centered on pixel (i, j); Ca(i, j) is the energy-maximum count of the first high-frequency sub-image over ω2; Cb(i, j) is the energy-maximum count of the second high-frequency sub-image over ω2;
a high-frequency component fusion coefficient determination module, configured to take, according to the energy-maximum count of the first high-frequency sub-image over ω2 and the energy-maximum count of the second high-frequency sub-image over ω2 at pixel (i, j), the pixel value at pixel (i, j) of the sub-image with the larger energy-maximum count as the high-frequency component fusion coefficient of pixel (i, j).
8. The multi-focus image fusion device according to claim 6, wherein the low-frequency component fusion unit comprises:
a saliency map determination module, configured to determine, using the spectral residual method, the first saliency map Sa corresponding to the first multi-focus image and the second saliency map Sb corresponding to the second multi-focus image;
a region saliency determination module, configured to determine the first region saliency S_{i,j}(Sa) at each pixel (i, j) of the first saliency map Sa and the second region saliency S_{i,j}(Sb) at each pixel (i, j) of the second saliency map Sb; wherein Q denotes the region of m*m pixels centered on pixel (i, j); q is each pixel within that region; W(q) is a weight matrix of size m*m whose values are all 1; SF(Sa, q) is the pixel value of each pixel in the m*m region of the first saliency map Sa centered on pixel (i, j); SF(Sb, q) is the pixel value of each pixel in the m*m region of the second saliency map Sb centered on pixel (i, j);
a correlation coefficient determination module, configured to determine the correlation coefficient R_{i,j} of the first saliency map Sa and the second saliency map Sb at each pixel (i, j);
a low-frequency component fusion coefficient determination module, configured to: when the correlation coefficient R_{i,j} is less than or equal to a preset coefficient threshold, determine that the first low-frequency sub-image and the second low-frequency sub-image are uncorrelated at pixel (i, j), and take, as the low-frequency component fusion coefficient of pixel (i, j), the low-frequency coefficient at pixel (i, j) of the low-frequency sub-image corresponding to the larger of the first region saliency S_{i,j}(Sa) and the second region saliency S_{i,j}(Sb); and when the correlation coefficient R_{i,j} is greater than the preset coefficient threshold, determine that the first low-frequency sub-image and the second low-frequency sub-image are correlated at pixel (i, j), compare the first region saliency S_{i,j}(Sa) with the second region saliency S_{i,j}(Sb), determine the weight w_min corresponding to the smaller of the two and the weight w_max = 1 - w_min corresponding to the larger, and weight and sum the low-frequency coefficient of the first low-frequency sub-image at pixel (i, j) and the low-frequency coefficient of the second low-frequency sub-image at pixel (i, j) to obtain the low-frequency component fusion coefficient of pixel (i, j); wherein T is the preset coefficient threshold.
9. A computer-readable storage medium on which a computer program is stored, wherein, when the program is executed by a processor, the following steps are implemented:
performing multi-level filtering on two multi-focus images registered to the same scene using the shift-invariant discrete wavelet transform, decomposing them into the high-frequency sub-images and low-frequency sub-images corresponding to the two multi-focus images;
performing high-frequency component fusion according to the high-frequency sub-images corresponding to the two multi-focus images, forming high-frequency component fusion coefficients;
performing low-frequency component fusion according to the low-frequency sub-images corresponding to the two multi-focus images, forming low-frequency component fusion coefficients;
performing the inverse shift-invariant discrete wavelet transform on the high-frequency components corresponding to the high-frequency component fusion coefficients and the low-frequency components corresponding to the low-frequency component fusion coefficients, generating the fused image.
10. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein, when executing the program, the processor implements the following steps:
performing multi-level filtering on two multi-focus images registered to the same scene using the shift-invariant discrete wavelet transform, decomposing them into the high-frequency sub-images and low-frequency sub-images corresponding to the two multi-focus images;
performing high-frequency component fusion according to the high-frequency sub-images corresponding to the two multi-focus images, forming high-frequency component fusion coefficients;
performing low-frequency component fusion according to the low-frequency sub-images corresponding to the two multi-focus images, forming low-frequency component fusion coefficients;
performing the inverse shift-invariant discrete wavelet transform on the high-frequency components corresponding to the high-frequency component fusion coefficients and the low-frequency components corresponding to the low-frequency component fusion coefficients, generating the fused image.
CN201810889769.1A 2018-08-07 2018-08-07 A kind of multi-focus image fusing method and device Pending CN109300096A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810889769.1A CN109300096A (en) 2018-08-07 2018-08-07 A kind of multi-focus image fusing method and device


Publications (1)

Publication Number Publication Date
CN109300096A true CN109300096A (en) 2019-02-01

Family

ID=65168054


Country Status (1)

Country Link
CN (1) CN109300096A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968883A (en) * 2010-10-28 2011-02-09 西北工业大学 Method for fusing multi-focus images based on wavelet transform and neighborhood characteristics
CN102521814A (en) * 2011-10-20 2012-06-27 华南理工大学 Wireless sensor network image fusion method based on multi-focus fusion and image splicing
CN104504652A (en) * 2014-10-10 2015-04-08 中国人民解放军理工大学 Image denoising method capable of quickly and effectively retaining edge and directional characteristics
CN106530277A (en) * 2016-10-13 2017-03-22 中国人民解放军理工大学 Image fusion method based on wavelet direction correlation coefficient
KR101725076B1 (en) * 2016-01-05 2017-04-10 전남대학교산학협력단 Method for processing satellite image and apparatus for executing the method


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Ashirbani Saha et al.: "Mutual spectral residual approach for multifocus image fusion", Digital Signal Processing *
Magdy Bayoumi: "Video Surveillance for Sensor Platforms: Algorithms and Architectures" (Chinese edition), 30 June 2018 *
Zhang Libao et al.: "Adaptive remote sensing image fusion based on saliency analysis", Chinese Journal of Lasers *
Wang Jian et al.: "Multi-focus image fusion algorithm based on region consistency", Ordnance Industry Automation *
Luo Nanchao et al.: "Multi-focus image fusion method based on low-frequency edge features and energy", Journal of Chongqing Institute of Technology (Natural Science) *
Deng Linuan et al.: "Infrared and visible image fusion algorithm based on NSST", Acta Electronica Sinica *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111292B (en) * 2019-04-30 2023-07-21 淮阴师范学院 Infrared and visible light image fusion method
CN110111292A (en) * 2019-04-30 2019-08-09 淮阴师范学院 A kind of infrared and visible light image fusion method
CN110322409A (en) * 2019-06-14 2019-10-11 浙江大学 A kind of modified wavelet image fusion method based on label figure
CN110322409B (en) * 2019-06-14 2021-08-31 浙江大学 Improved wavelet transform image fusion method based on labeled graph
CN112001289A (en) * 2020-08-17 2020-11-27 海尔优家智能科技(北京)有限公司 Article detection method and apparatus, storage medium, and electronic apparatus
CN112241940A (en) * 2020-09-28 2021-01-19 北京科技大学 Method and device for fusing multiple multi-focus images
CN112241940B (en) * 2020-09-28 2023-12-19 北京科技大学 Fusion method and device for multiple multi-focus images
CN112019758B (en) * 2020-10-16 2021-01-08 湖南航天捷诚电子装备有限责任公司 Use method of airborne binocular head-mounted night vision device and night vision device
CN112561843A (en) * 2020-12-11 2021-03-26 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
CN112862868A (en) * 2021-01-31 2021-05-28 南京信息工程大学 Motion sea wave image registration fusion method based on linear transformation and wavelet analysis
CN112862868B (en) * 2021-01-31 2023-12-01 南京信息工程大学 Motion sea wave image registration fusion method based on linear transformation and wavelet analysis
WO2022213321A1 (en) * 2021-04-08 2022-10-13 深圳高性能医疗器械国家研究院有限公司 Wavelet fusion-based pet image reconstruction method
CN113160346A (en) * 2021-04-08 2021-07-23 深圳高性能医疗器械国家研究院有限公司 PET image reconstruction method based on wavelet fusion
WO2023137956A1 (en) * 2022-01-18 2023-07-27 上海闻泰信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN109300096A (en) A kind of multi-focus image fusing method and device
CN106355570B (en) A kind of binocular stereo vision matching method of combination depth characteristic
Lai et al. Multi-scale visual attention deep convolutional neural network for multi-focus image fusion
CN105608667A (en) Method and device for panoramic stitching
CN101394573B (en) Panoramagram generation method and system based on characteristic matching
CN106228528B (en) A kind of multi-focus image fusing method based on decision diagram and rarefaction representation
CN108122191A (en) Fish eye images are spliced into the method and device of panoramic picture and panoramic video
CN109118544A (en) Synthetic aperture imaging method based on perspective transform
CN105513014A (en) Method and system for reconstruction of multiframe image super resolution
US8994809B2 (en) Method and apparatus for simulating depth of field (DOF) in microscopy
CN109801325A (en) A kind of Binocular Stereo Vision System obtains the method and device of disparity map
CN116847209B (en) Log-Gabor and wavelet-based light field full-focusing image generation method and system
CN111179173B (en) Image splicing method based on discrete wavelet transform and gradient fusion algorithm
CN112669249A (en) Infrared and visible light image fusion method combining improved NSCT (non-subsampled Contourlet transform) transformation and deep learning
CN116128820A (en) Pin state identification method based on improved YOLO model
Anish et al. A survey on multi-focus image fusion methods
CN105898279B (en) A kind of objective evaluation method for quality of stereo images
CN117094895B (en) Image panorama stitching method and system
Hua et al. Background extraction using random walk image fusion
Liu et al. A fast multi-focus image fusion algorithm by DWT and focused region decision map
Chen et al. New stereo high dynamic range imaging method using generative adversarial networks
Yoo et al. 3D image reconstruction from multi-focus microscope: axial super-resolution and multiple-frame processing
CN107392986A (en) A kind of image depth rendering intent based on gaussian pyramid and anisotropic filtering
CN107256562A (en) Image defogging method and device based on binocular vision system
Park et al. Unpaired image demoiréing based on cyclic moiré learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190201