CN105913407A - Method for optimizing multi-focus image fusion based on a difference image - Google Patents

Method for optimizing multi-focus image fusion based on a difference image

Info

Publication number
CN105913407A
CN105913407A (application CN201610210196.6A; granted as CN105913407B)
Authority
CN
China
Prior art keywords
frequency information
image
fusion
map2
map1
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610210196.6A
Other languages
Chinese (zh)
Other versions
CN105913407B (en)
Inventor
Li Huafeng (李华锋)
Liu Xinkun (刘鑫坤)
Yu Zhengtao (余正涛)
Mao Cunli (毛存礼)
Guo Jianyi (郭剑毅)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan United Visual Technology Co ltd
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN201610210196.6A priority Critical patent/CN105913407B/en
Publication of CN105913407A publication Critical patent/CN105913407A/en
Application granted granted Critical
Publication of CN105913407B publication Critical patent/CN105913407B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Abstract

The invention discloses a method for optimizing multi-focus image fusion based on a difference image. The multi-focus source images are first fused preliminarily by a multi-scale, multi-directional decomposition; the preliminary fusion image is then updated using the residual information measured by difference images and the information in the focused regions. The image regions obtained by combining the multi-scale and multi-directional methods are accurately located, highly precise, and well suited to the human visual system. Because the method updates the preliminary fusion image with the residual information of the difference images, the final fused image is more accurate and of higher quality, giving the method broad application prospects in both military and civilian fields.

Description

A method for optimizing multi-focus image fusion based on difference images
Technical field
The invention belongs to the fields of image processing and information fusion, and in particular relates to a method for optimizing multi-focus image fusion based on difference images. It has wide applications in military and civilian areas.
Background technology
Multi-focus image fusion combines images of the same scene taken with different focus settings, exploiting the complementarity of the integrated information to form a fused image carrying multi-temporal, multi-view information and thereby obtain a more comprehensive description of the scene.
At present, multi-focus image fusion methods are broadly divided into two classes: spatial-domain and transform-domain. Spatial-domain techniques are further divided into pixel-level fusion algorithms (such as weighted averaging) and region-based fusion algorithms. Region-based algorithms take into account the relationship between the grey value of a pixel and those of its neighbours, and generally achieve better sharpness and contrast than pixel-level fusion. Compared with spatial-domain techniques, transform-domain techniques locate image regions more accurately and with higher precision. Multi-scale transforms play an irreplaceable, central role in transform-domain fusion; the best-known methods are the Laplacian pyramid and the discrete wavelet transform (DWT). With both methods, however, fusion quality degrades markedly when the source images are misregistered or when moving regions appear during acquisition. To address this, the shift-invariant dual-tree complex wavelet transform was proposed. In recent years further methods have appeared, such as the discrete cosine transform (DCT), the non-subsampled shearlet transform (NSST), and the quaternion wavelet transform (QWT). Nevertheless, the fact that the fusion operates on transform coefficients rather than on the original pixel values can become the main shortcoming of these fusion methods.
Another factor affecting fusion quality is the fusion rule applied to the sub-bands. In recent years many researchers have studied this problem and achieved notable results. Most of this research builds on a common framework combining a multi-scale transform (MST) with sparse representation (SR): the high-frequency sub-bands are fused by the absolute-maximum rule, while the low-frequency sub-bands are fused by an SR-based rule. This avoids smoothing away fine detail, but it still has defects, such as the inherent shortcoming of treating image features independently. To address this, a method combining spatial frequency with a pulse-coupled neural network (PCNN) in the non-subsampled contourlet transform (NSCT) domain was proposed; it overcomes the complexity of the transform and the excessive number of iterations, but still requires too many manually set parameters. A neighborhood-feature NSCT method was therefore proposed, in which the low-frequency sub-bands follow a weighted fusion rule based on neighboring-region energy and the high-frequency sub-bands are fused by neighborhood features; this method, however, concentrates on low-frequency region energy and does not further process the extraction of effective-region information.
Summary of the invention
The present invention proposes a method for optimizing multi-focus image fusion based on difference images. The multi-focus source images are first fused preliminarily by a multi-scale, multi-directional decomposition; the preliminary fusion image is then updated using the residual information measured by the difference images and the information in the focused regions, yielding a multi-focus fusion result of better quality and higher precision. The technical solution adopted by the invention comprises the following specific steps:
Step 1: image pre-processing;
Apply Gaussian filtering to the source images I_A and I_B, then compute their mean image I_AVG = (I_A + I_B)/2;
I_A denotes source image A, I_B denotes source image B, and I_AVG denotes the mean of A and B;
Step 2: multi-scale, multi-directional decomposition to obtain the initial fusion image;
First apply a neighborhood-distance filter to the source images I_A and I_B to perform a four-level multi-scale decomposition, each level containing high-frequency and low-frequency information;
Then apply the NSCT multi-directional decomposition to the high-frequency information, with 16 directions at the first level, 8 at the second, 4 at the third, and 1 at the last, obtaining the high-frequency information I_A,h and low-frequency information I_A,l of I_A, and the high-frequency information I_B,h and low-frequency information I_B,l of I_B;
Finally, compute the neighborhood-based spatial-frequency values of the pixels in the high-frequency information and construct the decision matrix for high-frequency fusion. Fuse the high-frequency information preliminarily according to the decision matrix, average the low-frequency information, and pass the fused high-frequency information and the averaged low-frequency information through the inverse multi-directional multi-scale transform to form the initial fusion image I_IN;
Step 3: build the difference images and compute the energy values;
DIF_(I_AVG,I_A)(i,j) = I_AVG(i,j) - I_A(i,j) denotes the difference at pixel (i,j) between source image I_A and the mean image I_AVG, and DIF_(I_IN,I_AVG)(i,j) = I_IN(i,j) - I_AVG(i,j) denotes the difference at pixel (i,j) between the initial fusion image I_IN and the mean image I_AVG;
Compute the energy values by summing the absolute differences over an M × N neighborhood:
Energy_(I_IN,I_AVG)(i,j) denotes the energy value corresponding to the difference between I_IN(i,j) and I_AVG(i,j), Energy_(I_AVG,I_A)(i,j) denotes the energy value corresponding to the difference between I_AVG(i,j) and I_A(i,j), M × N denotes the preset neighborhood size, and (i+m, j+n) denotes any point in the M × N neighborhood centered at (i,j);
Apply a bilateral filter to the energy matrices Energy_(I_AVG,I_A) and Energy_(I_IN,I_AVG) for edge-preserving denoising, obtaining the matrices char1 and char2 respectively;
Step 4: construct the initial binary images;
(1) Construct the initial binary maps MAP1 and MAP2 from char1 and char2:
MAP1(i,j) denotes the value of pixel (i,j) in the binary map MAP1, MAP2(i,j) its value in MAP2, char1(i,j) its value in char1, and char2(i,j) its value in char2; δ is a preset threshold;
(2) Correct the binary images:
X × Y denotes the preset neighborhood size, and (i+a, j+b) denotes any point in the X × Y neighborhood centered at (i,j);
Correct MAP1 and MAP2 by the above formula to obtain the corrected binary maps MAP1' and MAP2'. For the regions {(i,j) | MAP1'(i,j) = MAP2'(i,j)}, where the source images are neither clearly focused nor clearly defocused, set MAP1'(i,j) = MAP2'(i,j) = 0.5;
Step 5: fuse the source images I_A and I_B using the corrected binary maps MAP1' and MAP2'.
(1) Spatial-domain fusion:
Compare the corrected binary maps MAP1' and MAP2'. If MAP1'(i,j) = 0 and MAP2'(i,j) = 1, fill the pixel from source image I_B(i,j); if MAP1'(i,j) = 1 and MAP2'(i,j) = 0, fill it from source image I_A(i,j); in all other cases fill it from the initial fusion image I_IN(i,j). I_F(i,j) is the final optimized fusion result;
(2) Transform-domain fusion:
Compare the corrected binary maps MAP1' and MAP2'. If MAP1'(i,j) = 1 and MAP2'(i,j) = 0, take the high-frequency information I_A,h(i,j) and low-frequency information I_A,l(i,j) of source image I_A as the high- and low-frequency information of the fused image; if MAP1'(i,j) = 0 and MAP2'(i,j) = 1, take the high-frequency information I_B,h(i,j) and low-frequency information I_B,l(i,j) of source image I_B; if MAP1'(i,j) = MAP2'(i,j) = 0.5, take the high-frequency information I_IN,h(i,j) and low-frequency information I_IN,l(i,j) of the initial fusion image I_IN. Finally, apply the inverse transform to the fused high- and low-frequency information to obtain the final fusion result I_F.
Beneficial effects of the present invention
The method updates the fusion image using difference images; the residual multi-focus information allows a more accurate update, so the fused image is more accurate and of higher quality. Combining the multi-scale and multi-directional methods locates image regions accurately and precisely, in a way suited to the human visual system. The multi-directional method compensates for drawbacks of the multi-scale method: it recovers edge information that the multi-scale decomposition would otherwise lose, so important structures such as edges and lines are effectively preserved by the multi-directional fusion.
Brief description of the drawings:
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 is the flow chart for forming the initial fusion image;
Fig. 3 is the flow chart of spatial-domain image fusion in the invention;
Fig. 4 is the flow chart of transform-domain image fusion in the invention;
Fig. 5 shows the two groups of multi-focus source images to be fused: (a) and (b) are the flower images, (c) and (d) are the lab images;
Fig. 6 shows the fusion results of the flower images under the method of the invention and four comparison methods, where (a) and (b) are the results of the proposed spatial-domain and transform-domain variants respectively, and (c)-(f) are the results of the four comparison methods: the non-subsampled contourlet transform (NSCT), NSCT with contrast enhancement (NSCT-Co), neighborhood distance (ND), and multi-directional neighborhood distance (MMND);
Fig. 7 shows the difference images between the fused images in Fig. 6 and source image (b);
Fig. 8 shows the fusion results of the lab images under the method of the invention and the same four comparison methods, with panels arranged as in Fig. 6;
Fig. 9 shows the difference images between the fused images in Fig. 8 and source image (d).
Detailed description of the invention
The invention is further illustrated below with reference to the drawings and specific embodiments.
Embodiment 1: as shown in Fig. 1, the invention first fuses the multi-focus images preliminarily on a multi-scale, multi-directional basis, then updates the initial fusion image in the spatial domain and the transform domain using difference images, obtaining the final fused image. As shown in Figs. 2-4, the concrete steps are:
Step 1: image pre-processing:
Apply Gaussian filtering to the source images I_A and I_B, then compute their mean image I_AVG = (I_A + I_B)/2;
I_A denotes source image A, I_B denotes source image B, and I_AVG denotes the mean of A and B;
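The pre-processing step can be sketched in a few lines of NumPy. The kernel size and sigma below are illustrative assumptions, since the patent does not specify the Gaussian filter parameters:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_filter(img, size=5, sigma=1.0):
    """Separable Gaussian blur with edge replication (size/sigma assumed)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    # convolve each row, then each column
    p = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, p)
    p = np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, p)
    return p

def preprocess(I_A, I_B):
    """Step 1: smooth both sources, then form the mean image I_AVG."""
    I_A = gaussian_filter(I_A.astype(float))
    I_B = gaussian_filter(I_B.astype(float))
    return I_A, I_B, (I_A + I_B) / 2.0
```

The mean image I_AVG serves in step 3 as the reference against which residual focus information is measured.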
Step 2: multi-scale, multi-directional decomposition to obtain the initial fusion image;
First apply a neighborhood-distance filter to the source images I_A and I_B to perform a four-level multi-scale decomposition, each level containing high-frequency and low-frequency information;
Then apply the NSCT multi-directional decomposition to the high-frequency information, with 16 directions at the first level, 8 at the second, 4 at the third, and 1 at the last, obtaining the high-frequency information I_A,h and low-frequency information I_A,l of I_A, and the high-frequency information I_B,h and low-frequency information I_B,l of I_B;
Finally, compute the neighborhood-based spatial-frequency values of the pixels in the high-frequency information and construct the decision matrix for high-frequency fusion. Fuse the high-frequency information preliminarily according to the decision matrix, average the low-frequency information, and pass the fused high-frequency information and the averaged low-frequency information through the inverse multi-directional multi-scale transform to form the initial fusion image I_IN;
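The decision-matrix construction for the high-frequency sub-bands can be illustrated without reimplementing the neighborhood-distance filter or the NSCT. The sketch below assumes the sub-band coefficients are already available as 2-D arrays; the 3 × 3 window is an illustrative choice, as the patent does not state the window size used for the spatial frequency:

```python
import numpy as np

def spatial_frequency(coef, w=3):
    """Neighborhood spatial frequency: squared horizontal and vertical
    first differences, box-summed over a w x w window, then square-rooted."""
    rf = np.zeros_like(coef, dtype=float)
    cf = np.zeros_like(coef, dtype=float)
    rf[:, 1:] = (coef[:, 1:] - coef[:, :-1]) ** 2   # row-frequency term
    cf[1:, :] = (coef[1:, :] - coef[:-1, :]) ** 2   # column-frequency term
    energy = rf + cf
    pad = w // 2
    p = np.pad(energy, pad, mode='edge')
    H, W = coef.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = p[i:i + w, j:j + w].sum()
    return np.sqrt(out)

def fuse_highpass(hA, hB):
    """Decision matrix: at each pixel keep the coefficient whose
    neighborhood spatial frequency is larger."""
    decision = spatial_frequency(hA) >= spatial_frequency(hB)
    return np.where(decision, hA, hB)
```

Applied per direction and per level, this yields the preliminary high-frequency fusion; the low-frequency band is simply averaged, as the text states.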
Step 3: build the difference images and compute the energy values;
DIF_(I_AVG,I_A)(i,j) = I_AVG(i,j) - I_A(i,j) denotes the difference at pixel (i,j) between source image I_A and the mean image I_AVG, and DIF_(I_IN,I_AVG)(i,j) = I_IN(i,j) - I_AVG(i,j) denotes the difference at pixel (i,j) between the initial fusion image I_IN and the mean image I_AVG;
Compute the energy values by summing the absolute differences over an M × N neighborhood, where M × N denotes the preset neighborhood size and (i+m, j+n) denotes any point in the M × N neighborhood centered at (i,j); in this embodiment M × N = 11 × 11.
Apply a bilateral filter to the energy matrices Energy_(I_AVG,I_A) and Energy_(I_IN,I_AVG) to obtain the matrices char1 and char2 respectively. The bilateral filter here trades off the adjacency of the energy values against the similarity of the pixel values, considering spatial information and grey-level similarity simultaneously to achieve edge-preserving denoising. Its parameters are set to a window size of 11 × 11 and a Gaussian variance sigma = 21.
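A direct NumPy rendering of the energy computation and the edge-preserving step follows. The patent specifies an 11 × 11 window and sigma = 21 for the bilateral filter but gives only a single variance; splitting it into a spatial and a range sigma is an assumption of this sketch:

```python
import numpy as np

def energy_map(diff, M=11, N=11):
    """Sum of |difference| over an M x N neighborhood centered on each pixel."""
    pad_m, pad_n = M // 2, N // 2
    p = np.pad(np.abs(diff), ((pad_m, pad_m), (pad_n, pad_n)), mode='edge')
    H, W = diff.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = p[i:i + M, j:j + N].sum()
    return out

def bilateral(img, w=11, sigma_s=21.0, sigma_r=21.0):
    """Edge-preserving smoothing: Gaussian weight in space and in range.
    (Separate spatial/range sigmas are an assumption; the patent states one.)"""
    pad = w // 2
    p = np.pad(img, pad, mode='edge')
    ys, xs = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    space = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    H, W = img.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = p[i:i + w, j:j + w]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = space * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

`char1 = bilateral(energy_map(I_AVG - I_A))` and `char2 = bilateral(energy_map(I_IN - I_AVG))` then reproduce the two smoothed energy matrices used in step 4.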
Step 4: construct the initial binary images;
(1) Construct the initial binary maps MAP1 and MAP2 from char1 and char2:
MAP1(i,j) denotes the value of pixel (i,j) in the binary map MAP1, MAP2(i,j) its value in MAP2, char1(i,j) its value in char1, and char2(i,j) its value in char2; δ is a preset threshold, δ = 0.0025 in this embodiment.
(2) Correct the binary images:
X × Y denotes the preset neighborhood size, and (i+a, j+b) denotes any point in the X × Y neighborhood centered at (i,j).
Correct MAP1 and MAP2 by the above formula to obtain the corrected binary maps MAP1' and MAP2'. The purpose of this pixel-consistency verification is to discard isolated wrongly selected or missed pixels, which would otherwise harm the fusion. For the region {(i,j) | MAP1'(i,j) = MAP2'(i,j)}, i.e. where neither of the two source images is clearly focused or defocused, set MAP1'(i,j) = MAP2'(i,j) = 0.5.
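The binary-map formulas in the published text are images that did not survive extraction, so the thresholding rule below is only one plausible reading; the consistency correction by neighborhood majority vote follows the stated purpose of removing isolated mis-selected pixels:

```python
import numpy as np

def initial_map(char1, char2, delta=0.0025):
    """Plausible reading of the construction (the exact formula is not in the
    text): mark a pixel as focused in a source when its smoothed energy
    exceeds the other source's by more than delta."""
    map1 = (char1 > char2 + delta).astype(float)
    map2 = (char2 > char1 + delta).astype(float)
    return map1, map2

def consistency_correct(m, X=5, Y=5):
    """Majority vote in an X x Y window removes isolated mis-selected pixels
    (window size assumed; the patent only names it X x Y)."""
    pad_x, pad_y = X // 2, Y // 2
    p = np.pad(m, ((pad_x, pad_x), (pad_y, pad_y)), mode='edge')
    H, W = m.shape
    out = np.empty_like(m)
    for i in range(H):
        for j in range(W):
            out[i, j] = 1.0 if p[i:i + X, j:j + Y].mean() > 0.5 else 0.0
    return out

def resolve_ambiguous(m1, m2):
    """Where the corrected maps agree, the focus decision is ambiguous: 0.5."""
    amb = (m1 == m2)
    m1, m2 = m1.copy(), m2.copy()
    m1[amb] = 0.5
    m2[amb] = 0.5
    return m1, m2
```

The two 0.5-marked maps feed directly into the selection rules of step 5.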
Step 5: fuse the source images I_A and I_B using the corrected binary maps MAP1' and MAP2'.
(1) Spatial-domain fusion:
Compare the corrected binary maps MAP1' and MAP2'. If MAP1'(i,j) = 0 and MAP2'(i,j) = 1, fill the pixel from source image I_B(i,j); if MAP1'(i,j) = 1 and MAP2'(i,j) = 0, fill it from source image I_A(i,j); in all other cases fill it from the initial fusion image I_IN(i,j). I_F(i,j) is the final optimized fusion result;
(2) Transform-domain fusion:
Compare the corrected binary maps MAP1' and MAP2'. If MAP1'(i,j) = 1 and MAP2'(i,j) = 0, take the high-frequency information I_A,h(i,j) and low-frequency information I_A,l(i,j) of source image I_A as the high- and low-frequency information of the fused image; if MAP1'(i,j) = 0 and MAP2'(i,j) = 1, take the high-frequency information I_B,h(i,j) and low-frequency information I_B,l(i,j) of source image I_B; if MAP1'(i,j) = MAP2'(i,j) = 0.5, take the high-frequency information I_IN,h(i,j) and low-frequency information I_IN,l(i,j) of the initial fusion image I_IN. Finally, apply the inverse transform to the fused high- and low-frequency information to obtain the final fusion result I_F.
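The spatial-domain update is a pixel-wise selection between I_A, I_B, and the initial fusion I_IN; a minimal sketch:

```python
import numpy as np

def fuse_spatial(I_A, I_B, I_IN, m1, m2):
    """Spatial-domain update: take I_A where only MAP1'=1, I_B where only
    MAP2'=1, and fall back to the initial fusion I_IN everywhere else
    (including the ambiguous 0.5 regions)."""
    F = I_IN.astype(float).copy()
    pick_a = (m1 == 1.0) & (m2 == 0.0)
    pick_b = (m1 == 0.0) & (m2 == 1.0)
    F[pick_a] = I_A[pick_a]
    F[pick_b] = I_B[pick_b]
    return F
```

Applying the same selection to each high- and low-frequency sub-band before the inverse transform gives the transform-domain variant described in rule (2).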
Experimental results
To demonstrate the superiority of the method over traditional approaches, four comparison methods are selected: the non-subsampled contourlet transform (NSCT), NSCT with contrast enhancement (NSCT-Co), neighborhood distance (ND), and multi-directional neighborhood distance (MMND).
The source images chosen in this embodiment are (a) and (b) in Fig. 5, a static flower scene. The fusion results are shown in Fig. 6, where (a) and (b) are the results of the proposed spatial-domain optimization (MMND-SD) and transform-domain optimization (MMND-TD) respectively. Since the effect of multi-focus fusion is hard to judge visually from Fig. 6 alone, the fused images are subtracted from a source image for a clearer comparison; source image (b), whose difference images are visually most revealing, is used here. In Fig. 7, within the red boxes of (a) and (b) the method of the invention fuses well: almost no residual source-image information is observed. In the remaining difference images residual source information is clearly visible; (e) and (f) still contain the texture of the wall, and in (c) and (d) the leaves in the lower-right corner are clearly not fully fused.
Besides the subjective visual evaluation, objective indices are also used: mutual information (MI), nonlinear correlation information entropy (NCIE), the multi-scale metric Q_M, and Chen's metric Q_CB. Larger index values indicate better fusion. As shown in Table 1, all four indices of the proposed method are larger than those of the other four methods, showing that the proposed method fuses better.
Table 1: Objective evaluation of the different fusion methods on the flower images
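Of the four objective indices, mutual information is the simplest to reproduce; a histogram-based estimate (the bin count is an illustrative choice):

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """MI between two images from their joint grey-level histogram;
    larger values mean the fused image shares more information with the source."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

For a fused image F and sources A, B, the tabulated MI index is conventionally MI(F, A) + MI(F, B); NCIE, Q_M, and Q_CB require their own, more involved implementations.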
Embodiment 2: the source images chosen in this embodiment are (c) and (d) in Fig. 5, a pair with slight vibration between exposures, which is representative in multi-focus image fusion; the remaining steps are identical to Embodiment 1.
The fusion results are shown in Fig. 8; the difference images against source image (d) are shown in Fig. 9. The fused image of the proposed method shows no obvious residual information, whereas in the other four images the region around the person's head shows clear residue caused by misaligned region selection. The objective indices are shown in Table 2: all four indices of the proposed method are larger than those of the other four methods, again showing that the proposed method fuses better.
Table 2: Objective evaluation of the different fusion methods on the lab images
The specific embodiments of the invention have been explained in detail above with reference to the drawings, but the invention is not limited to these embodiments; within the knowledge of a person of ordinary skill in the art, various changes can be made without departing from the concept of the invention.

Claims (2)

1. A method for optimizing multi-focus image fusion based on difference images, characterised in that it comprises the following specific steps:
Step 1: image pre-processing;
Apply Gaussian filtering to the source images I_A and I_B, then compute their mean image:
$$I_{AVG} = \frac{I_{A} + I_{B}}{2}$$
where I_A denotes source image A, I_B denotes source image B, and I_AVG denotes the mean of A and B;
Step 2: multi-scale, multi-directional decomposition to obtain the initial fusion image;
First apply a neighborhood-distance filter to the source images I_A and I_B to perform a four-level multi-scale decomposition, each level containing high-frequency and low-frequency information;
Then apply the NSCT multi-directional decomposition to the high-frequency information, with 16 directions at the first level, 8 at the second, 4 at the third, and 1 at the last, obtaining the high-frequency information I_A,h and low-frequency information I_A,l of I_A, and the high-frequency information I_B,h and low-frequency information I_B,l of I_B;
Finally, compute the neighborhood-based spatial-frequency values of the pixels in the high-frequency information and construct the decision matrix for high-frequency fusion. Fuse the high-frequency information preliminarily according to the decision matrix, average the low-frequency information, and pass the fused high-frequency information and the averaged low-frequency information through the inverse multi-directional multi-scale transform to form the initial fusion image I_IN;
Step 3: build the difference images and compute the energy values;
$$\mathrm{DIF}_{(I_{AVG},I_{A})}(i,j) = I_{AVG}(i,j) - I_{A}(i,j)$$
$$\mathrm{DIF}_{(I_{IN},I_{AVG})}(i,j) = I_{IN}(i,j) - I_{AVG}(i,j)$$
where DIF_(I_AVG,I_A)(i,j) denotes the difference at pixel (i,j) between source image I_A and the mean image I_AVG, and DIF_(I_IN,I_AVG)(i,j) denotes the difference at pixel (i,j) between the initial fusion image I_IN and the mean image I_AVG;
Compute the energy values:
$$\mathrm{Energy}_{(I_{AVG},I_{A})}(i,j) = \sum_{m=-(M-1)/2}^{(M-1)/2}\;\sum_{n=-(N-1)/2}^{(N-1)/2} \left|\mathrm{DIF}_{(I_{AVG},I_{A})}(i+m,\,j+n)\right|$$
$$\mathrm{Energy}_{(I_{IN},I_{AVG})}(i,j) = \sum_{m=-(M-1)/2}^{(M-1)/2}\;\sum_{n=-(N-1)/2}^{(N-1)/2} \left|\mathrm{DIF}_{(I_{IN},I_{AVG})}(i+m,\,j+n)\right|$$
where Energy_(I_IN,I_AVG)(i,j) denotes the energy value corresponding to the difference between I_IN(i,j) and I_AVG(i,j), Energy_(I_AVG,I_A)(i,j) denotes the energy value corresponding to the difference between I_AVG(i,j) and I_A(i,j), M × N denotes the preset neighborhood size, and (i+m, j+n) denotes any point in the M × N neighborhood centered at (i,j);
Apply a bilateral filter to the energy matrices Energy_(I_AVG,I_A) and Energy_(I_IN,I_AVG) for edge-preserving denoising, obtaining the matrices char1 and char2 respectively;
Step 4: construct the initial binary images;
(1) Construct the initial binary maps MAP1 and MAP2 from char1 and char2:
MAP1(i,j) denotes the value of pixel (i,j) in the binary map MAP1, MAP2(i,j) its value in MAP2, char1(i,j) its value in char1, and char2(i,j) its value in char2; δ is a preset threshold;
(2) Correct the binary images:
X × Y denotes the preset neighborhood size, and (i+a, j+b) denotes any point in the X × Y neighborhood centered at (i,j);
Correct MAP1 and MAP2 by the above formula to obtain the corrected binary maps MAP1' and MAP2'. For the regions {(i,j) | MAP1'(i,j) = MAP2'(i,j)}, where the source images are neither clearly focused nor clearly defocused, set MAP1'(i,j) = MAP2'(i,j) = 0.5;
Step 5: fuse the source images I_A and I_B using the corrected binary maps MAP1' and MAP2'.
2. The method for optimizing multi-focus image fusion based on difference images according to claim 1, characterised in that the fusion rules in step 5 are as follows:
(1) Spatial-domain fusion:
Compare the corrected binary maps MAP1' and MAP2'. If MAP1'(i,j) = 0 and MAP2'(i,j) = 1, fill the pixel from source image I_B(i,j); if MAP1'(i,j) = 1 and MAP2'(i,j) = 0, fill it from source image I_A(i,j); in all other cases fill it from the initial fusion image I_IN(i,j). I_F(i,j) is the final optimized fusion result;
(2) Transform-domain fusion:
$$\begin{cases} I_{F,h}(i,j)=I_{A,h}(i,j),\ I_{F,l}(i,j)=I_{A,l}(i,j) & \mathrm{MAP1}'(i,j)=1,\ \mathrm{MAP2}'(i,j)=0\\ I_{F,h}(i,j)=I_{B,h}(i,j),\ I_{F,l}(i,j)=I_{B,l}(i,j) & \mathrm{MAP1}'(i,j)=0,\ \mathrm{MAP2}'(i,j)=1\\ I_{F,h}(i,j)=I_{IN,h}(i,j),\ I_{F,l}(i,j)=I_{IN,l}(i,j) & \mathrm{MAP1}'(i,j)=\mathrm{MAP2}'(i,j)=0.5 \end{cases}$$
Compare the corrected binary maps MAP1' and MAP2'. If MAP1'(i,j) = 1 and MAP2'(i,j) = 0, take the high-frequency information I_A,h(i,j) and low-frequency information I_A,l(i,j) of source image I_A as the high- and low-frequency information of the fused image; if MAP1'(i,j) = 0 and MAP2'(i,j) = 1, take the high-frequency information I_B,h(i,j) and low-frequency information I_B,l(i,j) of source image I_B; if MAP1'(i,j) = MAP2'(i,j) = 0.5, take the high-frequency information I_IN,h(i,j) and low-frequency information I_IN,l(i,j) of the initial fusion image I_IN. Finally, apply the inverse transform to the fused high- and low-frequency information to obtain the final fusion result I_F.
CN201610210196.6A 2016-04-06 2016-04-06 A method for optimizing multi-focus image fusion based on difference images Active CN105913407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610210196.6A CN105913407B (en) 2016-04-06 2016-04-06 A method for optimizing multi-focus image fusion based on difference images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610210196.6A CN105913407B (en) 2016-04-06 2016-04-06 A method for optimizing multi-focus image fusion based on difference images

Publications (2)

Publication Number Publication Date
CN105913407A true CN105913407A (en) 2016-08-31
CN105913407B CN105913407B (en) 2018-09-28

Family

ID=56745700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610210196.6A Active CN105913407B (en) 2016-04-06 2016-04-06 A method for optimizing multi-focus image fusion based on difference images

Country Status (1)

Country Link
CN (1) CN105913407B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403416A (en) * 2017-07-26 2017-11-28 温州大学 NSCT-based medical ultrasound image denoising method with improved filtering and threshold function
CN108171676A (en) * 2017-12-01 2018-06-15 西安电子科技大学 Multi-focus image fusion method based on curvature filtering
CN108399645A (en) * 2018-02-13 2018-08-14 中国传媒大学 Image encoding method and device based on contourlet transform
CN109658371A (en) * 2018-12-05 2019-04-19 北京林业大学 Fusion method and system for infrared and visible light images, and related equipment
CN109754386A (en) * 2019-01-15 2019-05-14 哈尔滨工程大学 A progressive forward-looking sonar image fusion method
CN113379660A (en) * 2021-06-15 2021-09-10 深圳市赛蓝科技有限公司 Multi-dimensional rule multi-focus image fusion method and system
CN113947554A (en) * 2020-07-17 2022-01-18 四川大学 Multi-focus image fusion method based on NSST and significant information extraction
CN115205181A (en) * 2022-09-15 2022-10-18 季华实验室 Multi-focus image fusion method and device, electronic equipment and storage medium
CN116916166A (en) * 2023-09-12 2023-10-20 湖南湘银河传感科技有限公司 Telemetry terminal based on AI image analysis

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130101177A1 (en) * 2011-04-12 2013-04-25 Hitoshi Yamada Motion estimation apparatus, depth estimation apparatus, and motion estimation method
CN103854265A (en) * 2012-12-03 2014-06-11 西安元朔科技有限公司 Novel multi-focus image fusion technology

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130101177A1 (en) * 2011-04-12 2013-04-25 Hitoshi Yamada Motion estimation apparatus, depth estimation apparatus, and motion estimation method
CN103854265A (en) * 2012-12-03 2014-06-11 西安元朔科技有限公司 Novel multi-focus image fusion technology

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jing Tian et al.: "Adaptive multi-focus image fusion using a wavelet-based statistical sharpness measure", Signal Processing *
Yong Yang et al.: "Multifocus Image Fusion Based on NSCT and Focused Area Detection", IEEE Sensors Journal *
Liu Yu et al.: "Fast multi-focus image fusion combining wavelet transform and adaptive block partitioning", Journal of Image and Graphics *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403416B (en) * 2017-07-26 2020-07-28 Wenzhou University Medical ultrasound image denoising method based on NSCT with improved filtering and threshold function
CN107403416A (en) * 2017-07-26 2017-11-28 Wenzhou University Medical ultrasound image denoising method based on NSCT with improved filtering and threshold function
CN108171676A (en) * 2017-12-01 2018-06-15 Xidian University Multi-focus image fusion method based on curvature filtering
CN108171676B (en) * 2017-12-01 2019-10-11 Xidian University Multi-focus image fusion method based on curvature filtering
CN108399645A (en) * 2018-02-13 2018-08-14 Communication University of China Image encoding method and device based on contourlet transform
CN108399645B (en) * 2018-02-13 2022-01-25 Communication University of China Image encoding method and device based on contourlet transform
CN109658371A (en) * 2018-12-05 2019-04-19 Beijing Forestry University Fusion method and system for infrared and visible light images, and related device
CN109658371B (en) * 2018-12-05 2020-12-15 Beijing Forestry University Fusion method and system for infrared and visible light images, and related device
CN109754386A (en) * 2019-01-15 2019-05-14 Harbin Engineering University Progressive forward-looking sonar image fusion method
CN113947554A (en) * 2020-07-17 2022-01-18 Sichuan University Multi-focus image fusion method based on NSST and salient information extraction
CN113947554B (en) * 2020-07-17 2023-07-14 Sichuan University Multi-focus image fusion method based on NSST and salient information extraction
CN113379660A (en) * 2021-06-15 2021-09-10 深圳市赛蓝科技有限公司 Multi-dimensional-rule multi-focus image fusion method and system
CN115205181A (en) * 2022-09-15 2022-10-18 Jihua Laboratory Multi-focus image fusion method and device, electronic device, and storage medium
CN116916166A (en) * 2023-09-12 2023-10-20 湖南湘银河传感科技有限公司 Telemetry terminal based on AI image analysis
CN116916166B (en) * 2023-09-12 2023-11-17 湖南湘银河传感科技有限公司 Telemetry terminal based on AI image analysis

Also Published As

Publication number Publication date
CN105913407B (en) 2018-09-28

Similar Documents

Publication Publication Date Title
CN105913407A (en) Method for optimizing multi-focus image fusion based on difference images
CN103985108B (en) Multi-focus image fusion method using boundary detection and multi-scale morphological sharpness measurement
CN106339998A (en) Multi-focus image fusion method based on contrast pyramid transform
CN103390280B (en) Fast threshold segmentation method based on gray-level-gradient two-dimensional symmetric Tsallis cross entropy
CN103455991B (en) A multi-focus image fusion method
CN106228528B (en) A multi-focus image fusion method based on decision maps and sparse representation
CN104881855B (en) A multi-focus image fusion method using morphology and a free-boundary-condition active contour model
CN109801292A (en) An asphalt pavement crack image segmentation method based on generative adversarial networks
CN104156693B (en) An action recognition method based on multi-modal sequence fusion
CN104408700A (en) Contourlet-domain fusion method for infrared and visible light images based on morphology and PCA (principal component analysis)
CN103971338B (en) Variable-block image inpainting method based on saliency maps
CN107909560A (en) A multi-focus image fusion method and system based on SiR
CN104036479A (en) Multi-focus image fusion method based on non-negative matrix factorization
CN109685732A (en) A high-precision depth image restoration method based on boundary capture
CN104182952B (en) Multi-focus sequence image fusion method
CN104809698A (en) Kinect depth image inpainting method based on improved trilateral filtering
CN105894513B (en) Remote sensing image change detection method and system accounting for spatio-temporal changes of imaged objects
CN108564597A (en) A video foreground object extraction method fusing Gaussian mixture models and the Horn-Schunck optical flow method
CN106447640B (en) Multi-focus image fusion method and device based on dictionary learning and rotation-guided filtering
CN104966274B (en) A local blur restoration method using image detection and region extraction
CN109064505A (en) A depth estimation method based on sliding-window tensor extraction
CN103985104B (en) Multi-focus image fusion method based on higher-order singular value decomposition and fuzzy inference
CN106408580A (en) Liver region extraction method based on fuzzy C-means and mean shift
CN101216936A (en) A multi-focus image fusion method based on imaging mechanism and nonsubsampled contourlet transform
CN105184761A (en) Image rain removal method and system based on wavelet analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210224

Address after: 650000 room 1701, 17th floor, block a, science and Technology Information Innovation Incubation Center, Chenggong District, Kunming City, Yunnan Province

Patentee after: YUNNAN UNITED VISUAL TECHNOLOGY Co.,Ltd.

Address before: 650093 No. 253, Xuefu Road, Wuhua District, Yunnan, Kunming

Patentee before: Kunming University of Science and Technology
