CN105913407B - Method for optimizing multi-focus image fusion based on difference maps - Google Patents

Method for optimizing multi-focus image fusion based on difference maps

Info

Publication number
CN105913407B
CN105913407B (application CN201610210196.6A, publication CN105913407A)
Authority
CN
China
Prior art keywords
frequency information
image
map2
map1
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610210196.6A
Other languages
Chinese (zh)
Other versions
CN105913407A (en)
Inventor
李华锋
刘鑫坤
余正涛
毛存礼
郭剑毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
YUNNAN UNITED VISUAL TECHNOLOGY Co.,Ltd.
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology
Priority to CN201610210196.6A
Publication of CN105913407A
Application granted
Publication of CN105913407B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20212 — Image combination
    • G06T2207/20221 — Image fusion; Image merging

Abstract

The present invention relates to a method for optimizing multi-focus image fusion based on difference maps. The multi-focus images are first preliminarily fused based on a multi-scale, multi-directional decomposition, and the initial fused image is then updated with the residual information and the focused-region information measured from difference maps. Combining the multi-scale and multi-directional methods gives accurate region localization and high precision, well matched to the human visual system; updating the initial fused image with the residual information from the difference maps makes the fusion more accurate and of higher quality. The method has wide application prospects in military and civilian fields.

Description

Method for optimizing multi-focus image fusion based on difference maps
Technical field
The invention belongs to the fields of image processing and information fusion, and in particular relates to a method for optimizing multi-focus image fusion based on difference maps, which has wide application prospects in military and civilian fields.
Background art
Multi-focus image fusion technology merges images taken with different focus settings into a single image; by exploiting the complementarity of the combined information, it merges the information of multiple views into one image and obtains more comprehensive scene information.
Currently, multi-focus image fusion methods fall into two broad classes: spatial-domain and transform-domain. Spatial-domain fusion techniques divide into pixel-based fusion algorithms (such as weighted averaging) and region-based fusion algorithms; region-based algorithms consider the relationship between the gray value of a pixel and those of its neighbors, and give better clarity and contrast than pixel-based fusion. Compared with spatial-domain techniques, transform-domain techniques localize image regions more accurately and with higher precision. Among transform-domain techniques, multi-scale transforms play an irreplaceable leading role; the best-known methods are the Laplacian pyramid and the discrete wavelet transform (DWT). In both of these methods, however, the fusion quality degrades significantly if the source images are misregistered or if moving regions appear during image acquisition. To address this problem, scholars proposed the shift-invariant dual-tree complex wavelet transform. In recent years, further new methods have been proposed, such as the discrete cosine transform (DCT), the non-subsampled shearlet transform (NSST), and the quaternion wavelet transform (QWT). However, because the fused transform coefficients replace the original pixel values during fusion, the multi-scale transform itself can become the main weakness of such methods.
Another factor affecting fusion quality is the fusion rule applied to the sub-bands. In recent years many scholars have worked on this problem and achieved notable results. Most research is based on the multi-scale transform (MST) with sparse representation (SR) framework, in which high-frequency sub-bands are fused by the absolute-maximum rule and low-frequency sub-bands by an SR-based method. This avoids the shifting of smooth details, but defects remain, such as the inherent shortcoming of treating image features independently. To address this, a method combining spatial frequency with a pulse-coupled neural network (PCNN) in the non-subsampled contourlet transform domain was proposed; it overcomes the complex structure and excessive iteration counts of the transform, but still cannot avoid an excessive number of manually set parameters. An NSCT-based method using neighborhood features was therefore proposed, in which the low-frequency sub-band information follows a weighted fusion rule based on neighborhood region energy and the high-frequency sub-bands are fused by neighborhood features; this method, however, concentrates on low-frequency region energy information and does not further process the extraction of effective-region information.
Summary of the invention
The present invention proposes a method for optimizing multi-focus image fusion based on difference maps: the multi-focus images are first preliminarily fused based on a multi-scale, multi-directional decomposition, and the initial fused image is then updated with the residual information and the focused-region information measured from difference maps, giving a multi-focus fusion result of better quality and higher precision. The technical solution adopted by the present invention comprises the following specific steps:
Step 1: image preprocessing;
Apply Gaussian filtering to the source images I_A and I_B, then compute the mean image of the filtered sources: I_AVG = (I_A + I_B) / 2,
where I_A denotes source image A, I_B denotes source image B, and I_AVG denotes the mean of source images A and B;
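A minimal Python/NumPy sketch of this preprocessing step; the patent does not state the Gaussian kernel parameters, so the sigma below is an assumed placeholder:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(img_a: np.ndarray, img_b: np.ndarray, sigma: float = 2.0):
    """Gaussian-filter both source images and form their mean image I_AVG."""
    a = gaussian_filter(img_a.astype(np.float64), sigma=sigma)
    b = gaussian_filter(img_b.astype(np.float64), sigma=sigma)
    i_avg = (a + b) / 2.0  # mean of the two filtered source images
    return a, b, i_avg
```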
Step 2: multi-scale, multi-directional decomposition of the images to obtain the initial fused image;
First apply the neighborhood-distance filter to the source images I_A and I_B to perform a four-layer multi-scale decomposition, each layer containing high-frequency and low-frequency information;
Then apply the multi-directional NSCT decomposition to the high-frequency information, with 16 decomposition directions in the first layer, 8 in the second, 4 in the third, and 1 in the last, obtaining the high-frequency information I_A,h and low-frequency information I_A,l of I_A, and the high-frequency information I_B,h and low-frequency information I_B,l of I_B;
Finally, compute the neighborhood-based spatial frequency of each pixel of the high-frequency information and construct the decision matrix for high-frequency fusion. The high-frequency information is preliminarily fused through the decision matrix, the low-frequency information is averaged, and the preliminarily fused high-frequency information together with the averaged low-frequency information is passed through the inverse of the multi-directional multi-scale transform to form the initial fused image I_IN;
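The neighborhood-distance and NSCT decompositions require dedicated libraries, but the preliminary fusion rule itself is easy to sketch. Below is a hedged Python/NumPy illustration for one pair of matching sub-bands; the window size and the RMS-of-first-differences form of spatial frequency are conventional choices, not values taken from the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_frequency(band: np.ndarray, size: int = 3) -> np.ndarray:
    """Neighborhood spatial frequency: RMS of horizontal and vertical
    first differences, averaged over a size x size window."""
    rf = np.zeros_like(band)
    cf = np.zeros_like(band)
    rf[:, 1:] = (band[:, 1:] - band[:, :-1]) ** 2  # row-frequency term
    cf[1:, :] = (band[1:, :] - band[:-1, :]) ** 2  # column-frequency term
    return np.sqrt(uniform_filter(rf + cf, size=size))

def preliminary_fuse(h_a, h_b, l_a, l_b):
    """Decision matrix keeps the high-frequency coefficient with the
    larger spatial frequency; low-frequency sub-bands are averaged."""
    decision = spatial_frequency(h_a) >= spatial_frequency(h_b)
    h_fused = np.where(decision, h_a, h_b)
    l_fused = (l_a + l_b) / 2.0
    return h_fused, l_fused
```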
Step 3: construct the difference images and compute the energy values;
D_A(i, j) = I_A(i, j) − I_AVG(i, j) denotes the difference between the source image I_A and the mean image I_AVG at pixel (i, j),
D_IN(i, j) = I_IN(i, j) − I_AVG(i, j) denotes the difference between the initial fused image I_IN and the mean image I_AVG at pixel (i, j);
Compute the energy values:
E_IN(i, j) = Σ_(m,n)∈M×N [D_IN(i+m, j+n)]^2 denotes the energy value corresponding to the difference between I_IN(i, j) and I_AVG(i, j), and E_A(i, j) = Σ_(m,n)∈M×N [D_A(i+m, j+n)]^2 denotes the energy value corresponding to the difference between I_AVG(i, j) and I_A(i, j), where M × N denotes the preset neighborhood size and (i+m, j+n) denotes any point in the M × N neighborhood centered on (i, j);
Apply a bilateral filter to the two energy matrices for edge-preserving denoising, obtaining the matrices char1 and char2 respectively;
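A short Python/NumPy sketch of this step, assuming the energy is the usual sum of squared differences over the M × N neighborhood (the patent's own formula images are not reproduced in this text):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(diff: np.ndarray, size: int = 11) -> np.ndarray:
    """Sum of squared differences over the size x size neighborhood
    centred on each pixel (uniform_filter gives the mean, so rescale
    by the window area to obtain the sum)."""
    return uniform_filter(diff ** 2, size=size) * (size * size)

def difference_energies(i_a, i_in, i_avg, size=11):
    """Difference images against the mean image and their local
    energies; returns (E_A, E_IN)."""
    d_a = i_a - i_avg    # source A minus mean image
    d_in = i_in - i_avg  # initial fusion minus mean image
    return local_energy(d_a, size), local_energy(d_in, size)
```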
Step 4: construct the initial binary images;
(1) construct the initial binary maps MAP1 and MAP2 by thresholding char1 and char2 against the threshold δ:
MAP1(i, j) denotes the value of pixel (i, j) in binary map MAP1, MAP2(i, j) denotes the value of pixel (i, j) in binary map MAP2, char1(i, j) denotes the value of pixel (i, j) in char1, char2(i, j) denotes the value of pixel (i, j) in char2, and δ is the preset threshold;
(2) correct the binary images:
X × Y denotes the preset neighborhood size, and (i+a, j+b) denotes any point in the X × Y neighborhood centered on (i, j);
Correct MAP1 and MAP2 by this neighborhood-consistency rule to obtain the corrected binary maps MAP1' and MAP2'. For the region {(i, j) | MAP1'(i, j) = MAP2'(i, j)}, where neither focus nor defocus is evident in the source images, set MAP1'(i, j) = MAP2'(i, j) = 0.5;
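The thresholding and correction formulas appear only as images in the source, so the sketch below is one plausible reading rather than the patent's exact rule: a map pixel is set when one energy matrix exceeds the other by more than δ, and the correction is a majority vote over the X × Y neighborhood.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def initial_maps(char1, char2, delta=0.0025):
    """Assumed thresholding rule: a pixel is set in a map when its
    energy exceeds the other map's energy by more than delta."""
    map1 = (char1 - char2 > delta).astype(np.float64)
    map2 = (char2 - char1 > delta).astype(np.float64)
    return map1, map2

def correct_maps(map1, map2, size=11):
    """Pixel-consistency correction: majority vote over the X x Y
    neighborhood, then mark ambiguous pixels (MAP1' == MAP2') as 0.5."""
    m1 = (uniform_filter(map1, size=size) > 0.5).astype(np.float64)
    m2 = (uniform_filter(map2, size=size) > 0.5).astype(np.float64)
    ambiguous = m1 == m2  # neither clearly focused nor clearly defocused
    m1[ambiguous] = 0.5
    m2[ambiguous] = 0.5
    return m1, m2
```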
Step 5: fuse the source images I_A and I_B through the corrected binary maps MAP1' and MAP2'.
(1) fusion based on the spatial domain:
Compare the corrected binary maps MAP1' and MAP2'. If MAP1'(i, j) = 0 and MAP2'(i, j) = 1, fill with the source image pixel I_B(i, j); if MAP1'(i, j) = 1 and MAP2'(i, j) = 0, fill with the source image pixel I_A(i, j); in all other cases fill with the initial fused image pixel I_IN(i, j). I_F(i, j) is the fusion result produced by the final optimization;
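A direct Python/NumPy transcription of this pixel-fill rule (variable names are illustrative):

```python
import numpy as np

def fuse_spatial(i_a, i_b, i_in, m1, m2):
    """Fill from B where (MAP1', MAP2') = (0, 1), from A where (1, 0),
    and from the initial fused image everywhere else."""
    i_f = i_in.copy()
    take_a = (m1 == 1) & (m2 == 0)
    take_b = (m1 == 0) & (m2 == 1)
    i_f[take_a] = i_a[take_a]
    i_f[take_b] = i_b[take_b]
    return i_f
```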
(2) fusion based on the transform domain:
Compare the corrected binary maps MAP1' and MAP2'. If MAP1'(i, j) = 1 and MAP2'(i, j) = 0, select the high-frequency information I_A,h(i, j) and low-frequency information I_A,l(i, j) of source image I_A as the high- and low-frequency information of the fused image; if MAP1'(i, j) = 0 and MAP2'(i, j) = 1, select the high-frequency information I_B,h(i, j) and low-frequency information I_B,l(i, j) of source image I_B as the high- and low-frequency information of the fused image; if MAP1'(i, j) = MAP2'(i, j) = 0.5, select the high-frequency information I_IN,h(i, j) and low-frequency information I_IN,l(i, j) of the initial fused image I_IN as the high- and low-frequency information of the fused image. Then apply the inverse transform to the resulting high- and low-frequency information to obtain the final fusion result I_F.
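The same selection applied to sub-band coefficients; a sketch that works for any pair of matching high- or low-frequency sub-bands (the inverse NSCT/neighborhood-distance transform itself is left to a dedicated library):

```python
import numpy as np

def fuse_coefficients(c_a, c_b, c_in, m1, m2):
    """Select coefficients from A where (MAP1', MAP2') = (1, 0), from B
    where (0, 1), and keep the initial-fusion coefficients elsewhere
    (the ambiguous 0.5 regions)."""
    out = c_in.copy()
    take_a = (m1 == 1) & (m2 == 0)
    take_b = (m1 == 0) & (m2 == 1)
    out[take_a] = c_a[take_a]
    out[take_b] = c_b[take_b]
    return out
```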
Beneficial effects of the present invention
This method updates the fused image using difference maps, so that the residual multi-focus image information updates the fused image accurately, making the fusion more precise and of higher quality. Combining the multi-scale and multi-directional methods gives accurate region localization and high precision, well matched to the human visual system; the multi-directional method compensates for drawbacks of the multi-scale method, recovering edge information that multi-scale decomposition alone would lose, so that edges and lines are effectively preserved by the multi-directional fusion.
Description of the drawings:
Fig. 1 is the flowchart of the method of the present invention;
Fig. 2 is the flowchart for forming the initial fused image;
Fig. 3 is the flowchart of the spatial-domain image fusion in the present invention;
Fig. 4 is the flowchart of the transform-domain image fusion in the present invention;
Fig. 5 shows the two groups of multi-focus source images to be fused, where (a) and (b) are the flower images and (c) and (d) are the lab images;
Fig. 6 shows the fusion results of the flower images under the method of the present invention and four other methods, where (a) and (b) are the fusion results of the proposed spatial-domain and transform-domain methods respectively, and (c)-(f) are the fusion results of the four comparison methods: the non-subsampled contourlet transform (NSCT), NSCT with contrast enhancement (NSCT-Co), neighborhood distance (ND), and the multi-directional neighborhood-distance method (MMND);
Fig. 7 shows the difference maps between the fused images in Fig. 6 and source image (b);
Fig. 8 shows the fusion results of the lab images under the method of the present invention and the four other methods, where (a) and (b) are the fusion results of the proposed spatial-domain and transform-domain methods respectively, and (c)-(f) are the fusion results of the four comparison methods: NSCT, NSCT-Co, ND, and MMND;
Fig. 9 shows the difference maps between the fused images in Fig. 8 and source image (d).
Detailed description of the embodiments
The present invention is further described below with reference to the drawings and specific embodiments.
Embodiment 1: as shown in Fig. 1, the present invention preliminarily fuses the multi-focus images based on a multi-scale, multi-directional decomposition, then updates the initial fused image in the spatial domain and the transform domain through the difference maps to obtain the final fused image. As shown in Figs. 2-4, the specific steps include:
Step 1: image preprocessing:
Apply Gaussian filtering to the source images I_A and I_B, then compute the mean image of the filtered sources: I_AVG = (I_A + I_B) / 2,
where I_A denotes source image A, I_B denotes source image B, and I_AVG denotes the mean of source images A and B;
Step 2: multi-scale, multi-directional decomposition of the images to obtain the initial fused image;
First apply the neighborhood-distance filter to the source images I_A and I_B to perform a four-layer multi-scale decomposition, each layer containing high-frequency and low-frequency information;
Then apply the multi-directional NSCT decomposition to the high-frequency information, with 16 decomposition directions in the first layer, 8 in the second, 4 in the third, and 1 in the last, obtaining the high-frequency information I_A,h and low-frequency information I_A,l of I_A, and the high-frequency information I_B,h and low-frequency information I_B,l of I_B;
Finally, compute the neighborhood-based spatial frequency of each pixel of the high-frequency information and construct the decision matrix for high-frequency fusion. The high-frequency information is preliminarily fused through the decision matrix, the low-frequency information is averaged, and the preliminarily fused high-frequency information together with the averaged low-frequency information is passed through the inverse of the multi-directional multi-scale transform to form the initial fused image I_IN;
Step 3: construct the difference images and compute the energy values;
D_A(i, j) = I_A(i, j) − I_AVG(i, j) denotes the difference between the source image I_A and the mean image I_AVG at pixel (i, j),
D_IN(i, j) = I_IN(i, j) − I_AVG(i, j) denotes the difference between the initial fused image I_IN and the mean image I_AVG at pixel (i, j);
Compute the energy values:
E_IN(i, j) = Σ_(m,n)∈M×N [D_IN(i+m, j+n)]^2 denotes the energy value corresponding to the difference between I_IN(i, j) and I_AVG(i, j), and E_A(i, j) = Σ_(m,n)∈M×N [D_A(i+m, j+n)]^2 denotes the energy value corresponding to the difference between I_AVG(i, j) and I_A(i, j), where M × N denotes the preset neighborhood size and (i+m, j+n) denotes any point in the M × N neighborhood centered on (i, j); in this embodiment M × N = 11 × 11.
Apply a bilateral filter to the two energy matrices for further processing, obtaining the matrices char1 and char2 respectively. The bilateral filter here trades off the spatial proximity of energy values against pixel-value similarity, taking spatial information and gray-level similarity into account simultaneously to achieve edge-preserving denoising. Its parameters are set to a window size of window = 11 × 11 and a Gaussian variance of sigma = 21.
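With OpenCV, this smoothing step might look like the sketch below; mapping the embodiment's single "Gaussian variance sigma = 21" onto OpenCV's separate sigmaColor/sigmaSpace parameters is an assumption:

```python
import numpy as np
import cv2

def edge_preserving_smooth(energy: np.ndarray) -> np.ndarray:
    """Bilateral filtering of an energy matrix with the embodiment's
    parameters: an 11 x 11 window, and sigma = 21 used for both the
    range and spatial kernels (parameter mapping assumed)."""
    e = energy.astype(np.float32)  # bilateralFilter needs float32/uint8
    return cv2.bilateralFilter(e, 11, 21, 21)
```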
Step 4: construct the initial binary images;
(1) construct the initial binary maps MAP1 and MAP2 by thresholding char1 and char2 against the threshold δ:
MAP1(i, j) denotes the value of pixel (i, j) in binary map MAP1, MAP2(i, j) denotes the value of pixel (i, j) in binary map MAP2, char1(i, j) denotes the value of pixel (i, j) in char1, char2(i, j) denotes the value of pixel (i, j) in char2, and δ is the preset threshold; in this embodiment δ = 0.0025.
(2) then correct the binary images:
X × Y denotes the preset neighborhood size, and (i+a, j+b) denotes any point in the X × Y neighborhood centered on (i, j).
Correct MAP1 and MAP2 by this neighborhood-consistency rule to obtain the corrected binary maps MAP1' and MAP2'. The purpose of this pixel-consistency verification is that isolated wrongly selected or missed pixels would harm the fusion process, so the consistency check is used to discard them. For the region {(i, j) | MAP1'(i, j) = MAP2'(i, j)}, i.e., regions where neither focus nor defocus is evident in the two source images, set MAP1'(i, j) = MAP2'(i, j) = 0.5.
Step 5: fuse the source images I_A and I_B through the corrected binary maps MAP1' and MAP2'.
(1) fusion based on the spatial domain:
Compare the corrected binary maps MAP1' and MAP2'. If MAP1'(i, j) = 0 and MAP2'(i, j) = 1, fill with the source image pixel I_B(i, j); if MAP1'(i, j) = 1 and MAP2'(i, j) = 0, fill with the source image pixel I_A(i, j); in all other cases fill with the initial fused image pixel I_IN(i, j). I_F(i, j) is the fusion result produced by the final optimization;
(2) fusion based on the transform domain:
Compare the corrected binary maps MAP1' and MAP2'. If MAP1'(i, j) = 1 and MAP2'(i, j) = 0, select the high-frequency information I_A,h(i, j) and low-frequency information I_A,l(i, j) of source image I_A as the high- and low-frequency information of the fused image; if MAP1'(i, j) = 0 and MAP2'(i, j) = 1, select the high-frequency information I_B,h(i, j) and low-frequency information I_B,l(i, j) of source image I_B; if MAP1'(i, j) = MAP2'(i, j) = 0.5, select the high-frequency information I_IN,h(i, j) and low-frequency information I_IN,l(i, j) of the initial fused image I_IN. Then apply the inverse transform to the high- and low-frequency information of the fused image to obtain the final fusion result I_F.
Experimental results
To demonstrate the superiority of this method over conventional methods, four comparison methods are selected: the non-subsampled contourlet transform (NSCT), NSCT with contrast enhancement (NSCT-Co), neighborhood distance (ND), and the multi-directional neighborhood-distance method (MMND).
The group of source images chosen in this embodiment, shown in Fig. 5 (a) and (b), is the static flower pair. The fusion results are shown in Fig. 6, where (a) and (b) represent the results of the proposed spatial-domain optimization (MMND-SD) and transform-domain optimization (MMND-TD) respectively. The effect of the focus-region fusion is hard to judge from Fig. 6 alone, so to make the comparison with the other methods clearer, difference images between each fused image and a source image are used; source image (b), whose difference maps are visually the most telling, is chosen here. In the red-framed regions of Fig. 7, (a) and (b) show that the proposed method fuses well: essentially no residual source-image information is observed inside the frames. In the remaining difference maps, residual source-image information is clearly visible: (e) and (f) clearly contain the texture of the wall, and in (c) and (d) the leaves at the bottom right of the image are clearly visible and not completely fused.
Besides the subjective visual evaluation of the fusion results, objective evaluation indices are used: mutual information (MI), nonlinear correlation information entropy (NCIE), the multi-scale measure Q_M, and the Chen-Blum measure Q_CB. Larger index values indicate better fusion. As shown in Table 1, all four index values of the proposed method are larger than those of the other four methods, indicating that the proposed method achieves better fusion.
Table 1: objective evaluation of the different fusion methods on the flower images
Embodiment 2: the source images chosen in this embodiment are (c) and (d) in Fig. 5, a group with slight camera shake, which is fairly representative in multi-focus image fusion; the remaining steps are the same as in Embodiment 1.
The fusion results are shown in Fig. 8, and the difference maps against source image (d) are shown in Fig. 9. The fused image of the proposed method shows no obvious residual information, while in the other four figures the head of the person shows obvious residual fusion information caused by region-selection misalignment. The objective evaluation indices are shown in Table 2; all four index values of the proposed method are larger than those of the other four methods, indicating that the proposed method achieves better fusion.
Table 2: objective evaluation of the different fusion methods on the lab images
The specific embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; various changes may be made within the knowledge of a person skilled in the art without departing from the purpose of the present invention.

Claims (2)

1. A method for optimizing multi-focus image fusion based on difference maps, characterized by comprising the following specific steps:
Step 1: image preprocessing;
Apply Gaussian filtering to the source images I_A and I_B, then compute the mean image of the filtered sources: I_AVG = (I_A + I_B) / 2,
where I_A denotes source image A, I_B denotes source image B, and I_AVG denotes the mean of source images A and B;
Step 2: multi-scale, multi-directional decomposition of the images to obtain the initial fused image;
First apply the neighborhood-distance filter to the source images I_A and I_B to perform a four-layer multi-scale decomposition, each layer containing high-frequency and low-frequency information;
Then apply the multi-directional NSCT decomposition to the high-frequency information, with 16 decomposition directions in the first layer, 8 in the second, 4 in the third, and 1 in the last, obtaining the high-frequency information I_A,h and low-frequency information I_A,l of I_A, and the high-frequency information I_B,h and low-frequency information I_B,l of I_B;
Finally, compute the neighborhood-based spatial frequency of each pixel of the high-frequency information and construct the decision matrix for high-frequency fusion; the high-frequency information is preliminarily fused through the decision matrix, the low-frequency information is averaged, and the preliminarily fused high-frequency information together with the averaged low-frequency information is passed through the inverse of the multi-directional multi-scale transform to form the initial fused image I_IN;
Step 3: construct the difference images and compute the energy values;
D_A(i, j) = I_A(i, j) − I_AVG(i, j) denotes the difference between the source image I_A and the mean image I_AVG at pixel (i, j), and D_IN(i, j) = I_IN(i, j) − I_AVG(i, j) denotes the difference between the initial fused image I_IN and the mean image I_AVG at pixel (i, j);
Compute the energy values:
E_IN(i, j) = Σ_(m,n)∈M×N [D_IN(i+m, j+n)]^2 denotes the energy value corresponding to the difference between I_IN(i, j) and I_AVG(i, j), and E_A(i, j) = Σ_(m,n)∈M×N [D_A(i+m, j+n)]^2 denotes the energy value corresponding to the difference between I_AVG(i, j) and I_A(i, j), where M × N denotes the preset neighborhood size and (i+m, j+n) denotes any point in the M × N neighborhood centered on (i, j);
Apply a bilateral filter to the two energy matrices for edge-preserving denoising, obtaining the matrices char1 and char2 respectively;
Step 4: construct the initial binary images;
(1) construct the initial binary maps MAP1 and MAP2 by thresholding char1 and char2 against the threshold δ:
MAP1(i, j) denotes the value of pixel (i, j) in binary map MAP1, MAP2(i, j) denotes the value of pixel (i, j) in binary map MAP2, char1(i, j) denotes the value of pixel (i, j) in char1, char2(i, j) denotes the value of pixel (i, j) in char2, and δ is the preset threshold;
(2) correct the binary images:
X × Y denotes the preset neighborhood size, and (i+a, j+b) denotes any point in the X × Y neighborhood centered on (i, j);
Correct MAP1 and MAP2 by this neighborhood-consistency rule to obtain the corrected binary maps MAP1' and MAP2'; for the region {(i, j) | MAP1'(i, j) = MAP2'(i, j)}, where neither focus nor defocus is evident in the source images, set MAP1'(i, j) = MAP2'(i, j) = 0.5;
Step 5: fuse the source images I_A and I_B through the corrected binary maps MAP1' and MAP2'.
2. The method for optimizing multi-focus image fusion based on difference maps according to claim 1, characterized in that the fusion rules in step 5 are as follows:
(1) fusion based on the spatial domain:
Compare the corrected binary maps MAP1' and MAP2'; if MAP1'(i, j) = 0 and MAP2'(i, j) = 1, fill with the source image pixel I_B(i, j); if MAP1'(i, j) = 1 and MAP2'(i, j) = 0, fill with the source image pixel I_A(i, j); in all other cases fill with the initial fused image pixel I_IN(i, j); I_F(i, j) is the fusion result produced by the final optimization;
(2) fusion based on the transform domain:
Compare the corrected binary maps MAP1' and MAP2'; if MAP1'(i, j) = 1 and MAP2'(i, j) = 0, select the high-frequency information I_A,h(i, j) and low-frequency information I_A,l(i, j) of source image I_A as the high- and low-frequency information of the fused image; if MAP1'(i, j) = 0 and MAP2'(i, j) = 1, select the high-frequency information I_B,h(i, j) and low-frequency information I_B,l(i, j) of source image I_B as the high- and low-frequency information of the fused image; if MAP1'(i, j) = MAP2'(i, j) = 0.5, select the high-frequency information I_IN,h(i, j) and low-frequency information I_IN,l(i, j) of the initial fused image I_IN as the high- and low-frequency information of the fused image; then apply the inverse transform to the resulting high- and low-frequency information to obtain the final fusion result I_F.
CN201610210196.6A 2016-04-06 2016-04-06 Method for optimizing multi-focus image fusion based on difference maps Active CN105913407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610210196.6A CN105913407B (en) 2016-04-06 2016-04-06 Method for optimizing multi-focus image fusion based on difference maps

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610210196.6A CN105913407B (en) 2016-04-06 2016-04-06 Method for optimizing multi-focus image fusion based on difference maps

Publications (2)

Publication Number Publication Date
CN105913407A CN105913407A (en) 2016-08-31
CN105913407B (en) 2018-09-28

Family

ID=56745700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610210196.6A Active CN105913407B (en) 2016-04-06 2016-04-06 Method for optimizing multi-focus image fusion based on difference maps

Country Status (1)

Country Link
CN (1) CN105913407B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403416B (en) * 2017-07-26 2020-07-28 温州大学 NSCT-based medical ultrasonic image denoising method with improved filtering and threshold function
CN108171676B (en) * 2017-12-01 2019-10-11 西安电子科技大学 Multi-focus image fusing method based on curvature filtering
CN108399645B (en) * 2018-02-13 2022-01-25 中国传媒大学 Image coding method and device based on contourlet transformation
CN109658371B (en) * 2018-12-05 2020-12-15 北京林业大学 Fusion method and system of infrared image and visible light image and related equipment
CN109754386A (en) * 2019-01-15 2019-05-14 哈尔滨工程大学 A kind of progressive Forward-looking Sonar image interfusion method
CN113947554B (en) * 2020-07-17 2023-07-14 四川大学 Multi-focus image fusion method based on NSST and significant information extraction
CN113379660B (en) * 2021-06-15 2022-09-30 深圳市赛蓝科技有限公司 Multi-dimensional rule multi-focus image fusion method and system
CN115205181B (en) * 2022-09-15 2022-11-18 季华实验室 Multi-focus image fusion method and device, electronic equipment and storage medium
CN116916166B (en) * 2023-09-12 2023-11-17 湖南湘银河传感科技有限公司 Telemetry terminal based on AI image analysis

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103854265A (en) * 2012-12-03 2014-06-11 西安元朔科技有限公司 Novel multi-focus image fusion technology

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9092875B2 (en) * 2011-04-12 2015-07-28 Panasonic Intellectual Property Management Co., Ltd. Motion estimation apparatus, depth estimation apparatus, and motion estimation method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103854265A (en) * 2012-12-03 2014-06-11 西安元朔科技有限公司 Novel multi-focus image fusion technology

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Adaptive multi-focus image fusion using a wavelet-based statistical sharpness measure; Jing Tian et al.; Signal Processing; September 2012; Vol. 92, No. 9; pp. 2137-2146 *
Multifocus Image Fusion Based on NSCT and Focused Area Detection; Yong Yang et al.; IEEE Sensors Journal; May 2015; Vol. 15, No. 5; pp. 2824-2838 *
Fast multi-focus image fusion combining wavelet transform and adaptive block partitioning; Liu Yu et al.; Journal of Image and Graphics; November 2013; Vol. 18, No. 11; pp. 1435-1444 *

Also Published As

Publication number Publication date
CN105913407A (en) 2016-08-31

Similar Documents

Publication Publication Date Title
CN105913407B (en) Method for optimizing multi-focus image fusion based on difference maps
CN109242888B (en) Infrared and visible light image fusion method combining image significance and non-subsampled contourlet transformation
CN109685732A (en) A kind of depth image high-precision restorative procedure captured based on boundary
CN106339998A (en) Multi-focus image fusion method based on contrast pyramid transformation
CN105445277A (en) Visual and intelligent detection method for surface quality of FPC (Flexible Printed Circuit)
CN107369148A (en) Based on the multi-focus image fusing method for improving SML and Steerable filter
CN106898048B (en) A kind of undistorted integration imaging 3 D displaying method being suitable for complex scene
CN106846289A (en) A kind of infrared light intensity and polarization image fusion method based on conspicuousness migration with details classification
CN107909560A (en) A kind of multi-focus image fusing method and system based on SiR
CN108596975A (en) A kind of Stereo Matching Algorithm for weak texture region
Bai Morphological infrared image enhancement based on multi-scale sequential toggle operator using opening and closing as primitives
CN104966274B (en) A kind of On Local Fuzzy restored method using image detection and extracted region
CN106447640B (en) Multi-focus image fusing method and device based on dictionary learning, rotation guiding filtering
CN108564597A (en) A kind of video foreground target extraction method of fusion gauss hybrid models and H-S optical flow methods
CN110363719A (en) A kind of cell layered image processing method and system
CN109509163A (en) A kind of multi-focus image fusing method and system based on FGF
CN109118440B (en) Single image defogging method based on transmissivity fusion and adaptive atmospheric light estimation
CN106022337B (en) A kind of planar target detection method based on continuous boundary feature
CN109559273A (en) A kind of quick joining method towards vehicle base map picture
CN103942756B (en) A kind of method of depth map post processing and filtering
Chen et al. A color-guided, region-adaptive and depth-selective unified framework for Kinect depth recovery
CN109961016A (en) The accurate dividing method of more gestures towards Intelligent household scene
Yu et al. Image and video dehazing using view-based cluster segmentation
CN106530277B (en) A kind of image interfusion method based on small echo directional correlation coefficient
CN113887624A (en) Improved feature stereo matching method based on binocular vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210224

Address after: 650000 room 1701, 17th floor, block a, science and Technology Information Innovation Incubation Center, Chenggong District, Kunming City, Yunnan Province

Patentee after: YUNNAN UNITED VISUAL TECHNOLOGY Co.,Ltd.

Address before: 650093 No. 253, Xuefu Road, Wuhua District, Yunnan, Kunming

Patentee before: Kunming University of Science and Technology

TR01 Transfer of patent right