CN109754385A - Fast fusion method for unregistered multi-focus images - Google Patents
Fast fusion method for unregistered multi-focus images
- Publication number
- CN109754385A (application CN201910028159.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- registered
- pixel
- num
- saliency maps
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a fast fusion method for unregistered multi-focus images, comprising: acquiring several multi-focus images of the same scene; constructing a scale-space pyramid for each image; obtaining the feature point set of each image and generating its saliency image; taking the image with the most feature points as the reference image and computing a transformation model to obtain the registered original images and saliency images; forming weight images from the registered saliency images; and multiplying each weight image element-wise with the corresponding registered original image and accumulating the products to obtain the final fused image. The method relaxes the constraints on the input multi-focus images to be fused and ensures that the fused all-in-focus image retains information at different size scales, achieving good visual quality; it is fast, reliable, and efficient, and produces good fusion results.
Description
Technical field
The present invention relates in particular to a fast fusion method for unregistered multi-focus images.
Background art
Multi-focus image fusion is the technique of fusing a series of locally sharp but globally blurred images, shot with the camera focused on different regions of the same field of view, into a single all-in-focus image that preserves the information of the original images, so as to meet practical application demands. In practice, however, the camera inevitably shakes to some extent while the focal region is being adjusted, so the captured multi-focus images exhibit positional offsets and do not strictly share the same field of view; the result is a set of unregistered multi-focus images. If these unregistered images are fused directly, the result shows obvious distortion and the original image information is destroyed, rendering the fusion meaningless.
Multi-focus image fusion methods mainly operate at the pixel level, which preserves more of the original image information while keeping processing efficient, but such methods demand high positional consistency of the input images: the inputs must be fully registered, otherwise the fusion result suffers obvious geometric deformation and information loss. Existing multi-focus image fusion methods fall into three groups. (1) Methods that assume the images are already registered; this assumption often does not hold in practice and limits their applicability. (2) Methods that first register the unregistered inputs and then fuse them; this two-stage processing involves considerable redundant computation and greatly reduces the timeliness of fusion, especially for high-resolution images. (3) Methods that register and fuse simultaneously for images with small local misalignments; this joint processing is more efficient, but it only handles slight offsets and fails when the offsets are large, as with unregistered multi-focus images acquired under a microscope. In view of the above, existing multi-focus image fusion methods cannot effectively fuse unregistered multi-focus images in practical applications.
Summary of the invention
The purpose of the present invention is to provide a fast fusion method for unregistered multi-focus images that is reliable, produces good fusion results, and is efficient.
The fast fusion method for unregistered multi-focus images provided by the invention comprises the following steps:
S1. Acquire several multi-focus images of the same scene;
S2. For each image obtained in step S1, construct its scale-space pyramid;
S3. From the scale-space pyramid of each image obtained in step S2, obtain the feature point set of each image and generate its saliency image;
S4. From the feature point sets obtained in step S3, take the image with the most feature points as the reference image, compare the feature points of each remaining image with those of the reference image in turn, and compute a transformation model, thereby obtaining the registered original images and saliency images;
S5. Compare the registered saliency images obtained in step S4: at each position, set the pixel value of the weight image belonging to the saliency image with the maximum response at that position to a first set value, set the pixel values of the remaining images at that position to a second set value, and filter all the resulting images to form the weight images;
S6. Multiply the weight image of each image obtained in step S5 element-wise with the corresponding registered original image and accumulate the products to obtain the final fused image.
The scale-space pyramid of each image constructed in step S2 comprises O image groups (octaves), each containing L image layers G. Box filters applied in the x-, y- and xy-directions to the same integral image perform fast filtering of the original image i and yield the filter responses Dxx, Dyy and Dxy; the approximate Hessian matrix of each pixel is then computed, and its determinant gives the image G of the corresponding layer in scale space.
The box filters approximate Gaussian second-order derivative filters, and the size w of a box filter is computed as w = (2^o · s + 1) × 3, where o is the index of the octave and s is the index of the layer within it.
Each value of the integral image is computed as:
I_Σ(x, y) = Σ_{i≤x} Σ_{j≤y} I(i, j),
where I_Σ(x, y) is the value of the integral image at (x, y) and I(i, j) is the pixel value of image I at point (i, j).
The approximate Hessian matrix of each pixel is computed as:
H_approx = [Dxx, Dxy; Dxy, Dyy],  det(H_approx) = Dxx · Dyy − (α · Dxy)²,
where Dxx, Dyy and Dxy are the filter responses obtained by fast-filtering the original image i with box filters in the x-, y- and xy-directions on the same integral image, and α is an approximation coefficient.
Obtaining the feature point set of each image in step S3 specifically comprises: for each image group, take each pixel in turn as the centre point and compare it with the remaining 3·num·num − 1 pixels of the num×num regions in its own layer, the layer above and the layer below; if the pixel's value is the maximum and exceeds a first threshold, the pixel is identified as a feature point; otherwise it is not a feature point. num is a natural number.
Generating the saliency image in step S3 specifically computes the saliency image S_i as:
S_i(x, y) = max_{k = 1..L} G_o^k(x, y),
where S_i(x, y) is the value of pixel (x, y) of saliency image S_i, G_o^k(x, y) is the pixel value at point (x, y) of the k-th layer image G of octave o in scale space, and the maximum is taken over all L layers of octave o; k: 1 → L traverses all images from the 1st layer to the L-th layer.
The registered original images and saliency images in step S4 are specifically obtained by the following steps:
A. Generate a feature descriptor for each feature point in the images, for comparison;
B. Compute the Euclidean distance between each feature point of the image to be registered and all feature points of the reference image, and record the nearest and second-nearest distances;
C. Compute the ratio of the nearest distance to the second-nearest distance obtained in step B and apply the following rule, thereby forming several matched feature point pairs: if the ratio is less than a second threshold, the nearest point of the reference image is taken as a successfully matched feature point; otherwise the match is considered failed;
D. Retain the several closest matched pairs, compute the horizontal and vertical offsets d_x and d_y of each matched pair, and take the most frequent offsets t_x and t_y;
E. Compute the transformation model
T = [1, 0, t_x; 0, 1, t_y; 0, 0, 1],
and transform each coordinate (x_s, y_s) of the image to be registered by T to obtain the corresponding coordinate (x_r, y_r) in the reference image, thereby obtaining the registered image.
The images in steps A–E comprise both the original images and the saliency images.
The feature descriptor is specifically generated by the following steps:
a. Take the region of extent N·δ around the feature point, where N is a natural number and δ is the scale of the layer containing the feature point;
b. Divide the region obtained in step a into num1 × num1 sub-regions and filter each sub-region to obtain the four-dimensional vector v = (Σdx, Σ|dx|, Σdy, Σ|dy|);
c. Concatenate the four-dimensional vectors of all sub-regions to obtain the feature descriptor of the feature point.
Simplifying the transformation model T to a pure translation increases the processing speed of the algorithm.
In the fast fusion method for unregistered multi-focus images provided by the invention, for each input image the filter responses of the original image are first computed on its integral image with box filters, the approximate Hessian matrix is formed, and its determinant yields the response map of each layer in scale space, so that the scale space is built rapidly. Based on this scale space, the feature point set used for registration is obtained from local-neighbourhood maximum responses, and the saliency image is obtained from the maximum response at each position. The image with the most feature points is taken as the reference image, the remaining images are registered to it in turn, and the saliency images are registered in the same way. The weight of each image is then generated from the registered saliency images by taking the maximum response at each position, combined with guided filtering to maintain spatial consistency. Finally, the registered original images are combined with the weights to obtain the final fused image. The method therefore relaxes the constraints on the input multi-focus images to be fused and ensures that the fused all-in-focus image retains information at different size scales, achieving good visual quality; it is fast, reliable, and efficient, and produces good fusion results.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention.
Fig. 2 is a schematic diagram of the acquired unregistered multi-focus image set.
Fig. 3 is the flow chart of scale-space construction.
Fig. 4 is a schematic diagram of the three box filters involved in the filtering.
Fig. 5 is a schematic diagram of how a box filter is evaluated on the integral image.
Fig. 6 is a schematic diagram of the matching results of the 20 closest feature point pairs.
Fig. 7 is a schematic diagram of the registered original images.
Fig. 8 is a schematic diagram of the registered saliency images.
Fig. 9 is a schematic diagram of the corresponding weight images.
Fig. 10 is a schematic diagram of the result of fusing the unregistered original images directly.
Fig. 11 is a schematic diagram of the fusion result of the invention.
Detailed description of the embodiments
Fig. 1 shows the flow chart of the method of the invention. The fast fusion method for unregistered multi-focus images provided by the invention comprises the following steps:
S1. Acquire several multi-focus images of the same scene, as shown in Fig. 2(a)-(h);
S2. For each image obtained in step S1, construct its scale-space pyramid. Specifically, the constructed pyramid comprises O image groups (octaves), each containing L image layers G. Box filters applied in the x-, y- and xy-directions to the same integral image perform fast filtering of the original image i and yield the filter responses Dxx, Dyy and Dxy; the approximate Hessian matrix of each pixel is computed, and its determinant gives the image G of the corresponding layer in scale space. All images in the scale space have the same resolution, as shown on the left of Fig. 3;
The box filters approximate Gaussian second-order derivative filters (as shown in Fig. 4), and the size w of a box filter is computed as w = (2^o · s + 1) × 3, where o is the index of the octave and s is the index of the layer within it;
Each value of the integral image is computed as
I_Σ(x, y) = Σ_{i≤x} Σ_{j≤y} I(i, j),
where I_Σ(x, y) is the value of the integral image at (x, y) and I(i, j) is the pixel value of image I at (i, j);
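As a concrete illustration of the formula above, a minimal integral-image (summed-area table) sketch; the function names are illustrative, not from the patent:

```python
import numpy as np

def integral_image(img):
    """I_sum(x, y) = sum of img[i, j] for all i <= x, j <= y."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of the original image over rows r0..r1, cols c0..c1 (inclusive),
    evaluated in O(1) from at most four lookups in the integral image ii."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

This constant-time box sum is what makes the box-filter responses Dxx, Dyy and Dxy cheap to evaluate at any filter size.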
The approximate Hessian matrix of each pixel is computed as shown in Fig. 5:
H_approx = [Dxx, Dxy; Dxy, Dyy],  det(H_approx) = Dxx · Dyy − (α · Dxy)²,
where Dxx, Dyy and Dxy are the filter responses obtained by fast-filtering the original image i with box filters in the x-, y- and xy-directions on the same integral image, and α is an approximation coefficient, which may take the value 0.9;
The L image layers G in each octave are obtained by fast-filtering the original image on the same integral image with box filters of different sizes and computing the determinant of the resulting matrix, giving the image of the corresponding layer in scale space;
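The per-layer bookkeeping above can be sketched as follows. The helper names are illustrative, and the size rule assumes the SURF-style reading w = (2^o · s + 1) × 3 (giving the usual 9, 15, 21, 27 pixel filters in the first octave):

```python
def box_filter_size(o, s):
    """Assumed filter-size rule w = (2^o * s + 1) * 3 for octave o, layer s."""
    return (2 ** o * s + 1) * 3

def hessian_response(dxx, dyy, dxy, alpha=0.9):
    """Determinant of the approximated Hessian; alpha (~0.9) compensates
    for replacing Gaussian derivatives with box filters."""
    return dxx * dyy - (alpha * dxy) ** 2
```

One such response value is computed per pixel and per layer, and the resulting maps form the octave's scale space.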
S3. From the scale-space pyramid of each image obtained in step S2, obtain the feature point set of each image and generate its saliency image;
To obtain the feature point set of each image: for each image group, take each pixel in turn as the centre point and compare it with the remaining 3·num·num − 1 pixels of the num×num regions (e.g. 3×3 regions) in its own layer, the layer above and the layer below; if the pixel's value is the maximum and exceeds a first threshold, the pixel is identified as a feature point; otherwise it is not a feature point; num is a natural number;
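A minimal sketch of this neighbourhood test for num = 3 (a 3×3×3 block), assuming one octave is supplied as a list of equally sized response maps; names are illustrative:

```python
import numpy as np

def local_maxima(pyramid, threshold):
    """pyramid: list of L same-size 2-D response maps of one octave.
    A pixel is a feature point if it exceeds the threshold and is the
    unique maximum of its 3x3x3 (layer, row, col) neighbourhood."""
    stack = np.stack(pyramid)               # shape (L, H, W)
    L, H, W = stack.shape
    points = []
    for k in range(1, L - 1):               # interior layers only
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                v = stack[k, y, x]
                if v <= threshold:
                    continue
                block = stack[k - 1:k + 2, y - 1:y + 2, x - 1:x + 2]
                if v >= block.max() and (block == v).sum() == 1:
                    points.append((k, y, x))
    return points
```

A vectorised maximum filter would be used in practice; the triple loop keeps the comparison rule explicit.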
The saliency image S_i is generated as shown on the right of Fig. 3:
S_i(x, y) = max_{k = 1..L} G_o^k(x, y),
where S_i(x, y) is the value of pixel (x, y) of saliency image S_i, G_o^k(x, y) is the pixel value at point (x, y) of the k-th layer image G of octave o in scale space, and the maximum is taken over all L layers of octave o; k: 1 → L traverses all images from the 1st layer to the L-th layer;
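The layer-wise maximum above reduces to one call; a minimal sketch with an illustrative name:

```python
import numpy as np

def saliency_image(octave_layers):
    """S_i(x, y) = max over the L layer responses at (x, y) (step S3)."""
    return np.max(np.stack(octave_layers), axis=0)
```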
S4. From the feature point sets obtained in step S3, take the image with the most feature points as the reference image, compare the feature points of each remaining image with those of the reference image in turn, and compute a transformation model, thereby obtaining the registered original images and saliency images. Specifically, the registered original images and saliency images are obtained by the following steps:
A. Generate a feature descriptor for each feature point in the images, for comparison;
B. Compute the Euclidean distance between each feature point of the image to be registered and all feature points of the reference image, and record the nearest and second-nearest distances;
C. Compute the ratio of the nearest distance to the second-nearest distance obtained in step B and apply the following rule, thereby forming several matched feature point pairs: if the ratio is less than a second threshold (e.g. 0.8), the nearest point of the reference image is taken as a successfully matched feature point; otherwise the match is considered failed;
D. Retain the several closest matched pairs (e.g. 20 pairs), as shown in Fig. 6, compute the horizontal and vertical offsets d_x and d_y of each matched pair, and take the most frequent offsets t_x and t_y;
E. Compute the transformation model
T = [1, 0, t_x; 0, 1, t_y; 0, 0, 1],
and transform each coordinate (x_s, y_s) of the image to be registered by T to obtain the corresponding coordinate (x_r, y_r) in the reference image, thereby obtaining the registered image, as shown in Fig. 7; the saliency image of that image is transformed by the same steps to give the registered saliency image, as shown in Fig. 8.
The images in steps A–E comprise both the original images and the saliency images;
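Steps B–E can be sketched as follows, assuming descriptors are given as rows of an array and points as (x, y) tuples; the function name and the brute-force nearest-neighbour search are illustrative:

```python
import numpy as np
from collections import Counter

def match_and_estimate_shift(desc_src, pts_src, desc_ref, pts_ref,
                             ratio=0.8, keep=20):
    """Ratio-test matching (steps B-C) followed by the modal-offset
    translation estimate of steps D-E. Returns (t_x, t_y)."""
    matches = []
    for i, d in enumerate(desc_src):
        dists = np.linalg.norm(desc_ref - d, axis=1)   # Euclidean distances
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if second > 0 and nearest / second < ratio:     # ratio test
            matches.append((nearest, pts_src[i], pts_ref[order[0]]))
    matches.sort(key=lambda m: m[0])                    # keep closest pairs
    offsets = [(int(pr[0] - ps[0]), int(pr[1] - ps[1]))
               for _, ps, pr in matches[:keep]]
    (tx, ty), _ = Counter(offsets).most_common(1)[0]    # most frequent offset
    return tx, ty
```

The returned (t_x, t_y) fills the translation model T, which is then applied to both the original image and its saliency image.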
The feature descriptor is generated by the following steps:
a. Take the region of extent N·δ around the feature point, where N is a natural number and δ is the scale of the layer containing the feature point;
b. Divide the region obtained in step a into num1 × num1 sub-regions and filter each sub-region to obtain the four-dimensional vector v = (Σdx, Σ|dx|, Σdy, Σ|dy|);
c. Concatenate the four-dimensional vectors of all sub-regions to obtain the feature descriptor of the feature point;
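A sketch of steps a–c for a patch already cut out around a feature point. Plain image gradients stand in for the filtering of each sub-region (the patent does not specify the filter), and the function name is illustrative:

```python
import numpy as np

def surf_like_descriptor(patch, n_sub=4):
    """Split the patch into n_sub x n_sub sub-regions and concatenate
    v = (sum dx, sum |dx|, sum dy, sum |dy|) per sub-region,
    yielding a vector of 4 * n_sub**2 values."""
    patch = np.asarray(patch, dtype=float)
    dy, dx = np.gradient(patch)        # gradient stand-in for the filtering
    h = patch.shape[0] // n_sub
    w = patch.shape[1] // n_sub
    desc = []
    for r in range(n_sub):
        for c in range(n_sub):
            sx = dx[r * h:(r + 1) * h, c * w:(c + 1) * w]
            sy = dy[r * h:(r + 1) * h, c * w:(c + 1) * w]
            desc.extend([sx.sum(), np.abs(sx).sum(), sy.sum(), np.abs(sy).sum()])
    return np.array(desc)
```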
In the processing, simplifying the transformation model T to a pure translation increases the processing speed of the algorithm;
S5. Compare the registered saliency images obtained in step S4: at each position, set the pixel value of the weight image belonging to the saliency image with the maximum response at that position to a first set value (e.g. 1), set the pixel values of the remaining images at that position to a second set value (e.g. 0), and filter all the resulting images to form the weight images, as shown in Fig. 9;
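The winner-take-all weight construction can be sketched as follows; the smoothing (the patent combines it with guided filtering for spatial consistency) is omitted, and the function name is illustrative:

```python
import numpy as np

def weight_maps(saliency_maps):
    """Step S5 before filtering: weight 1 where an image's registered
    saliency map has the strongest response, 0 elsewhere."""
    stack = np.stack(saliency_maps)            # shape (n, H, W)
    winners = np.argmax(stack, axis=0)         # index of the winning image
    return [(winners == i).astype(float) for i in range(len(saliency_maps))]
```

The binary maps partition the image plane, so at every pixel exactly one weight is 1 before smoothing.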
S6. Multiply the weight image of each image obtained in step S5 element-wise with the corresponding registered original image and accumulate the products to obtain the final fused image. The fused image is specifically computed as:
F(x, y) = Σ_i W_i(x, y) · I_i(x, y),
where W_i is the weight image and I_i the registered original image of the i-th input.
Fig. 10 shows the result of fusing the unregistered images directly, while Fig. 11 shows the fusion result of the proposed method, which takes 15.4 s on average on a machine with a 4.20 GHz CPU and 8 GB of memory.
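The accumulation of step S6 can be sketched directly from the formula; the function name is illustrative:

```python
import numpy as np

def fuse(registered_images, weight_images):
    """Step S6: F = sum_i W_i * I_i, element-wise over all inputs."""
    fused = np.zeros_like(np.asarray(registered_images[0], dtype=float))
    for img, w in zip(registered_images, weight_images):
        fused += np.asarray(img, dtype=float) * np.asarray(w, dtype=float)
    return fused
```

With the binary weights of step S5 this simply copies, pixel by pixel, the sharpest source image into the output.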
Claims (10)
1. A fast fusion method for unregistered multi-focus images, comprising the following steps:
S1. acquiring several multi-focus images of the same scene;
S2. for each image obtained in step S1, constructing its scale-space pyramid;
S3. from the scale-space pyramid of each image obtained in step S2, obtaining the feature point set of each image and generating its saliency image;
S4. from the feature point sets obtained in step S3, taking the image with the most feature points as the reference image, comparing the feature points of each remaining image with those of the reference image in turn, and computing a transformation model, thereby obtaining the registered original images and saliency images;
S5. comparing the registered saliency images obtained in step S4: at each position, setting the pixel value of the weight image belonging to the saliency image with the maximum response at that position to a first set value, setting the pixel values of the remaining images at that position to a second set value, and filtering all the resulting images to form the weight images;
S6. multiplying the weight image of each image obtained in step S5 element-wise with the corresponding registered original image and accumulating the products to obtain the final fused image.
2. The fast fusion method for unregistered multi-focus images according to claim 1, characterised in that the scale-space pyramid of each image constructed in step S2 comprises O image groups (octaves), each containing L image layers G; box filters applied in the x-, y- and xy-directions to the same integral image perform fast filtering of the original image i to obtain the filter responses Dxx, Dyy and Dxy; the approximate Hessian matrix of each pixel is computed, and its determinant gives the image G of the corresponding layer in scale space.
3. The fast fusion method for unregistered multi-focus images according to claim 2, characterised in that the box filters approximate Gaussian second-order derivative filters, and the size w of a box filter is computed as w = (2^o · s + 1) × 3, where o is the index of the octave and s is the index of the layer within it.
4. The fast fusion method for unregistered multi-focus images according to claim 2, characterised in that each value of the integral image is computed as I_Σ(x, y) = Σ_{i≤x} Σ_{j≤y} I(i, j), where I_Σ(x, y) is the value of the integral image at (x, y) and I(i, j) is the pixel value of image I at (i, j).
5. The fast fusion method for unregistered multi-focus images according to claim 2, characterised in that the approximate Hessian matrix of each pixel is computed as H_approx = [Dxx, Dxy; Dxy, Dyy] with determinant det(H_approx) = Dxx · Dyy − (α · Dxy)², where Dxx, Dyy and Dxy are the filter responses obtained by fast-filtering the original image i with box filters in the x-, y- and xy-directions on the same integral image, and α is an approximation coefficient.
6. The fast fusion method for unregistered multi-focus images according to claim 5, characterised in that obtaining the feature point set of each image in step S3 comprises: for each image group, taking each pixel in turn as the centre point and comparing it with the remaining 3·num·num − 1 pixels of the num×num regions in its own layer, the layer above and the layer below; if the pixel's value is the maximum and exceeds a first threshold, the pixel is identified as a feature point; otherwise it is not a feature point; num is a natural number.
7. The fast fusion method for unregistered multi-focus images according to claim 5, characterised in that generating the saliency image in step S3 computes the saliency image S_i as S_i(x, y) = max_{k = 1..L} G_o^k(x, y), where S_i(x, y) is the value of pixel (x, y) of saliency image S_i, G_o^k(x, y) is the pixel value at (x, y) of the k-th layer image G of octave o in scale space, and the maximum is taken over all L layers of octave o; k: 1 → L traverses all images from the 1st layer to the L-th layer.
8. The fast fusion method for unregistered multi-focus images according to claim 7, characterised in that the registered original images and saliency images in step S4 are obtained by the following steps:
A. generating a feature descriptor for each feature point in the images, for comparison;
B. computing the Euclidean distance between each feature point of the image to be registered and all feature points of the reference image, and recording the nearest and second-nearest distances;
C. computing the ratio of the nearest distance to the second-nearest distance obtained in step B and applying the following rule, thereby forming several matched feature point pairs: if the ratio is less than a second threshold, the nearest point of the reference image is taken as a successfully matched feature point; otherwise the match is considered failed;
D. retaining the several closest matched pairs, computing the horizontal and vertical offsets d_x and d_y of each matched pair, and taking the most frequent offsets t_x and t_y;
E. computing the transformation model T = [1, 0, t_x; 0, 1, t_y; 0, 0, 1] and transforming each coordinate (x_s, y_s) of the image to be registered by T to obtain the corresponding coordinate (x_r, y_r) in the reference image, thereby obtaining the registered image;
the images in steps A–E comprising both the original images and the saliency images.
9. The fast fusion method for unregistered multi-focus images according to claim 8, characterised in that the feature descriptor is generated by the following steps:
a. taking the region of extent N·δ around the feature point, where N is a natural number and δ is the scale of the layer containing the feature point;
b. dividing the region obtained in step a into num1 × num1 sub-regions and filtering each sub-region to obtain the four-dimensional vector v = (Σdx, Σ|dx|, Σdy, Σ|dy|);
c. concatenating the four-dimensional vectors of all sub-regions to obtain the feature descriptor of the feature point.
10. The fast fusion method for unregistered multi-focus images according to claim 9, characterised in that the transformation model T is simplified to a pure translation, which increases the processing speed of the algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910028159.7A CN109754385A (en) | 2019-01-11 | 2019-01-11 | Fast fusion method for unregistered multi-focus images
Publications (1)
Publication Number | Publication Date |
---|---|
CN109754385A true CN109754385A (en) | 2019-05-14 |
Family
ID=66405542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910028159.7A Pending CN109754385A (en) | 2019-01-11 | 2019-01-11 | It is not registrated the rapid fusion method of multiple focussing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109754385A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110517213A (en) * | 2019-08-22 | 2019-11-29 | 杭州图谱光电科技有限公司 | Laplacian-pyramid-based real-time depth-of-field extension method for microscopes |
CN113012087A (en) * | 2021-03-31 | 2021-06-22 | 中南大学 | Image fusion method based on convolutional neural network |
CN115965844A (en) * | 2023-01-04 | 2023-04-14 | 哈尔滨工业大学 | Multi-focus image fusion method based on visual saliency priori knowledge |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1850270A1 (en) * | 2006-04-28 | 2007-10-31 | Toyota Motor Europe NV | Robust interest point detector and descriptor |
US20140119598A1 (en) * | 2012-10-31 | 2014-05-01 | Qualcomm Incorporated | Systems and Methods of Merging Multiple Maps for Computer Vision Based Tracking |
CN106940876A * | 2017-02-21 | 2017-07-11 | 华东师范大学 | Fast SURF-based UAV image fusion algorithm |
CN107248150A * | 2017-07-31 | 2017-10-13 | 杭州电子科技大学 | Multi-scale image fusion method based on guided-filter salient-region extraction |
CN107369148A * | 2017-09-20 | 2017-11-21 | 湖北工业大学 | Multi-focus image fusion method based on improved SML and guided filtering |
CN108052988A * | 2018-01-04 | 2018-05-18 | 常州工学院 | Guided saliency image fusion method based on wavelet transform |
CN108364273A * | 2018-01-30 | 2018-08-03 | 中南大学 | Spatial-domain multi-focus image fusion method |
CN108830818A * | 2018-05-07 | 2018-11-16 | 西北工业大学 | Fast multi-focus image fusion method |
CN112287929A (en) * | 2020-10-22 | 2021-01-29 | 北京理工大学 | Remote sensing image significance analysis method based on feature integration deep learning network |
- 2019-01-11 CN CN201910028159.7A patent/CN109754385A/en active Pending
Non-Patent Citations (6)
Title |
---|
HUA SHAO et al.: "Halo-Free Multi-Exposure Image Fusion Based on Sparse Representation of Gradient Features", Appl. Sci. 2018, 8(9), 1543; https://doi.org/10.3390/app8091543, pages 1 - 18 * |
LAVI_QQ_2910138025: "Understanding Coordinate Transformations of Points in Images (Rigid, Similarity, Affine, and Projective Transformations)" (in Chinese), pages 1 - 2, Retrieved from the Internet <URL:https://blog.csdn.net/liuweiyuxiang/article/details/86510191> * |
NIU Jing: "Research on Image Matching Algorithms Based on Local Invariant Features" (in Chinese), China Masters' Theses Full-text Database, Science and Technology Information, 15 February 2015 (2015-02-15), page 15 * |
HU Xiaojun: "MATLAB Applied Image Processing" (in Chinese), 31 March 2011 * |
HU Mengyun: "Research on Image Registration Algorithms Based on SURF and KAZE" (in Chinese), China Masters' Theses Full-text Database, Information Science and Technology, 15 May 2017 (2017-05-15) * |
SHAO Xin: "Machine Vision and Sensor Technology" (in Chinese), 31 August 2017 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110517213A (en) * | 2019-08-22 | 2019-11-29 | 杭州图谱光电科技有限公司 | Laplacian pyramid-based real-time depth of field extension method for microscope |
CN110517213B (en) * | 2019-08-22 | 2021-11-09 | 杭州图谱光电科技有限公司 | Laplacian pyramid-based real-time depth of field extension method for microscope |
CN113012087A (en) * | 2021-03-31 | 2021-06-22 | 中南大学 | Image fusion method based on convolutional neural network |
CN113012087B (en) * | 2021-03-31 | 2022-11-04 | 中南大学 | Image fusion method based on convolutional neural network |
CN115965844A (en) * | 2023-01-04 | 2023-04-14 | 哈尔滨工业大学 | Multi-focus image fusion method based on visual saliency priori knowledge |
CN115965844B (en) * | 2023-01-04 | 2023-08-18 | 哈尔滨工业大学 | Multi-focus image fusion method based on visual saliency priori knowledge |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106960414B (en) | Method for generating high-resolution HDR image from multi-view LDR image | |
CN110930309B (en) | Face super-resolution method and device based on multi-view texture learning | |
CN108537743A (en) | Face image enhancement method based on generative adversarial network | |
CN109754385A (en) | Fast fusion method for unregistered multi-focus images | |
CN106940876A (en) | Fast UAV image fusion algorithm based on SURF | |
CN109767388A (en) | Method, mobile terminal, and camera for improving image stitching quality based on superpixels | |
CN108805839A (en) | Joint-estimation image dehazing method based on convolutional neural networks | |
CN107767339B (en) | Binocular stereo image stitching method | |
CN106780303A (en) | Image stitching method based on local registration | |
CN106023230B (en) | Dense matching method suitable for deformed images | |
CN109300096A (en) | Multi-focus image fusion method and device | |
CN104616247B (en) | Aerial map stitching method based on superpixel SIFT | |
CN108470324A (en) | Robust binocular stereo image stitching method | |
CN106548494A (en) | Video image depth extraction method based on a scene sample library | |
CN110349215A (en) | Camera pose estimation method and device | |
CN107945110A (en) | Blind depth super-resolution computational imaging method for light-field array cameras | |
CN110838086A (en) | Outdoor image stitching method based on correlation template matching | |
CN106780326A (en) | Fusion method for improving panoramic image sharpness | |
CN113724155A (en) | Self-boosting learning method, device, and equipment for self-supervised monocular depth estimation | |
CN104751508B (en) | Fully automatic fast generation and completion method for new views in 3D stereoscopic film production | |
CN106875371A (en) | Image fusion method and device based on Bayer format | |
CN109300098A (en) | Multi-focus microscopic image fusion method based on wavelet transform | |
CN107067368B (en) | Street-view image stitching method and system based on image warping | |
CN112767246A (en) | Multi-magnification spatial super-resolution method and device for light field image | |
CN106851130A (en) | Video stitching method and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190514 |