CN102393958B - Multi-focus image fusion method based on compressive sensing - Google Patents
Abstract
The invention discloses a multi-focus image fusion method based on compressive sensing, in the technical field of image processing. The method addresses the main problem in the prior art that, because an optical lens has a limited depth of field, it is difficult to acquire a single image in which every object in the scene is in focus. The method is implemented by the following steps: (1) partition the input images into blocks; (2) compute the average gradient of each image sub-block to determine its fusion weight; (3) sparsely represent each sub-block and observe it with a random Gaussian matrix; (4) fuse the measurements of corresponding sub-blocks by weighting them with the fusion weights; and (5) recover the fused measurements with the orthogonal matching pursuit algorithm and apply the inverse wavelet transform to the result to obtain the fused, fully focused image. The method achieves a better image fusion effect with faster convergence, and can be applied to the fusion of multi-focus images.
Description
Technical field
The invention belongs to the technical field of image processing and relates to image fusion technology, specifically a multi-focus image fusion method that incorporates compressive sensing theory. The method can be used for multi-focus image fusion.
Background technology
As an emerging field of scientific research, image fusion has great potential for development. By extracting and combining information from multiple sensor images, it obtains a more accurate, comprehensive, and reliable description of the same scene or target, which supports further image analysis and understanding as well as target detection, recognition, and tracking. Since the early 1980s, multi-sensor image fusion has attracted broad interest and intensive research worldwide, and it is widely applied in automatic target recognition, computer vision, remote sensing, machine learning, medical image processing, and military applications. After nearly 30 years of development, image fusion research has reached a certain scale and many systems have been developed at home and abroad, but this does not mean the technology is mature: many theoretical and technical problems remain to be solved. In particular, domestic research on image fusion started late relative to international work and still lags behind it, so there is an urgent need for extensive and in-depth research on the basic theory and techniques.
With the rapid development of information technology, the demand for information grows daily. Against this background, traditional image fusion methods, such as those based on multi-scale transforms (see "Region based multisensor image fusion using generalized Gaussian distribution", Int. Workshop on Nonlinear Sign. and Image Process., Sep. 2007), must process very large volumes of data, which places immense pressure on signal sampling, transmission, and storage. How to relieve this pressure while still effectively extracting the useful information carried in the signal is one of the urgent problems in signal and information processing. Compressive sensing (CS) theory, which has emerged internationally in recent years, offers a way to relieve it: CS can fully extract the useful information in an image without assuming any prior information about the image in advance, and fusing only the extracted information greatly reduces the computation and storage burden. Scholars have applied compressed sensing widely in areas such as analog-to-information sampling, synthetic aperture radar imaging, remote sensing, magnetic resonance imaging, face recognition, and source coding, and a domestic research boom in compressed sensing has also begun in recent years. However, research applying compressive sensing theory to image fusion is still scarce. T. Wan et al. pioneered the application of compressed sensing to image fusion (see "Compressive Image Fusion", Proc. IEEE Int. Conf. Image Process., pp. 1308-1311, 2008), but their method adopts a maximum-absolute-value fusion rule, which not only has high computational complexity but also leaves considerable noise and stripe artifacts in the fusion result.
Summary of the invention
The object of the invention is to overcome the above shortcomings of the prior art by proposing a multi-focus image fusion method based on compressed sensing, so as to reduce the data volume and the computational complexity while improving the image fusion effect.
The key to realizing the object of the invention is to use compressed sensing's incomplete sampling of the signal to reduce the data volume, and to use the orthogonal matching pursuit algorithm to reduce the computational complexity. The whole process is divided into three parts: first, the multi-focus images are partitioned into blocks, each sub-block is sparsely represented, and the sparse representation is observed with a random Gaussian matrix; second, the measurements of corresponding sub-blocks are fused by a weighting method based on the average gradient; finally, the fused measurements are reconstructed with the orthogonal matching pursuit algorithm to obtain the fused fully focused image. The concrete steps are as follows:
(1) Partition the two input multi-focus images A and B into blocks, obtaining n image sub-blocks x_i and y_i of size 32 × 32 (i = 1, 2, …, n);
(2) Compute and record the average gradient of each pair of corresponding sub-blocks x_i and y_i of multi-focus images A and B;
(3) Apply the wavelet transform to each pair of corresponding sub-blocks x_i and y_i of images A and B, obtaining the sparsely transformed sub-blocks a_i and b_i; the wavelet adopted in the experiments is the CDF 9/7 biorthogonal wavelet basis, with 3 decomposition levels;
(4) Arrange each wavelet-transformed sub-block a_i and b_i as a column vector and observe it with the random Gaussian matrix, obtaining the measurements y_A and y_B of each pair of corresponding sub-blocks of images A and B;
(5) Fuse the measurements y_A and y_B of each pair of corresponding sub-blocks of images A and B as follows, obtaining the fused sub-block measurement y:

(5a) Compute the fusion weights of each pair of corresponding sub-blocks:

w_A = g_A / (g_A + g_B), w_B = 1 − w_A

where g_A and g_B are the average gradients of the corresponding sub-blocks of images A and B, and w_A and w_B are the fusion weights of the corresponding sub-blocks of A and B.
(5b) Perform the weighted fusion of the measurements of each pair of corresponding sub-blocks:

y = w_A·y_A + w_B·y_B

where y_A and y_B are the measurements of the corresponding sub-blocks of images A and B, and y is the fused measurement.
(6) Recover the fused sub-block measurement y with the orthogonal matching pursuit (OMP) algorithm, obtaining the recovered sub-block f;

(7) Apply the inverse wavelet transform to each recovered sub-block f, obtaining the fused fully focused image F.
Because the invention uses the average gradient, an image fusion quality index, to determine the sub-block fusion weights and combines this with compressive sensing theory, it has the following advantages over traditional image fusion methods:

(A) the sampling process does not require assuming any prior information about the image in advance;

(B) the fusion obtains better fusion weights for the multi-focus sub-blocks;

(C) the reconstructed data volume is small, saving storage space.

Experiments show that on the multi-focus image fusion problem the fusion results of the invention have better visual quality, and the convergence speed is also fast.
Description of drawings
Figure 1 is the overall implementation flowchart of the invention;

Fig. 2 shows the source images of the two groups of multi-focus images;

Fig. 3 shows the results of fusing the multi-focus Clock images with the invention and two existing fusion algorithms;

Fig. 4 shows the results of fusing the multi-focus Pepsi images with the invention and two existing fusion algorithms.
Embodiment
With reference to Fig. 1, specific implementation step of the present invention is as follows:
Step 1: Partition the two input multi-focus images A and B into blocks and compute the average gradient of each sub-block.

Images A and B are a left-focused image and a right-focused image of the same scene; the sharp parts of the two images are complementary, and the purpose of fusion is to obtain a single image in which both sides are sharp. Partitioning the images into blocks eases processing and reduces computational complexity. The invention divides the two multi-focus images A and B into sub-blocks of size 32 × 32 and computes the average gradient of each sub-block as follows:

g_I = (1 / (M × N)) · Σ_{x,y} sqrt( (Δ_x f(x, y)² + Δ_y f(x, y)²) / 2 )

where Δ_x f(x, y) and Δ_y f(x, y) are the first-order differences of the sub-block of multi-focus image I at pixel (x, y) in the x and y directions, I = A, B; M × N is the sub-block size; and g_I is the average gradient of the sub-block of image I.
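The blocking and average-gradient computation of Step 1 can be sketched in Python as follows. This is a minimal sketch assuming grayscale images stored as NumPy arrays; the function names are illustrative, not from the patent.

```python
import numpy as np

def split_into_blocks(img, size=32):
    """Split a grayscale image into non-overlapping size x size sub-blocks.

    Assumes the image dimensions are multiples of `size`, as with the
    512 x 512 test images divided into 32 x 32 blocks.
    """
    h, w = img.shape
    return [img[r:r + size, c:c + size]
            for r in range(0, h, size)
            for c in range(0, w, size)]

def average_gradient(block):
    """Average gradient of one sub-block, as defined above: the mean over
    pixels of sqrt((dx^2 + dy^2) / 2), where dx and dy are first-order
    differences along the two axes (cropped to a common shape)."""
    block = block.astype(np.float64)
    dx = np.diff(block, axis=1)[:-1, :]   # horizontal first-order differences
    dy = np.diff(block, axis=0)[:, :-1]   # vertical first-order differences
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))
```

A sharp (high-detail) block yields a larger average gradient than a flat one, which is what makes this quantity usable as a sharpness-based fusion weight.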
Step 2: Apply the wavelet transform to each pair of corresponding sub-blocks x_i and y_i of multi-focus images A and B, obtaining the sparsely transformed sub-blocks a_i and b_i.
The sub-blocks of images A and B are sparsely transformed so that the signal satisfies the precondition of compressed sensing: as long as the signal is compressible, or sparse in some transform domain, the high-dimensional transformed signal can be projected onto a low-dimensional space with an observation matrix that is incoherent with the transform basis, and the original signal can then be reconstructed with high probability from these few projections by solving an optimization problem. The sparse transform adopted in this example is the CDF 9/7 biorthogonal wavelet transform with 3 decomposition levels, but it is not limited to the wavelet transform; the discrete cosine transform (DCT), the Fourier transform (FT), and others can also be used.
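As a sketch of the sparsifying transform, the following uses a single-level 2D Haar transform in place of the CDF 9/7 wavelet, which keeps the example dependency-free; in practice a wavelet library (for example PyWavelets, whose 'bior4.4' basis is the CDF 9/7 wavelet) would supply the transform with 3 decomposition levels as used in the patent.

```python
import numpy as np

def haar2d(block):
    """One level of the orthonormal 2D Haar transform, an illustrative
    substitute for the CDF 9/7 biorthogonal wavelet used in the patent.
    Smooth image content concentrates into the low-low (top-left) quadrant,
    leaving the detail coefficients sparse."""
    x = block.astype(np.float64)
    # Transform along rows: pairwise averages (lo) and differences (hi).
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    x = np.hstack([lo, hi])
    # Transform along columns.
    lo = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)
    hi = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)
    return np.vstack([lo, hi])
```

On a constant block, every detail coefficient is exactly zero and all energy sits in the low-low quadrant, illustrating why natural-image sub-blocks become compressible in a wavelet domain.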
Step 3: Observe each pair of corresponding sub-blocks of multi-focus images A and B with the random Gaussian matrix (CS observation).

CS observation of an image is a linear process. To guarantee accurate reconstruction, the necessary and sufficient condition for the linear system to have a determined solution is that the observation matrix and the sparse transform basis matrix satisfy the restricted isometry property (RIP). The random Gaussian matrix is incoherent with the matrices formed by most fixed orthogonal bases, so RIP can be satisfied when any such orthogonal basis serves as the sparse transform basis; this property motivates its choice as the observation matrix. The invention therefore adopts the random Gaussian matrix as the observation matrix and observes each pair of corresponding sub-blocks as follows:
(3a) Arrange the N × N sub-blocks a_i and b_i of each pair of corresponding sub-blocks of images A and B into N² × 1 column vectors θ_A and θ_B;

(3b) Randomly generate an M × N² random Gaussian matrix, orthogonalize it, and use it to observe the column vectors according to:

y_I = Φ θ_I

where Φ is the random Gaussian observation matrix, θ_I is the column vector of a sub-block, I = A, B, and y_I is the measurement of the sub-block. The sampling rate of each sub-block in this example is M/N², and it is controlled by adjusting M, the number of rows of the random Gaussian matrix.

After each sub-block of multi-focus images A and B is observed, its measurement is obtained as a vector of size M × 1; in the experiments, the same observation matrix Φ is used to observe every sub-block.
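Step (3b) can be sketched as follows, assuming NumPy. Row-orthonormalization via QR is one common way to realize the "orthogonalization" of the Gaussian matrix mentioned above; the patent does not specify the exact procedure, so this detail is an assumption.

```python
import numpy as np

def gaussian_measure(coeff_block, M, seed=None):
    """Observe a sparse-transformed N x N sub-block with an M x N^2 random
    Gaussian matrix whose rows are orthonormalized, returning the M x 1
    measurement and the matrix itself. Sampling rate = M / N^2."""
    rng = np.random.default_rng(seed)
    n2 = coeff_block.size
    phi = rng.standard_normal((M, n2))
    # Orthonormalize the rows: QR on the transpose gives orthonormal columns,
    # so transposing back gives orthonormal rows.
    q, _ = np.linalg.qr(phi.T)           # shape n2 x M, orthonormal columns
    phi = q.T                            # shape M x n2, orthonormal rows
    theta = coeff_block.reshape(-1, 1)   # column vector, N^2 x 1
    return phi @ theta, phi
```

In the experiments the same Φ is reused for every sub-block, so in a full pipeline the matrix would be generated once and passed in rather than regenerated per block.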
Step 4: Fuse each pair of corresponding sub-blocks of the two multi-focus images A and B by the weighting method.

The measurement obtained by observing a sub-block of image A or B with the random Gaussian matrix retains all the information of the original sub-block, so the fusion weight of each sub-block measurement is determined from the average gradient of the original sub-block. The average gradient is an evaluation index for image fusion that reflects the sharpness of an image: a sharp block has a large average gradient and should therefore receive a large weight in the fusion.
The pairs of corresponding sub-blocks of images A and B are fused by the weighting method as follows:

(4a) Compute the fusion weights of each pair of corresponding sub-blocks:

w_A = g_A / (g_A + g_B), w_B = 1 − w_A

where g_A and g_B are the average gradients of the corresponding sub-blocks of images A and B, and w_A and w_B are the fusion weights of the corresponding sub-blocks of A and B;

(4b) Perform the weighted fusion of the measurements of each pair of corresponding sub-blocks:

y = w_A·y_A + w_B·y_B

where y_A and y_B are the measurements of the corresponding sub-blocks of images A and B, and y is the fused measurement.
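Steps (4a) and (4b) reduce to a few lines. Note that the exact weight formula appears only as an image in the original document, so w_A = g_A / (g_A + g_B) below is a reconstruction chosen to be consistent with the stated relation w_B = 1 − w_A; the flat-block fallback is likewise an added assumption.

```python
import numpy as np

def fuse_measurements(y_a, y_b, grad_a, grad_b):
    """Weighted fusion of the measurement vectors of one pair of
    corresponding sub-blocks: weights proportional to the average
    gradients, normalized to sum to 1."""
    if grad_a + grad_b == 0:        # both blocks flat: fall back to averaging
        w_a = 0.5
    else:
        w_a = grad_a / (grad_a + grad_b)
    w_b = 1.0 - w_a
    return w_a * y_a + w_b * y_b
```

Because the measurement operator is linear, fusing measurements this way is equivalent to measuring the correspondingly weighted combination of the sparse coefficients, which is what makes fusion in the compressed domain legitimate.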
Step 5: Recover the fused sub-block measurements with the orthogonal matching pursuit algorithm, obtaining the recovered sub-blocks.

Orthogonal matching pursuit (OMP) is a greedy iterative algorithm that trades a somewhat larger number of required samples than the basis pursuit (BP) algorithm for a reduction in computational complexity. Using OMP to solve the optimization problem and reconstruct the signal greatly improves the computation speed and is easy to implement. In practice, the fused sub-blocks are recovered one by one; for the concrete steps of the algorithm, see "Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit", IEEE Transactions on Information Theory, vol. 53, no. 12, December 2007.
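A minimal OMP implementation in the spirit of the cited Tropp-Gilbert algorithm can be sketched as follows; this is an illustrative sketch, not the patent's exact code, and the stopping tolerance is an added parameter.

```python
import numpy as np

def omp(phi, y, sparsity, tol=1e-10):
    """Orthogonal matching pursuit: greedily select the column of phi most
    correlated with the current residual, then re-solve least squares on
    the selected support and update the residual."""
    y = np.asarray(y).reshape(-1)
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(sparsity):
        if np.linalg.norm(residual) < tol:
            break                                   # already fit exactly
        # Index of the atom best matching the residual.
        idx = int(np.argmax(np.abs(phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Orthogonal projection: least-squares fit on the current support.
        coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coef
    x = np.zeros(phi.shape[1])
    x[support] = coef
    return x
```

With enough measurements relative to the sparsity level, OMP recovers a sparse coefficient vector exactly from its Gaussian measurements, which is the property the reconstruction step relies on.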
Step 6: Apply the inverse wavelet transform to the sub-blocks of multi-focus images A and B recovered by the orthogonal matching pursuit algorithm.

The data recovered by the orthogonal matching pursuit algorithm are the sparse form of the fused fully focused image. Applying the inverse wavelet transform to each recovered sub-block yields the fused fully focused sub-blocks, and assembling these sub-blocks into one image yields the fused fully focused image.
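Step 6 can be sketched by inverting the illustrative single-level Haar transform used earlier and tiling the recovered sub-blocks back into the image; in the patent the inverse CDF 9/7 wavelet with 3 levels is used instead, and the row-major tiling order is an assumption matching the blocking sketch.

```python
import numpy as np

def inverse_haar2d(t):
    """Invert one level of the orthonormal 2D Haar transform
    (columns first, then rows, undoing the forward order)."""
    n = t.shape[0]
    lo, hi = t[:n // 2, :], t[n // 2:, :]
    x = np.empty_like(t, dtype=np.float64)
    x[0::2, :] = (lo + hi) / np.sqrt(2)
    x[1::2, :] = (lo - hi) / np.sqrt(2)
    lo, hi = x[:, :n // 2].copy(), x[:, n // 2:].copy()
    x[:, 0::2] = (lo + hi) / np.sqrt(2)
    x[:, 1::2] = (lo - hi) / np.sqrt(2)
    return x

def assemble_blocks(blocks, image_shape, size=32):
    """Tile recovered sub-blocks back into the fused image, in the same
    row-major order used when splitting."""
    h, w = image_shape
    out = np.empty((h, w))
    it = iter(blocks)
    for r in range(0, h, size):
        for c in range(0, w, size):
            out[r:r + size, c:c + size] = next(it)
    return out
```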
Effect of the present invention can illustrate by emulation experiment:
1. experiment condition
The experiments were run on a microcomputer with an Intel Core(TM)2 Duo 2.33 GHz CPU and 2 GB of memory, with Matlab 7.0.1 as the programming platform. The image data used in the experiments are two groups of registered multi-focus images, each of size 512 × 512, obtained from the image fusion website http://www.imagefusion.org/. The first group is the Clock images, shown in Fig. 2(a) and Fig. 2(b), where Fig. 2(a) is the Clock source image focused on the right and Fig. 2(b) is the Clock source image focused on the left. The second group is the Pepsi images, shown in Fig. 2(c) and Fig. 2(d), where Fig. 2(c) is the Pepsi source image focused on the right and Fig. 2(d) is the Pepsi source image focused on the left.
2. experiment content
(2a) The Clock images were fused with the method of the invention and two existing fusion methods, with the sampling rate of each group set to 0.3, 0.5, and 0.7 in turn. The fusion results are shown in Fig. 3, where Fig. 3(a) is the result of the existing averaging method, Fig. 3(b) is the result of the method of the article "Compressive Image Fusion" (Proc. IEEE Int. Conf. Image Process., pp. 1308-1311, 2008), and Fig. 3(c) is the result of the invention; all three figures use a sampling rate of 0.5.

(2b) The Pepsi images were fused with the method of the invention and the two existing fusion methods, with the sampling rate of each group set to 0.3, 0.5, and 0.7 in turn. The fusion results are shown in Fig. 4, where Fig. 4(a) is the result of the existing averaging method, Fig. 4(b) is the result of the method of the article "Compressive Image Fusion" (Proc. IEEE Int. Conf. Image Process., pp. 1308-1311, 2008), and Fig. 4(c) is the result of the invention; all three figures use a sampling rate of 0.5. The averaging method operates identically to the method of the invention except for the fusion rule: its fusion weights are w_A = w_B = 0.5.
3. experimental result
The fusion method of the invention was compared with the weighted averaging method and the method of the article "Compressive Image Fusion" (Proc. IEEE Int. Conf. Image Process., pp. 1308-1311, 2008) on three image evaluation indices to assess its effect. The quality evaluation indices of the three methods on the two groups of multi-focus images are given in Table 1:

Table 1. Quality evaluation indices for multi-focus image fusion

In Table 1, "Mean", "CS-max-abs", and "Ours" denote the existing averaging method, the method of the article "Compressive Image Fusion" (2008), and the method of the invention, respectively; R is the sampling rate, MI is the mutual information, IE is the information entropy, Q is the edge preservation degree, and T is the image reconstruction time in seconds (s). Specifically:

Mutual information (MI): reflects how much information the fused image extracts from the source images; the larger the mutual information, the more information is extracted.

Information entropy (IE): an important index of the richness of image information; the size of the entropy reflects how much information the image carries, and a larger entropy indicates a larger amount of information.

Edge preservation degree (Q): essentially measures how well the fused image preserves the edge information of the input images; its value ranges from 0 to 1, and the closer to 1, the better the edges are preserved.
The data in Table 1 show that, on the performance indices, the edge preservation degree Q of the method of the invention is higher than those of the existing averaging method and the CS-max-abs method; on the mutual information MI, the method of the invention is higher than the averaging method and much higher than the CS-max-abs method; on the information entropy IE, it is comparable to the averaging method but lower than the CS-max-abs method. On the image reconstruction time T, the method of the invention needs much less time than the CS-max-abs fusion method. As the sampling rate increases, every index of the fusion results also improves gradually.

Figs. 3 and 4 show that, on both groups of multi-focus images, the fusion results of the method of the invention have better visual quality than those of the averaging method and the CS-max-abs fusion method; the results of the CS-max-abs method contain considerable noise and stripe texture and have lower contrast. The CS-max-abs method scores higher than the method of the invention on the information entropy IE even though its visual quality is worse, because the noise produced during its fusion inflates the entropy; the IE index therefore does not truly reflect the amount of useful information in the fused image.
The above experiments show that the proposed multi-focus image fusion method based on compressed sensing achieves good visual quality on the multi-focus image fusion problem, with low computational complexity.
Claims (3)
1. A multi-focus image fusion method based on compressed sensing, comprising the steps of:

(1) partitioning the two input multi-focus images A and B into blocks, obtaining n image sub-blocks x_i and y_i of size 32 × 32, i = 1, 2, …, n;
(2) computing and recording the average gradient of each pair of corresponding sub-blocks x_i and y_i of multi-focus images A and B;
(3) applying the wavelet transform to each pair of corresponding sub-blocks x_i and y_i of images A and B, obtaining the sparsely transformed sub-blocks a_i and b_i, the wavelet adopted being the CDF 9/7 biorthogonal wavelet basis with 3 decomposition levels;
(4) arranging each wavelet-transformed sub-block a_i and b_i as a column vector and observing the column vectors with the random Gaussian matrix, obtaining the measurements y_A and y_B of each pair of corresponding sub-blocks of images A and B;
(5) fusing the measurements y_A and y_B of each pair of corresponding sub-blocks of images A and B as follows, obtaining the fused sub-block measurement y:

(5a) computing the fusion weights of each pair of corresponding sub-blocks:

w_A = g_A / (g_A + g_B), w_B = 1 − w_A

where g_A and g_B are the average gradients of the corresponding sub-blocks of images A and B, and w_A and w_B are the fusion weights of the corresponding sub-blocks of A and B;

(5b) performing the weighted fusion of the measurements of each pair of corresponding sub-blocks:

y = w_A·y_A + w_B·y_B

where y_A and y_B are the measurements of the corresponding sub-blocks of images A and B, and y is the fused measurement;
(6) recovering the fused sub-block measurement y with the orthogonal matching pursuit (OMP) algorithm, obtaining the recovered sub-block f;

(7) applying the inverse wavelet transform to each recovered sub-block f, obtaining the fused fully focused image F.
2. The multi-focus image fusion method based on compressed sensing according to claim 1, wherein the average gradient of each pair of corresponding sub-blocks x_i and y_i of images A and B in step (2) is computed as:

g_I = (1 / (M × N)) · Σ_{x,y} sqrt( (Δ_x f(x, y)² + Δ_y f(x, y)²) / 2 ), I = A, B

where Δ_x f(x, y) and Δ_y f(x, y) are the first-order differences of the sub-block at pixel (x, y) in the x and y directions, and M × N is the sub-block size.
3. The multi-focus image fusion method based on compressed sensing according to claim 1, wherein the observation of the column vectors with the random Gaussian matrix in step (4) is carried out according to:

y_I = Φ θ_I

where Φ is the random Gaussian observation matrix, θ_I is the column vector of a sub-block, I = A, B, and y_I is the measurement of the sub-block.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110199364 CN102393958B (en) | 2011-07-16 | 2011-07-16 | Multi-focus image fusion method based on compressive sensing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102393958A CN102393958A (en) | 2012-03-28 |
CN102393958B true CN102393958B (en) | 2013-06-12 |