CN110097530B - Multi-focus image fusion method based on super-pixel clustering and combined low-rank representation


Info

Publication number
CN110097530B
CN110097530B (application number CN201910318421.1A)
Authority
CN
China
Prior art keywords
feature, super, matrix, low, pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910318421.1A
Other languages
Chinese (zh)
Other versions
CN110097530A (en)
Inventor
张强
王凡
焦强
刘健
韩军功
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201910318421.1A priority Critical patent/CN110097530B/en
Publication of CN110097530A publication Critical patent/CN110097530A/en
Application granted granted Critical
Publication of CN110097530B publication Critical patent/CN110097530B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06F18/2135 — Pattern recognition; feature extraction by transforming the feature space, based on approximation criteria, e.g. principal component analysis
    • G06F18/23 — Pattern recognition; clustering techniques
    • G06F18/2323 — Non-hierarchical clustering techniques based on graph theory, e.g. minimum spanning trees [MST] or graph cuts
    • G06F18/28 — Determining representative reference patterns, e.g. by averaging or distorting; generating dictionaries
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/11 — Image analysis; segmentation; region-based segmentation
    • G06T2207/20221 — Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Discrete Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-focus image fusion method based on super-pixel clustering and joint low-rank representation, and relates to the technical field of image processing. The method receives registered source images $I_A$ and $I_B$; using a spectral clustering algorithm, it performs super-pixel segmentation of $I_A$ and $I_B$, clusters the super-pixels with the K-means algorithm, extracts features from the source images in units of super-pixels, and constructs a feature matrix for each super-pixel class. A dictionary and a joint low-rank representation model are then constructed and, combined with a Laplacian consistency constraint term, used to perform joint low-rank representation of the feature matrices corresponding to the super-pixel classes of $I_A$ and $I_B$, computing the low-rank representation coefficients $Z_A$ and error matrix $E_A$ of $I_A$ and the low-rank representation coefficients $Z_B$ and error matrix $E_B$ of $I_B$. A fusion rule designed from the representation coefficients and errors constructs the final fused image. The method overcomes the blocky blur and abrupt transition edges that appear in fusion results of the prior art, and improves the quality and visual effect of image fusion.

Description

Multi-focus image fusion method based on super-pixel clustering and combined low-rank representation
Technical Field
The invention relates to the technical field of image processing, in particular to a multi-focus image fusion method based on super-pixel clustering and combined low-rank representation.
Background
Multi-focus image fusion is an important branch of image fusion: it integrates the in-focus information scattered across different images of the same scene into a single all-in-focus, clear image, so that the photographed scene can be described accurately.
Traditional multi-focus image fusion mainly divides an image into regions, measures the focused regions with a focus measurement index, finds the clearer regions in the source images, and fuses them. Such methods have poor stability and are sensitive to noise and misregistration, so a large amount of regional blurring can appear in the fusion result. Performing multi-focus image fusion with multi-scale decomposition allows the image to be decomposed into different scales and a corresponding fusion rule to be selected for each scale, which improves the fusion effect to a certain extent.
Since sparse representation is strongly robust to noise and misregistration, many scholars have applied it to multi-focus image fusion. For example, the paper "B. Yang, S. Li, Multifocus image fusion and restoration with sparse representation, IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 4, pp. 884-892, 2010" first proposed applying sparse representation to image fusion; the paper "S. T. Li, H. T. Yin, L. Y. Fang, Group-sparse representation with dictionary learning for medical image denoising and fusion, IEEE Transactions on Biomedical Engineering, 2012, 59(12):3450-3459" proposed a fusion algorithm based on the group sparse representation model (GSR); the paper "N. Yu, T. Qiu, et al., Image features extraction and fusion based on joint sparse representation, IEEE Journal of Selected Topics in Signal Processing, 5(5) (2011) 1074-1082" proposed a multi-focus image fusion algorithm based on a joint sparse representation model; and the paper "J. Wang, J. Peng, et al., Fusion method for infrared and visible images by using non-negative sparse representation, Infrared Physics & Technology, 67 (2014) 477-489" proposed an infrared and visible light image fusion algorithm based on a non-negative sparse representation model. Compared with traditional image fusion algorithms based on the spatial domain and multi-scale transforms, these algorithms improved the quality of image fusion to a large extent and fully demonstrated the flexibility and strong representation capability of the sparse representation model. However, fusion methods based on sparse representation generally perform sparse representation and fusion in units of image sub-blocks, which causes a large number of blocky blurs and abrupt transition edges, i.e., "blocking effects" and "jagged" boundaries, to appear in the fusion result. To alleviate this problem and improve the quality and visual effect of image fusion, most algorithms use a sliding-window technique to sample the source images. However, the sliding window not only greatly increases the amount of computation, but also smooths some high-frequency information, causing a loss of detail.
The patent application with application number 201810086733.X provides a multi-focus image fusion algorithm based on consistency and non-negative sparse representation; the algorithm enforces spatial consistency in the feature extraction of the source images, so that representation coefficients corresponding to spatially adjacent image blocks with similar features have higher similarity. Such methods exploit spatial-consistency information, which alleviates blocky blur to a certain extent, but cannot solve the problem of "jagged" boundaries. The paper "J. Duan, L. Chen, C. L. P. Chen, Multifocus image fusion with superpixel segmentation and superpixel-based mean filtering, Applied Optics 55(36) (2016) 10352-10362" also uses a superpixel segmentation technique, but it generates a sharpness-enhanced image using block-based sparse representation and the mean image of the source images, and then segments and compares sharpness in units of super-pixels. Although this can solve the "blocking effect" problem, the sharpness-enhanced image generated with sparse representation carries large errors, and the method relies excessively on the superpixel segmentation to separate the in-focus and out-of-focus regions, which causes a large amount of regional blurring and degrades fusion quality.
Disclosure of Invention
Aiming at the defects in the prior art, the embodiment of the invention provides a multi-focus image fusion method based on super-pixel clustering and joint low-rank representation, which comprises the following steps:
(1) Receiving registered source images $I_A$ and $I_B$, where $I_A \in \mathbb{R}^{m\times n}$, $I_B \in \mathbb{R}^{m\times n}$, $m$ denotes the width of $I_A$ and $I_B$, and $n$ denotes their height;
(2) Using a spectral clustering algorithm, performing super-pixel segmentation on either of the source images $I_A$ or $I_B$, and mapping the super-pixel label matrix obtained after segmentation onto the other source image, so that $I_A$ and $I_B$ have the same number of super-pixel blocks and the super-pixel blocks at the same position have the same shape and size, yielding the super-pixel sets $\{sp_{A,i} \mid i=1,2,\dots,N\}$ and $\{sp_{B,i} \mid i=1,2,\dots,N\}$ corresponding to $I_A$ and $I_B$, where $N$ denotes the number of super-pixels;
(3) Clustering the super-pixels of $I_A$ and $I_B$:
(31) For the super-pixel set $\{sp_{A/B,i} \mid i=1,2,\dots,N\}$ of either image $I_A$ or $I_B$, extracting the R, G and B values of all pixel points in the super-pixel set and computing the mean of each to obtain the color feature of every super-pixel, then clustering the super-pixels of $I_A$ or $I_B$ with the K-means algorithm on these color features to obtain a clustering result;
(32) Mapping the clustering result onto the super-pixel set of the other source image to obtain the two super-pixel class sets $\{C_{A,k} \mid k=1,2,\dots,K\}$ and $\{C_{B,k} \mid k=1,2,\dots,K\}$ corresponding to $I_A$ and $I_B$, where $K$ denotes the number of clusters;
(4) Performing feature extraction on $I_A$ and $I_B$ to construct the feature matrix $X_{A/B,k}$ corresponding to each super-pixel class $C_{A/B,k}$ in the source images;
(5) Constructing a dictionary:
(51) Using the formula
$$I_M(x,y) = \tfrac{1}{2}\big(I_A(x,y) + I_B(x,y)\big)$$
to obtain the mean image $I_M(x,y)$ of $I_A$ and $I_B$;
(52) Using the formula
$$I_N(x,y) = I_M(x,y) * G_\sigma(x,y),$$
where $G_\sigma$ is a Gaussian kernel and $*$ denotes convolution, applying Gaussian blur to the mean image to obtain the Gaussian-blurred image $I_N(x,y)$; extracting the feature values of $I_N(x,y)$ and constructing the feature matrix of the mean image;
(53) Reducing the dimension of the feature matrix with the principal component analysis (PCA) technique to obtain the low-rank representation dictionary $D$;
(6) Constructing a joint low-rank representation model and performing joint low-rank representation on the feature matrices corresponding to the super-pixel classes;
(7) Solving the joint low-rank representation model with the linearized alternating direction method with adaptive penalty, and computing the low-rank representation coefficients $Z_A$ and error matrix $E_A$ of $I_A$, and the low-rank representation coefficients $Z_B$ and error matrix $E_B$ of $I_B$;
(8) Constructing the fusion decision label map:
(81) Constructing a measure-of-focus factor from the low-rank representation coefficients $Z$ and error matrix $E$ by weighted summation, namely $MOF_{A/B,i} = \eta\,\|z_{A/B,i}\|_2 + (1-\eta)\,\|e_{A/B,i}\|_2$, where $z_{A/B,i}$ and $e_{A/B,i}$ are the $i$-th rows of $Z_{A/B}$ and $E_{A/B}$ respectively, $\|\cdot\|_2$ denotes the vector $\ell_2$ norm, and $\eta$ is a weight;
(82) Using the measure-of-focus factor to construct a fusion label map $\Upsilon(x,y)$ of the same size as the source images, where
$$\Upsilon(x,y) = \begin{cases} 1, & MOF_{A,i} > MOF_{B,i} \text{ and } (x,y) \in sp_i, \\ 0, & \text{otherwise;} \end{cases}$$
(9) According to the formula $I_F(x,y) = \Upsilon(x,y)\, I_A(x,y) + (1-\Upsilon(x,y))\, I_B(x,y)$, constructing the final fused image.
Preferably, performing feature extraction on $I_A$ and $I_B$ to construct the feature matrix $X_{A/B,k}$ corresponding to each super-pixel class $C_{A/B,k}$ in the source images comprises:
for each pixel point $p_{A/B,j}$ in the source image, establishing a feature vector $v_{A/B,j} \in \mathbb{R}^d$, where $d = 44$ is the dimension of the feature vector, comprising color, edge and texture features: the color feature is a 6-dimensional feature comprising the RGB components and the HSI components; the edge feature is an 18-dimensional feature generated by high-pass filter responses and discrete wavelet transform edge operators; and the texture feature is a 20-dimensional feature in 4 directions generated by the gray-level co-occurrence matrix;
for each super-pixel $sp_{A/B,i}$ of $I_A$ and $I_B$, establishing a feature vector $x_{A/B,i} \in \mathbb{R}^d$, obtained by averaging the feature vectors of the pixels contained in the super-pixel, i.e.
$$x_{A/B,i} = \frac{1}{|sp_{A/B,i}|}\sum_{p_j \in sp_{A/B,i}} v_{A/B,j},$$
where $i$ denotes the $i$-th super-pixel and $|sp_{A/B,i}|$ denotes the number of pixels $p_j$ contained in the $i$-th super-pixel;
constructing a feature matrix for each super-pixel class: the feature vectors of the super-pixels contained in each class $C_{A/B,k}$ are collected and arranged as columns, giving the feature matrix $X_{A/B,k} \in \mathbb{R}^{d\times N_k}$ corresponding to that class, i.e.
$$X_{A/B,k} = \big[x_{A/B,k,1},\, x_{A/B,k,2},\, \dots,\, x_{A/B,k,N_k}\big],$$
where $x_{A/B,k,i}$ denotes the feature vector corresponding to the $i$-th super-pixel $sp_{A/B,i}$ in the $k$-th class $C_{A/B,k}$ of the source image and $N_k$ is the number of super-pixels in class $k$.
Preferably, constructing a joint low-rank representation model and performing joint low-rank representation on the feature matrices corresponding to the super-pixel classes comprises:
applying a joint low-rank constraint to the different super-pixel classes with a low-rank representation model and, on the premise that $X_k = DZ_k + E_k$ is satisfied, combining it with a Laplacian constraint term enforcing intra-class similarity and spatial-position consistency, which gives the joint low-rank representation model
$$\min_{Z,E}\ \sum_{k=1}^{K} \|Z_k\|_* + \alpha\,\|E\|_{2,1} + \beta\,\mathrm{tr}\!\left(Z L Z^{T}\right) \quad \text{s.t.}\quad X_k = D Z_k + E_k,\ k = 1,2,\dots,K,$$
where $DZ_k$ denotes the intrinsic low-rank part contained in the feature matrix $X_k$, $Z_k \in \mathbb{R}^{M\times N_k}$ is the low-rank representation coefficient matrix to be solved, $E_k \in \mathbb{R}^{d\times N_k}$ denotes the error or noise part of the matrix, $\|Z_k\|_*$ denotes the nuclear-norm constraint on the coefficients $Z_k$, $K$ denotes the number of super-pixel classes, $\sum_{k=1}^{K}\|Z_k\|_*$ imposes the joint low-rank constraint on the representation coefficients of the $K$ clustered classes, the matrices $Z \in \mathbb{R}^{M\times N}$ and $E \in \mathbb{R}^{d\times N}$ are defined as $Z = [Z_1, Z_2, \dots, Z_K]$ and $E = [E_1, E_2, \dots, E_K]$, $\|E\|_{2,1}$ denotes the $\ell_{2,1}$ norm of $E$, $\alpha$ is a trade-off factor controlling the sparse error $E$, and $\beta$ is a trade-off factor controlling the similarity-consistency constraint term
$$\mathrm{tr}\!\left(Z L Z^{T}\right) = \frac{1}{2}\sum_{t=1}^{N}\sum_{m=1}^{N}\omega_{t,m}\,\|z_t - z_m\|_2^2,$$
in which $Z^T$ denotes the transpose of $Z$, $z_t$ denotes the $t$-th column of the low-rank representation coefficient $Z$, $t \in [1,N]$, $z_m$ denotes the $m$-th column, $m \in [1,N]$, and $\omega_{t,m}$ denotes the similarity of the $t$-th and $m$-th super-pixels, $k = 1,2,\dots,K$, where
$$\omega_{t,m} = \exp\!\left(-\frac{\|x_t - x_m\|_2^2}{\sigma^2}\right),$$
in which $x_t$ denotes the $t$-th column of the feature matrix $X$, $x_m$ denotes the feature vector corresponding to the $m$-th super-pixel, and $\sigma$ is a scale parameter. Using $\omega_{t,m}$, a similarity matrix $W \in \mathbb{R}^{N\times N}$ and a diagonal matrix $H \in \mathbb{R}^{N\times N}$ are established, where the $(t,m)$-th entry of $W$ is $W_{t,m} = \omega_{t,m}$ and the $t$-th diagonal element of $H$ is $H_{t,t} = \sum_m W_{t,m}$; from $W$ and $H$, the Laplacian matrix $L$ can be established, i.e. $L = H - W$.
The multi-focus image fusion method based on super-pixel clustering and combined low-rank representation provided by the embodiment of the invention has the following beneficial effects:
by adopting the super-pixel clustering technology and the combined low-rank representation technology, the problem that blocky fuzzy and abrupt transition edges appear in a fusion result in the prior art is solved, and the quality and the visual effect of image fusion are improved.
Drawings
Fig. 1 is a schematic flowchart of a multi-focus image fusion method based on super-pixel clustering and joint low-rank representation according to an embodiment of the present invention;
FIGS. 2a-2d are schematic diagrams of fusion results obtained using a convolutional neural network, guided filtering, sparse representation, and robust sparse representation combined with a Laplacian consistency constraint, respectively;
fig. 2e is a schematic diagram of a fusion result obtained by using a multi-focus image fusion method based on super-pixel clustering and joint low-rank representation provided by the embodiment of the present invention;
FIGS. 3a-3g are schematic diagrams of fusion results obtained using existing algorithms;
fig. 3h is a schematic diagram of a fusion result obtained by using a multi-focus image fusion method based on super-pixel clustering and joint low-rank representation provided by the embodiment of the present invention.
Detailed Description
As shown in fig. 1, the multi-focus image fusion method based on super-pixel clustering and joint low-rank representation provided by the embodiment of the present invention includes the following steps:
s101, receiving a registered source image I A And I B Wherein, I A ∈R m×n ,I B ∈R m×n M denotes the source image I A And I B N represents I A And I B Of (c) is measured.
Further, two registered source images I A And I B Equal in size, the contents correspond to each other, and there is no geometric deformation.
S102, using spectral clustering algorithm to pair I A Or I B Performing superpixel segmentation on any source image, mapping the superpixel mark matrix obtained after segmentation into another source image, and enabling the I to be in a form of a matrix A And I B Having the same number of superpixel blocks and the superpixel blocks at the same position have the same shape and size, resulting in the same shape and size as I A And I B Corresponding set of superpixels { sp A,i I =1,2, …, N } and { sp B,i I =1,2, …, N }, where N represents the number of superpixels.
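For illustration, a minimal sketch of this step in Python is given below. The patent's own spectral-clustering superpixel segmentation is not reproduced; SLIC from scikit-image stands in for it, and the function name, n_segments and compactness values are illustrative assumptions rather than the patented method.

```python
import numpy as np
from skimage.segmentation import slic

def segment_superpixels(img_a: np.ndarray, n_segments: int = 600):
    """Superpixel segmentation of one source image; the label matrix is
    then reused for the other registered source image (SLIC stands in
    for the patent's spectral-clustering segmentation)."""
    labels = slic(img_a, n_segments=n_segments, compactness=10, start_label=0)
    n_sp = int(labels.max()) + 1
    return labels, n_sp
```

Because the source images are registered and of equal size, reusing the label matrix of one image for the other is precisely the label mapping of S102: both images then share the same number of super-pixel blocks with identical shapes and positions.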
S103, clustering the super-pixels of $I_A$ and $I_B$.
S1031, for the super-pixel set $\{sp_{A/B,i} \mid i=1,2,\dots,N\}$ of either image $I_A$ or $I_B$, extracting the R, G and B values of all pixel points in the super-pixel set and computing the mean of each to obtain the color feature of every super-pixel, then clustering the super-pixels of $I_A$ or $I_B$ with the K-means algorithm on these color features to obtain a clustering result.
S1032, mapping the clustering result onto the super-pixel set of the other source image to obtain the two super-pixel class sets $\{C_{A,k} \mid k=1,2,\dots,K\}$ and $\{C_{B,k} \mid k=1,2,\dots,K\}$ corresponding to $I_A$ and $I_B$, where $K$ denotes the number of clusters.
s104, for I A And I B Performing feature extraction to construct each superpixel class C in the source image A/B,k Corresponding feature matrix X A/B,k
S105, constructing a dictionary.
S1051, using the formula
$$I_M(x,y) = \tfrac{1}{2}\big(I_A(x,y) + I_B(x,y)\big)$$
to obtain the mean image $I_M(x,y)$ of $I_A$ and $I_B$.
S1052, using the formula
$$I_N(x,y) = I_M(x,y) * G_\sigma(x,y),$$
where $G_\sigma$ is a Gaussian kernel and $*$ denotes convolution, applying Gaussian blur to the mean image to obtain the Gaussian-blurred image $I_N(x,y)$; extracting the feature values of $I_N(x,y)$ and constructing the feature matrix of the mean image.
S1053, reducing the dimension of the feature matrix with the principal component analysis (PCA) technique to obtain the low-rank representation dictionary $D$.
S106, constructing a joint low-rank representation model and performing joint low-rank representation on the feature matrices corresponding to the super-pixel classes.
S107, solving the joint low-rank representation model with the linearized alternating direction method with adaptive penalty, and computing the low-rank representation coefficients $Z_A$ and error matrix $E_A$ of $I_A$, and the low-rank representation coefficients $Z_B$ and error matrix $E_B$ of $I_B$.
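The patent names the solver but does not spell it out; in a linearized alternating direction method with adaptive penalty, the two characteristic subproblems of this model have the closed-form proximal steps sketched below — singular value thresholding for the nuclear-norm term and column-wise shrinkage for the $\ell_{2,1}$ term. The surrounding iteration (linearization, Lagrange multiplier and adaptive penalty updates) is omitted.

```python
import numpy as np

def svt(mat: np.ndarray, tau: float) -> np.ndarray:
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt

def l21_shrink(mat: np.ndarray, tau: float) -> np.ndarray:
    """Column-wise shrinkage: proximal operator of tau * l_{2,1} norm."""
    norms = np.linalg.norm(mat, axis=0, keepdims=True)
    return mat * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
```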
S108, designing the fusion rule and constructing the fusion decision label map.
S1081, constructing a measure-of-focus factor from the low-rank representation coefficients $Z$ and error matrix $E$ by weighted summation, namely $MOF_{A/B,i} = \eta\,\|z_{A/B,i}\|_2 + (1-\eta)\,\|e_{A/B,i}\|_2$, where $z_{A/B,i}$ and $e_{A/B,i}$ are the $i$-th rows of $Z_{A/B}$ and $E_{A/B}$ respectively, $\|\cdot\|_2$ denotes the vector $\ell_2$ norm, and $\eta$ is a weight.
S1082, using the measure-of-focus factor to construct a fusion label map $\Upsilon(x,y)$ of the same size as the source images, where
$$\Upsilon(x,y) = \begin{cases} 1, & MOF_{A,i} > MOF_{B,i} \text{ and } (x,y) \in sp_i, \\ 0, & \text{otherwise.} \end{cases}$$
as a specific embodiment, y (x, y) is a primary focusing decision diagram, which represents the corresponding position of the focusing region in the source image, and the edge of the focusing label of the decision diagram can be processed by removing the isolated small region in the focusing decision diagram, and then using the image matting principle to process the edge of the focusing label of the decision diagram, so that the edge is more accurate, and finally using the guiding filtering to process the edge, so as to obtain the final focusing decision diagram.
S109, according to the formula $I_F(x,y) = \Upsilon(x,y)\, I_A(x,y) + (1-\Upsilon(x,y))\, I_B(x,y)$, constructing the final fused image.
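Putting S108 and S109 together, a hedged sketch: the coefficient and error vectors of superpixel i are assumed to be stored as column i of Z and E (the row/column convention of the patent may differ), and eta = 0.7 is an illustrative weight:

```python
import numpy as np

def fuse(img_a, img_b, labels, z_a, e_a, z_b, e_b, eta: float = 0.7):
    """MOF per superpixel, fusion label map, and weighted combination I_F."""
    def mof(z, e):
        # eta * ||z_i||_2 + (1 - eta) * ||e_i||_2 for every superpixel i
        return eta * np.linalg.norm(z, axis=0) + (1 - eta) * np.linalg.norm(e, axis=0)

    y = (mof(z_a, e_a) > mof(z_b, e_b))[labels].astype(float)  # label map Y(x, y)
    if img_a.ndim == 3:
        y = y[..., None]                    # broadcast over color channels
    return y * img_a + (1.0 - y) * img_b    # I_F = Y*I_A + (1 - Y)*I_B
```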
Optionally, performing feature extraction on $I_A$ and $I_B$ to construct the feature matrix $X_{A/B,k}$ corresponding to each super-pixel class $C_{A/B,k}$ in the source images comprises:
for each pixel point $p_{A/B,j}$ in the source image, establishing a feature vector $v_{A/B,j} \in \mathbb{R}^d$, where $d = 44$ is the dimension of the feature vector, comprising color, edge and texture features: the color feature is a 6-dimensional feature comprising the RGB components and the HSI components; the edge feature is an 18-dimensional feature generated by high-pass filter responses and discrete wavelet transform edge operators; and the texture feature is a 20-dimensional feature in 4 directions generated by the gray-level co-occurrence matrix;
for each super-pixel $sp_{A/B,i}$ of $I_A$ and $I_B$, establishing a feature vector $x_{A/B,i} \in \mathbb{R}^d$, obtained by averaging the feature vectors of the pixels contained in the super-pixel, i.e.
$$x_{A/B,i} = \frac{1}{|sp_{A/B,i}|}\sum_{p_j \in sp_{A/B,i}} v_{A/B,j},$$
where $i$ denotes the $i$-th super-pixel and $|sp_{A/B,i}|$ denotes the number of pixel points $p_j$ contained in the $i$-th super-pixel;
constructing a feature matrix for each super-pixel class: the feature vectors of the super-pixels contained in each class $C_{A/B,k}$ are collected and arranged as columns, giving the feature matrix $X_{A/B,k} \in \mathbb{R}^{d\times N_k}$ corresponding to that class, i.e.
$$X_{A/B,k} = \big[x_{A/B,k,1},\, x_{A/B,k,2},\, \dots,\, x_{A/B,k,N_k}\big],$$
where $x_{A/B,k,i}$ denotes the feature vector corresponding to the $i$-th super-pixel $sp_{A/B,i}$ in the $k$-th class $C_{A/B,k}$ of the source image and $N_k$ is the number of super-pixels in class $k$.
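A minimal sketch of the two aggregation steps just described, assuming the per-pixel features have already been computed as an H x W x d array; names are illustrative:

```python
import numpy as np

def superpixel_features(pixel_feats: np.ndarray, labels: np.ndarray, n_sp: int):
    """Average the d-dimensional per-pixel feature vectors inside each superpixel."""
    x = np.zeros((pixel_feats.shape[-1], n_sp))
    for i in range(n_sp):
        x[:, i] = pixel_feats[labels == i].mean(axis=0)   # x_i = mean of v_j in sp_i
    return x

def class_feature_matrices(x: np.ndarray, classes: np.ndarray, k: int):
    """Gather the columns of each superpixel class into its matrix X_k."""
    return [x[:, classes == c] for c in range(k)]
```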
Optionally, constructing a joint low-rank representation model and performing joint low-rank representation on the feature matrices corresponding to the super-pixel classes comprises:
applying a joint low-rank constraint to the different super-pixel classes with a low-rank representation model and, on the premise that $X_k = DZ_k + E_k$ is satisfied, combining it with a Laplacian constraint term enforcing intra-class similarity and spatial-position consistency, which gives the joint low-rank representation model
$$\min_{Z,E}\ \sum_{k=1}^{K} \|Z_k\|_* + \alpha\,\|E\|_{2,1} + \beta\,\mathrm{tr}\!\left(Z L Z^{T}\right) \quad \text{s.t.}\quad X_k = D Z_k + E_k,\ k = 1,2,\dots,K,$$
where $DZ_k$ denotes the intrinsic low-rank part contained in the feature matrix $X_k$, $Z_k \in \mathbb{R}^{M\times N_k}$ is the low-rank representation coefficient matrix to be solved, $E_k \in \mathbb{R}^{d\times N_k}$ denotes the error or noise part of the matrix, $\|Z_k\|_*$ denotes the nuclear-norm constraint on the coefficients $Z_k$, $K$ denotes the number of super-pixel classes, $\sum_{k=1}^{K}\|Z_k\|_*$ imposes the joint low-rank constraint on the representation coefficients of the $K$ clustered classes, the matrices $Z \in \mathbb{R}^{M\times N}$ and $E \in \mathbb{R}^{d\times N}$ are defined as $Z = [Z_1, Z_2, \dots, Z_K]$ and $E = [E_1, E_2, \dots, E_K]$, $\|E\|_{2,1}$ denotes the $\ell_{2,1}$ norm of $E$, $\alpha$ is a trade-off factor controlling the sparse error $E$, and $\beta$ is a trade-off factor controlling the similarity-consistency constraint term
$$\mathrm{tr}\!\left(Z L Z^{T}\right) = \frac{1}{2}\sum_{t=1}^{N}\sum_{m=1}^{N}\omega_{t,m}\,\|z_t - z_m\|_2^2,$$
in which $Z^T$ denotes the transpose of $Z$, $z_t$ denotes the $t$-th column of the low-rank representation coefficient $Z$, $t \in [1,N]$, $z_m$ denotes the $m$-th column, $m \in [1,N]$, and $\omega_{t,m}$ denotes the similarity of the $t$-th and $m$-th super-pixels, $k = 1,2,\dots,K$, where
$$\omega_{t,m} = \exp\!\left(-\frac{\|x_t - x_m\|_2^2}{\sigma^2}\right),$$
in which $x_t$ denotes the $t$-th column of the feature matrix $X$, $x_m$ denotes the feature vector corresponding to the $m$-th super-pixel, and $\sigma$ is a scale parameter. Using $\omega_{t,m}$, a similarity matrix $W \in \mathbb{R}^{N\times N}$ and a diagonal matrix $H \in \mathbb{R}^{N\times N}$ are established, where the $(t,m)$-th entry of $W$ is $W_{t,m} = \omega_{t,m}$ and the $t$-th diagonal element of $H$ is $H_{t,t} = \sum_m W_{t,m}$; from $W$ and $H$, the Laplacian matrix $L$ can be established, i.e. $L = H - W$.
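The similarity matrix W, degree matrix H and Laplacian L = H - W follow directly from the column-wise superpixel features; a minimal sketch, with sigma the scale parameter above:

```python
import numpy as np

def laplacian(x: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Build L = H - W from the d x N feature matrix x (one superpixel per column)."""
    sq = np.sum(x * x, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (x.T @ x)   # ||x_t - x_m||^2 pairwise
    w = np.exp(-np.maximum(d2, 0.0) / sigma ** 2)      # similarity w_{t,m}
    h = np.diag(w.sum(axis=1))                         # degree H_{t,t} = sum_m W_{t,m}
    return h - w
```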
The effect of the multi-focus image fusion method based on super-pixel clustering and joint low-rank representation provided by the embodiment of the invention is explained in detail by combining specific experiments.
Conditions of the experiment
The simulation of the multi-focus image fusion method based on super-pixel clustering and joint low-rank representation provided by the embodiment of the invention was implemented on a hardware environment with an Intel(R) Core(TM) i7 CPU at a main frequency of 3 GHz and 8 GB of memory, under the MATLAB R2017a software environment.
Content of the experiment
Experiment 1:
as shown in fig. 2, fig. 2a to 2d are fusion results obtained by using a Convolutional Neural Network (CNN), a guided filtering (GFF), a Sparse Representation (SR), and a robust sparse representation combined laplacian consistency constraint (LR _ RSR), respectively, and fig. 2e is a fusion result obtained by using a multi-focus image fusion method based on superpixel clustering and combined low-rank representation provided by the embodiment of the present invention. The experimental result shows that compared with the existing image fusion algorithms, the multi-focus image fusion method based on the super-pixel clustering and the combined low-rank representation provided by the embodiment of the invention has the advantages that the fusion result does not contain block or regional blur, and the marginal region of the focusing mark of the decision diagram is more natural and accurate, so that the multi-focus image fusion method based on the super-pixel clustering and the combined low-rank representation provided by the embodiment of the invention can achieve better fusion effect.
Experiment 2:
Figs. 3a to 3g show the fusion results obtained with existing algorithms, and Fig. 3h shows the fusion result obtained with the multi-focus image fusion method based on super-pixel clustering and joint low-rank representation provided by the embodiment of the invention. Comparing the subjective focus decision maps in Figs. 3a to 3h shows that the proposed method achieves a more accurate fusion result. Meanwhile, the fusion results are compared with six objective evaluation indexes: Petrovic's metric ($Q^{AB/F}$), feature mutual information (FMI), information retention ($Q_{uiqi}$), normalized mutual information ($Q_{MI}$), structural similarity ($Q_{SSIM}$) and phase consistency ($Q_4$); for all six indexes, a larger value represents a better fusion effect. The experimental results are shown in Table 1.
Table 1. Objective evaluation scores of the compared fusion methods under the six indexes $Q^{AB/F}$, FMI, $Q_{uiqi}$, $Q_{MI}$, $Q_{SSIM}$ and $Q_4$.
From Table 1 it can be seen that the performance of the multi-focus image fusion algorithm designed by the invention is clearly superior to that of the other existing fusion algorithms in the comparison, and hence the invention obtains a more accurate multi-focus image fusion result.
The embodiment of the invention provides a multi-focus image fusion method based on super-pixel clustering and joint low-rank representation. The method receives registered source images $I_A$ and $I_B$; uses a spectral clustering algorithm to perform super-pixel segmentation on either source image and maps the super-pixel label matrix obtained after segmentation onto the other source image, obtaining the super-pixel sets corresponding to $I_A$ and $I_B$; clusters the super-pixels of $I_A$ and $I_B$; performs feature extraction on $I_A$ and $I_B$ to construct the feature matrix $X_{A/B,k}$ corresponding to each super-pixel class $C_{A/B,k}$ in the source images; constructs a dictionary and a joint low-rank representation model and performs joint low-rank representation on the feature matrices corresponding to the super-pixel classes; solves the joint low-rank representation model with the linearized alternating direction method with adaptive penalty, computing the low-rank representation coefficients $Z_A$ and error matrix $E_A$ of $I_A$ and the low-rank representation coefficients $Z_B$ and error matrix $E_B$ of $I_B$; and designs a fusion rule from the representation coefficients and error matrices to construct the final fused image. This eliminates the blocky blur and abrupt transition edges that appear in fusion results of the prior art and improves the quality and visual effect of image fusion.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be appreciated that the relevant features of the method and apparatus described above are referred to one another. In addition, "first", "second", and the like in the above embodiments are used to distinguish the embodiments, and do not represent merits of the embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In addition, the memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM) and/or non-volatile memory such as Read-Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
It should be noted that the above-mentioned embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the protection scope of the present invention.

Claims (3)

1. A multi-focus image fusion method based on super-pixel clustering and joint low-rank representation is characterized by comprising the following steps:
(1) receiving registered source images $I_A$ and $I_B$, where $I_A \in \mathbb{R}^{m\times n}$, $I_B \in \mathbb{R}^{m\times n}$, $m$ denotes the width of $I_A$ and $I_B$, and $n$ denotes their height;
(2) using a spectral clustering algorithm, performing super-pixel segmentation on either of the source images $I_A$ or $I_B$, and mapping the super-pixel label matrix obtained after segmentation onto the other source image, so that $I_A$ and $I_B$ have the same number of super-pixel blocks and the super-pixel blocks at the same position have the same shape and size, yielding the super-pixel sets $\{sp_{A,i} \mid i=1,2,\dots,N\}$ and $\{sp_{B,i} \mid i=1,2,\dots,N\}$ corresponding to $I_A$ and $I_B$, where $N$ denotes the number of super-pixels;
(3) clustering the super-pixels of $I_A$ and $I_B$:
(31) for the super-pixel set $\{sp_{A/B,i} \mid i=1,2,\dots,N\}$ of either image $I_A$ or $I_B$, extracting the R, G and B values of all pixel points in the super-pixel set and computing the mean of each to obtain the color feature of every super-pixel, then clustering the super-pixels of $I_A$ or $I_B$ with the K-means algorithm on these color features to obtain a clustering result;
(32) mapping the clustering result onto the super-pixel set of the other source image to obtain the two super-pixel class sets $\{C_{A,k} \mid k=1,2,\dots,K\}$ and $\{C_{B,k} \mid k=1,2,\dots,K\}$ corresponding to $I_A$ and $I_B$, where $K$ denotes the number of clusters;
(4) performing feature extraction on $I_A$ and $I_B$ to construct the feature matrix $X_{A/B,k}$ corresponding to each super-pixel class $C_{A/B,k}$ in the source images;
(5) constructing a dictionary:
(51) using the formula
$$I_M(x,y) = \tfrac{1}{2}\big(I_A(x,y) + I_B(x,y)\big)$$
to obtain the mean image $I_M(x,y)$ of $I_A$ and $I_B$;
(52) using the formula
$$I_N(x,y) = I_M(x,y) * G_\sigma(x,y),$$
where $G_\sigma$ is a Gaussian kernel and $*$ denotes convolution, applying Gaussian blur to the mean image to obtain the Gaussian-blurred image $I_N(x,y)$, extracting the feature values of $I_N(x,y)$, and constructing the feature matrix of the mean image;
(53) reducing the dimension of the feature matrix with the principal component analysis (PCA) technique to obtain the low-rank representation dictionary $D$;
(6) constructing a joint low-rank representation model and performing joint low-rank representation on the feature matrices corresponding to the super-pixel classes;
(7) solving the joint low-rank representation model with the linearized alternating direction method with adaptive penalty, and computing the low-rank representation coefficients $Z_A$ and error matrix $E_A$ of $I_A$, and the low-rank representation coefficients $Z_B$ and error matrix $E_B$ of $I_B$;
(8) constructing the fusion decision label map:
(81) constructing a measure-of-focus factor from the low-rank representation coefficients $Z$ and error matrix $E$ by weighted summation, namely $MOF_{A/B,i} = \eta\,\|z_{A/B,i}\|_2 + (1-\eta)\,\|e_{A/B,i}\|_2$, where $z_{A/B,i}$ and $e_{A/B,i}$ are the $i$-th rows of $Z_{A/B}$ and $E_{A/B}$ respectively, $\|\cdot\|_2$ denotes the vector $\ell_2$ norm, and $\eta$ is a weight;
(82) using the measure-of-focus factor to construct a fusion label map $\Upsilon(x,y)$ of the same size as the source images, where
$$\Upsilon(x,y) = \begin{cases} 1, & MOF_{A,i} > MOF_{B,i} \text{ and } (x,y) \in sp_i, \\ 0, & \text{otherwise;} \end{cases}$$
(9) according to the formula $I_F(x,y) = \Upsilon(x,y)\, I_A(x,y) + (1-\Upsilon(x,y))\, I_B(x,y)$, constructing the final fused image.
2. The multi-focus image fusion method based on super-pixel clustering and joint low-rank representation according to claim 1, characterized in that performing feature extraction on $I_A$ and $I_B$ to construct the feature matrix $X_{A/B,k}$ corresponding to each super-pixel class $C_{A/B,k}$ in the source images comprises:
for each pixel point $p_{A/B,j}$ in the source image, establishing a feature vector $v_{A/B,j} \in \mathbb{R}^d$, where $d = 44$ is the dimension of the feature vector, comprising color, edge and texture features: the color feature is a 6-dimensional feature comprising the RGB components and the HSI components; the edge feature is an 18-dimensional feature generated by high-pass filter responses and discrete wavelet transform edge operators; and the texture feature is a 20-dimensional feature in 4 directions generated by the gray-level co-occurrence matrix;
for each super-pixel $sp_{A/B,i}$ of $I_A$ and $I_B$, establishing a feature vector $x_{A/B,i} \in \mathbb{R}^d$, obtained by averaging the feature vectors of the pixels contained in the super-pixel, i.e.
$$x_{A/B,i} = \frac{1}{|sp_{A/B,i}|}\sum_{p_j \in sp_{A/B,i}} v_{A/B,j},$$
where $i$ denotes the $i$-th super-pixel and $|sp_{A/B,i}|$ denotes the number of pixels $p_j$ contained in the $i$-th super-pixel;
constructing a feature matrix for each super-pixel class: the feature vectors of the super-pixels contained in each class $C_{A/B,k}$ are collected and arranged as columns, giving the feature matrix $X_{A/B,k} \in \mathbb{R}^{d\times N_k}$ corresponding to that class, i.e.
$$X_{A/B,k} = \big[x_{A/B,k,1},\, x_{A/B,k,2},\, \dots,\, x_{A/B,k,N_k}\big],$$
where $x_{A/B,k,i}$ denotes the feature vector corresponding to the $i$-th super-pixel $sp_{A/B,i}$ in the $k$-th class $C_{A/B,k}$ of the source image and $N_k$ is the number of super-pixels in class $k$.
3. The multi-focus image fusion method based on super-pixel clustering and joint low-rank representation according to claim 1, characterized in that constructing a joint low-rank representation model and performing joint low-rank representation on the feature matrices corresponding to the super-pixel classes comprises:
applying a joint low-rank constraint to the different super-pixel classes with a low-rank representation model and combining it with a Laplacian constraint term enforcing intra-class similarity and spatial-position consistency, obtaining the joint low-rank representation model
$$\min_{Z,E}\ \sum_{k=1}^{K} \|Z_k\|_* + \alpha\,\|E\|_{2,1} + \beta\,\mathrm{tr}\!\left(Z L Z^{T}\right) \quad \text{s.t.}\quad X_k = D Z_k + E_k,\ k = 1,2,\dots,K,$$
where $DZ_k$ denotes the intrinsic low-rank part contained in the feature matrix $X_k$, $Z_k \in \mathbb{R}^{M\times N_k}$ is the low-rank representation coefficient matrix to be solved, $E_k \in \mathbb{R}^{d\times N_k}$ denotes the error or noise part of the matrix, $\|Z_k\|_*$ denotes the nuclear-norm constraint on the coefficients $Z_k$, $K$ denotes the number of super-pixel classes, $\sum_{k=1}^{K}\|Z_k\|_*$ imposes the joint low-rank constraint on the representation coefficients of the $K$ clustered classes, the matrices $Z \in \mathbb{R}^{M\times N}$ and $E \in \mathbb{R}^{d\times N}$ are defined as $Z = [Z_1, Z_2, \dots, Z_K]$ and $E = [E_1, E_2, \dots, E_K]$, $\|E\|_{2,1}$ denotes the $\ell_{2,1}$ norm of $E$, $\alpha$ is a trade-off factor controlling the sparse error $E$, and $\beta$ is a trade-off factor controlling the similarity-consistency constraint term
$$\mathrm{tr}\!\left(Z L Z^{T}\right) = \frac{1}{2}\sum_{t=1}^{N}\sum_{m=1}^{N}\omega_{t,m}\,\|z_t - z_m\|_2^2,$$
in which $Z^T$ denotes the transpose of $Z$, $z_t$ denotes the $t$-th column of the low-rank representation coefficient $Z$, $t \in [1,N]$, $z_m$ denotes the $m$-th column, $m \in [1,N]$, and $\omega_{t,m}$ denotes the similarity of the $t$-th and $m$-th super-pixels, where
$$\omega_{t,m} = \exp\!\left(-\frac{\|x_t - x_m\|_2^2}{\sigma^2}\right),$$
in which $x_t$ denotes the $t$-th column of the feature matrix $X$, $x_m$ denotes the feature vector corresponding to the $m$-th super-pixel, and $\sigma$ is a scale parameter. Using $\omega_{t,m}$, a similarity matrix $W \in \mathbb{R}^{N\times N}$ and a diagonal matrix $H \in \mathbb{R}^{N\times N}$ are established, where the $(t,m)$-th entry of $W$ is $W_{t,m} = \omega_{t,m}$ and the $t$-th diagonal element of $H$ is $H_{t,t} = \sum_m W_{t,m}$; from $W$ and $H$, the Laplacian matrix $L$ can be established, i.e. $L = H - W$.
CN201910318421.1A 2019-04-19 2019-04-19 Multi-focus image fusion method based on super-pixel clustering and combined low-rank representation Active CN110097530B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910318421.1A CN110097530B (en) 2019-04-19 2019-04-19 Multi-focus image fusion method based on super-pixel clustering and combined low-rank representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910318421.1A CN110097530B (en) 2019-04-19 2019-04-19 Multi-focus image fusion method based on super-pixel clustering and combined low-rank representation

Publications (2)

Publication Number Publication Date
CN110097530A (en) 2019-08-06
CN110097530B (en) 2023-03-24

Family

ID=67445350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910318421.1A Active CN110097530B (en) 2019-04-19 2019-04-19 Multi-focus image fusion method based on super-pixel clustering and combined low-rank representation

Country Status (1)

Country Link
CN (1) CN110097530B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947554B (en) * 2020-07-17 2023-07-14 四川大学 Multi-focus image fusion method based on NSST and significant information extraction
CN115063413B (en) * 2022-08-04 2022-11-11 宁波鑫芯微电子科技有限公司 Feature extraction method for abnormal data of super-large-scale wafer

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574534A (en) * 2015-12-17 2016-05-11 西安电子科技大学 Significant object detection method based on sparse subspace clustering and low-order expression
CN108510465A (en) * 2018-01-30 2018-09-07 西安电子科技大学 The multi-focus image fusing method indicated based on consistency constraint non-negative sparse

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10181188B2 (en) * 2016-02-19 2019-01-15 International Business Machines Corporation Structure-preserving composite model for skin lesion segmentation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574534A (en) * 2015-12-17 2016-05-11 西安电子科技大学 Significant object detection method based on sparse subspace clustering and low-order expression
CN108510465A (en) * 2018-01-30 2018-09-07 西安电子科技大学 The multi-focus image fusing method indicated based on consistency constraint non-negative sparse

Also Published As

Publication number Publication date
CN110097530A (en) 2019-08-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant