CN108985320A - Multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition - Google Patents

Multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition

Info

Publication number
CN108985320A
Authority
CN
China
Prior art keywords
image
texture
cartoon
dictionary
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810546687.7A
Other languages
Chinese (zh)
Other versions
CN108985320B (en)
Inventor
李华锋
严双林
王棠
王一棠
余正涛
王红斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology
Priority to CN201810546687.7A
Publication of CN108985320A
Application granted
Publication of CN108985320B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques

Abstract

The invention proposes a multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition. To separate the cartoon and texture components of different morphological structure in the source images, the decomposition of an image is recast as a classification problem, and a cartoon-texture discriminative dictionary learning model is designed. Since image decomposition depends not only on the dictionaries but also on the decomposition strategy, a new image decomposition model is also devised. In this model, the texture component is regarded as noise superimposed on the cartoon component of the source image, and a consistency regularization term based on non-local mean similarity is introduced to constrain the solution space of the sparse coding coefficients. Finally, the coding coefficients of the fused image are selected by the maximum l1-norm rule applied to the coding coefficients of the corresponding components. The results show that the proposed method achieves better fusion performance in terms of both visual effect and objective indicators.

Description

Multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition
Technical field
The present invention relates to a multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition, and belongs to the technical field of image fusion and data processing.
Background technique
Because the image information obtained by a single sensor is limited, it is difficult to describe an object accurately. To solve this problem, image fusion technology can be used to integrate information about the same scene acquired by different sensors and generate a single description of that scene, a description that cannot be obtained from any single source image. Since the technique effectively integrates the complementary information captured by different sensors and provides a more accurate description of the observed object, it has been successfully applied in fields such as medical imaging, machine vision, remote sensing, and security surveillance.
In recent years, image fusion technology has received extensive attention from researchers, and many effective fusion methods have been proposed. Among them, fusion methods based on multi-scale transforms are the most representative. The most commonly used multi-scale transform is the discrete wavelet transform (DWT). However, the DWT is not shift-invariant and easily introduces spurious information into the fusion result. In addition, the DWT can represent the high-frequency detail of an image in only three directions and cannot effectively express information such as edges, contours, and curves. To this end, several better-performing multi-scale geometric analysis tools have been proposed, including Ridgelet, Curvelet, Contourlet, and the nonsubsampled Contourlet transform (NSCT). Among these, NSCT is not only multi-directional and anisotropic but also shift-invariant, so it has been widely applied in image fusion and shows excellent fusion performance. However, fusion methods based on multi-scale transforms are not robust to misregistration of the source images. Moreover, the bases used by multi-scale analysis methods to express the different structural features of an image are fixed and non-adaptive, making a precise representation of complex structures difficult.
Compared with traditional fusion methods based on multi-scale decomposition, image fusion based on sparse representation has received extensive attention because it represents the content of the source images more effectively. In this class of methods, learning and constructing an over-complete dictionary is one of the key factors affecting the quality of the final fused image. In general, an over-complete dictionary can be constructed in two ways: analytically, or by learning. Dictionaries built analytically include the DCT, wavelet, and Curvelet dictionaries; such dictionaries usually cannot adaptively express the complex structural information of natural images. In contrast, a dictionary learned from a set of training samples has stronger expressive power. The most representative dictionary learning method is the K-SVD algorithm, which has been widely applied in sparse-representation-based image fusion.
In recent years, learning more compact dictionaries while preserving their expressive power has attracted researchers' attention, and a series of dictionary-learning-based image fusion and restoration methods have been proposed. In these methods, however, the different components of an image are represented by a single dictionary. Because the components of an image have different morphological features, a single dictionary can hardly express all of them effectively; solving this with one dictionary requires a larger dictionary, which increases the computational burden of the algorithm and reduces its efficiency. To overcome this problem, researchers proposed image fusion methods based on morphological component analysis, in which the texture and cartoon components of an image are expressed by DCT and Curvelet dictionaries, respectively. However, since such dictionaries are constructed analytically, their adaptability is poor and they cannot effectively capture very complex image structure. More importantly, the decomposition and representation of the image components depend not only on the performance of the dictionaries but also on the decomposition model of the image, a key factor that traditional fusion methods based on sparse representation and dictionary learning do not consider.
Summary of the invention
The object of the invention is to address the shortcomings and deficiencies of the prior art by proposing a multi-source image fusion technical scheme based on discriminative dictionary learning and morphological component decomposition.
The technical scheme adopted by the invention is a multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition, comprising the following steps:
Step 1: training sample data are first collected and decomposed to obtain cartoon training data and texture training data. A diversified set of training samples is constructed, including portrait images, medical images, food images, and so on, to ensure that the learned dictionaries are well discriminative. A number of grayscale images are collected from the Internet as training samples, and the training data are then gathered in the form of a sliding window, each n × n window yielding one n² × 1 column vector, where n is the size of the sliding window. The collected data are decomposed by the MCA algorithm to obtain the cartoon training data and the texture training data, each a matrix with n² rows.
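For concreteness, the sliding-window collection described above can be sketched in Python as follows; the function name, the stride parameter, and the row-major vectorization order are illustrative assumptions, and the MCA decomposition applied to the collected columns is not shown:

```python
import numpy as np

def collect_patches(img, n=8, stride=1):
    """Slide an n x n window over a grayscale image and stack each window
    as one n^2 x 1 column, giving the n^2 x K data matrix the text describes."""
    H, W = img.shape
    cols = [img[i:i + n, j:j + n].reshape(n * n)
            for i in range(0, H - n + 1, stride)
            for j in range(0, W - n + 1, stride)]
    return np.stack(cols, axis=1)   # shape (n*n, K)
```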
Step 2: the cartoon training data and the texture training data are learned separately by the K-SVD algorithm to obtain an initial cartoon dictionary and an initial texture dictionary; the cartoon dictionary Dc and the texture dictionary Dt are then obtained by training the discriminative dictionary learning model from the initial cartoon dictionary and the initial texture dictionary.
The discriminative dictionary learning model proposed by the present invention is given by formula (1):
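The formula image for (1) is not reproduced in this text. A plausible reconstruction, assembled only from the variable definitions that follow (data fidelity in the Frobenius norm, l1 sparsity on both coefficient sets, an incoherence penalty between the two dictionaries via the transposition T, and a gradient penalty on the cartoon component via ∇), is:

$$
\min_{D_c,D_t,A_c,A_t}\ \|Y - D_cA_c - D_tA_t\|_F^2
+ \lambda\big(\|A_c\|_1 + \|A_t\|_1\big)
+ \lambda_1\big\|D_t^{T}D_c\big\|_F^2
+ \lambda_2\big\|\nabla(D_cA_c)\big\|_2^2
\tag{1}
$$

The grouping of the λ-weighted terms is an assumption; only the ingredients of the model, not their exact arrangement, can be recovered from the surviving text.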
In the formula, Y ∈ R^{m×n} is the matrix formed by the column vectors collected by the sliding window, R is the real space, m is the dimension of the vector space, and n is the number of image blocks. Dt and Dc denote the texture dictionary and the cartoon dictionary, respectively. At = [α_{1,t}, α_{2,t}, …, α_{L,t}] is the texture sparse coding coefficient matrix of the texture training data, where α_{l,t} is the sparse coding coefficient of the texture component of the l-th image block; Ac = [α_{1,c}, α_{2,c}, …, α_{L,c}] is the cartoon sparse coding coefficient matrix of the cartoon training data, where α_{l,c} is the sparse coding coefficient of the cartoon component of the l-th image block. The matrix DcAc is the cartoon component separated from Y, and DtAt is the texture component separated from Y. T denotes matrix transposition, ∇ is the gradient operator, λ, λ₁, λ₂ are balance parameters, ||·||_F is the Frobenius norm operator, ||·||₁ is the l₁ norm operator, and ||·||₂ is the l₂ norm operator.
Step 2.1: optimization of the dictionary learning model, which requires solving for the variables Dt, Dc, At, Ac. When the other variables are fixed, optimization problem (1) is convex in the remaining one, so problem (1) can be solved by an alternating iteration method.
Step 2.1.1: first, the optimal coding coefficients At and Ac are solved. The objective function for solving At can be written as formula (2):
Formula (2) is a typical l₁-norm optimization problem and can be solved with an iterative shrinkage algorithm.
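As a concrete illustration of the iterative shrinkage algorithm invoked here, the following Python sketch solves the generic problem min_A ||Y − DA||_F² + λ||A||₁; the function name and the fixed iteration count are illustrative assumptions, not part of the patent:

```python
import numpy as np

def ista(Y, D, lam, n_iter=200):
    """Iterative shrinkage-thresholding for min_A ||Y - D A||_F^2 + lam*||A||_1."""
    L = 2.0 * np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        G = A - (2.0 / L) * (D.T @ (D @ A - Y))    # gradient step on the quadratic term
        A = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)  # soft-thresholding
    return A
```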
The objective function for solving Ac can be written as formula (3):
For Ac, two auxiliary variables are first introduced, each constrained to equal Ac; substituting the auxiliary variables into (3) gives:
The two auxiliary variables and Ac can then be obtained by solving minimization problems (4), (5), and (6), respectively. Evidently, optimization problems (4) and (6) can be solved by an iterative shrinkage algorithm, and formula (5) can be solved directly by gradient descent.
Step 2.1.2: after the sparse coding coefficients At and Ac have been updated, Dt can be obtained by solving optimization problem (7):
Subtracting the fixed cartoon contribution from the data term, optimization problem (7) can be rewritten as formula (8):
This problem is a standard least-squares problem and has a closed-form solution of the following form:
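The closed-form image for (9) is not reproduced in this text. Treating (8) as the plain least-squares problem the text describes, with the cartoon contribution subtracted from the data (an assumption), the standard solution is:

$$
D_t = \tilde{Y}A_t^{T}\big(A_tA_t^{T} + \epsilon I\big)^{-1},
\qquad \tilde{Y} = Y - D_cA_c,
\tag{9}
$$

where the small εI is a conventional regularizer for numerical stability and may not appear in the patent's original formula.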
Step 2.1.3: similarly, Dt, At, and Ac can be fixed to solve Dc. The objective function for Dc is formula (10):
For convenience of solution, auxiliary variables Z and h are introduced such that Z = DcAc; optimization problem (10) can then be rewritten as formula (11):
From this, the objective functions for solving the optimal Xc and the auxiliary variable g are formulas (12) and (13), respectively:
Optimization problem (12) can be solved by gradient descent, while optimization problem (13) is a standard l₁-norm optimization problem and can be solved by an iterative shrinkage algorithm.
Similarly, the objective function for solving the optimal dictionary Dc can be written as formula (14):
Since the above problem is an optimization posed in the Frobenius norm, it has a closed-form solution, formula (15), in which the quantities are as defined above.
All of the above solution procedures are iterated to obtain the optimal solutions. At the first iteration, the two input dictionaries are the initial cartoon and texture dictionaries learned by the K-SVD algorithm. After the texture dictionary Dt is obtained from formula (9), formulas (10)-(15) are substituted to solve the remaining variables, with the introduced auxiliary variables initialized to 0. In the second iteration, all variables take the values obtained from the first iteration update, and so on until convergence.
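The alternating update just described can be summarized by the following Python sketch; the four callables are hypothetical stand-ins for the sub-problem solvers of formulas (2), (4)-(6), (9), and (10)-(15):

```python
import numpy as np

def learn_dictionaries(Y, Dt, Dc, solve_At, solve_Ac, update_Dt, update_Dc,
                       n_outer=30):
    """Alternating optimization of model (1). Dt, Dc are the K-SVD
    initializations; each callable solves one sub-problem with the
    other variables held fixed."""
    At = np.zeros((Dt.shape[1], Y.shape[1]))   # coding coefficients start at zero,
    Ac = np.zeros((Dc.shape[1], Y.shape[1]))   # as do the auxiliaries (per the text)
    for _ in range(n_outer):
        At = solve_At(Y, Dt, Dc, Ac)           # texture coefficients, problem (2)
        Ac = solve_Ac(Y, Dt, Dc, At)           # cartoon coefficients, problems (4)-(6)
        Dt = update_Dt(Y, Dc, At, Ac)          # texture dictionary, closed form (9)
        Dc = update_Dc(Y, Dt, At, Ac)          # cartoon dictionary, problems (10)-(15)
    return Dt, Dc, At, Ac
```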
Step 3: the images to be fused are preprocessed. White Gaussian noise is first added, and the data of the images to be fused are then collected in the form of a sliding window, each n × n window yielding one n² × 1 column vector, where n is the size of the sliding window. The collected data are decomposed by the MCA algorithm into a cartoon part and a texture part, two matrices with n² rows. To ensure that the different components are separated successfully, a new image decomposition model is proposed that decomposes the source image into a cartoon part and a texture part, computed by formula (16):
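The formula image for (16) is not reproduced in this text. A plausible reconstruction from the definitions that follow, with a non-local-means consistency term tying Ac to its estimate Âc, is (the arrangement of the η-weighted terms is an assumption):

$$
\min_{A_c,A_t}\ \|X - D_cA_c - D_tA_t\|_F^2
+ \eta\big(\|A_c\|_1 + \|A_t\|_1\big)
+ \eta_1\big\|\nabla(D_cA_c)\big\|_2^2
+ \eta_2\big\|A_c - \hat{A}_c\big\|_F^2
\tag{16}
$$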
Here X ∈ R^{m×n} is the matrix formed by the column vectors collected by the sliding window, R is the real space, and Dt, Dc denote the texture dictionary and cartoon dictionary learned in step 2. At = [α_{1,t}, α_{2,t}, …, α_{L,t}] is the texture sparse coding coefficient matrix of the texture data, where α_{l,t} is the sparse coding coefficient of the texture component of the l-th image block; Ac = [α_{1,c}, α_{2,c}, …, α_{L,c}] is the cartoon sparse coding coefficient matrix of the cartoon data, where α_{l,c} is the sparse coding coefficient of the cartoon component of the l-th image block. η, η₁, η₂ are balance parameters, ||·||_F is the Frobenius norm operator, ||·||₁ is the l₁ norm operator, and ||·||₂ is the l₂ norm operator. Âc is the matrix formed by the estimates α̂_c of the sparse coding coefficients α_c, calculated by formula (17):
Here Ω_i is the local region around the image block corresponding to coding coefficient α_{i,j}. Following the idea of the non-local means method, the calculation formula is (18):
Here α_{c,i} is the coding coefficient of the cartoon component of the i-th image block, α_{c,j} is the coding coefficient of the j-th block in the local region Ω_i around the i-th image block, H is a normalization factor, and h is a preset scale.
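Formulas (17) and (18) are likewise not reproduced in this text. Under the non-local-means reading of the definitions above, a standard reconstruction, offered as an assumption rather than the patent's exact equations, is:

$$
\hat{\alpha}_{c,i} = \sum_{j\in\Omega_i} w_{i,j}\,\alpha_{c,j},
\qquad
w_{i,j} = \frac{1}{H}\exp\!\left(-\frac{\|\alpha_{c,i}-\alpha_{c,j}\|_2^2}{h}\right),
\qquad
H = \sum_{j\in\Omega_i}\exp\!\left(-\frac{\|\alpha_{c,i}-\alpha_{c,j}\|_2^2}{h}\right).
\tag{17, 18}
$$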
Step 3.1: solution of the decomposition model. Image decomposition model (16) can likewise be solved by an alternating iteration method, so At and Ac can be obtained by solving the following optimization problems, respectively:
Obviously, minimization problem (19) can be solved by an iterative shrinkage algorithm; minimization problem (20) is first transformed and then solved with an alternating algorithm.
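Under the reconstruction of (16) given above, a plausible reading of the two sub-problems, stated as an assumption, is:

$$
A_t \leftarrow \arg\min_{A_t}\ \|X - D_cA_c - D_tA_t\|_F^2 + \eta\|A_t\|_1,
\tag{19}
$$

$$
A_c \leftarrow \arg\min_{A_c}\ \|X - D_cA_c - D_tA_t\|_F^2 + \eta\|A_c\|_1
+ \eta_1\big\|\nabla(D_cA_c)\big\|_2^2 + \eta_2\big\|A_c - \hat{A}_c\big\|_F^2.
\tag{20}
$$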
All of the above solution procedures are iterated until the optimal solutions are obtained.
Step 4: reconstruction of the fused image.
Suppose there are N multi-source images to be fused. With the cartoon dictionary Dc and texture dictionary Dt learned in step 2 and the decomposition model of step 3, the images to be fused are decomposed to obtain the sparse coding coefficients Ac and At of their cartoon and texture components. Let the coding coefficients of the l-th image block of the texture and cartoon components of the fused image be as denoted in the formulas below. The l₁-norm maximum rule is used to select the coding coefficients of each component of the fused image, with the following fusion scheme:
Here the l-th column, l ∈ {1, 2, …, K}, of the coding coefficient matrix of the texture component of the fused image is taken as the l-th column of the texture coefficient matrix At of the r-th image to be fused, a column vector, where r ∈ {1, 2, …, N} indexes the source whose column attains the maximum l₁ norm, N is the number of images to be fused, and K is the number of image blocks;
Likewise, the l-th column, l ∈ {1, 2, …, K}, of the coding coefficient matrix of the cartoon component of the fused image is taken as the l-th column of the cartoon coefficient matrix Ac of the s-th image to be fused, a column vector, where s ∈ {1, 2, …, N} indexes the source whose column attains the maximum l₁ norm, N is the number of images to be fused, and K is the number of image blocks.
After the fused texture and cartoon coefficient columns are obtained for all K image blocks, they form the coding coefficient matrices of the texture and cartoon components of the fused image. The fused cartoon and texture components are then obtained by multiplying these matrices by the cartoon and texture dictionaries, respectively, and the matrix formed by the block vectors of the fused image is the sum of the two products. Re-tiling this result into image blocks yields the final fused image.
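The l₁-max selection and reconstruction of step 4 can be sketched in Python as follows; names and shapes are illustrative assumptions, and the patch re-tiling (with averaging of overlapping pixels) is only indicated in a comment:

```python
import numpy as np

def fuse_coefficients(A_list):
    """l1-max rule: for each image block l, keep the coding vector with the
    largest l1 norm across the N source images.
    A_list: list of (dict_atoms, K) coefficient matrices, one per source."""
    A = np.stack(A_list)                                # (N, dict_atoms, K)
    winners = np.argmax(np.abs(A).sum(axis=1), axis=0)  # best source per block
    K = A.shape[2]
    return A[winners, :, np.arange(K)].T                # (dict_atoms, K)

# AtF = fuse_coefficients(At_list); AcF = fuse_coefficients(Ac_list)
# XF = Dc @ AcF + Dt @ AtF   # fused block vectors; re-tile the columns into
#                            # n x n blocks (averaging overlaps) for the image
```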
The principle of the present invention: in the method, the decomposition of an image is regarded as a classification problem, and an effective dictionary learning model is designed. The model considers not only the relationship between the dictionaries of the different components but also the morphological character of the cartoon component. In addition, so that the components of the image can be separated effectively, an effective image decomposition model is designed on the basis of the learned dictionaries. To improve the decomposability of the model, the texture component of the image is regarded as oscillatory noise superimposed on the cartoon component, and the similarity of non-local image blocks is introduced as a regularization term on the sparse coding of the cartoon component. Finally, the l₁-norm maximum rule is used to select the coding coefficients of each component of the fused image, forming the sparse coding coefficients of the fused image.
Beneficial effects of the present invention:
1. The present invention considers not only the incoherence between the dictionaries of different components but also their weak expressiveness for the other component, and introduces a gradient-minimization constraint as the regularization term of the cartoon component, so the cartoon dictionary has stronger expressive power.
2. The present invention considers not only the role of the component dictionaries in image decomposition but also the influence of the design of the decomposition model on the decomposition result, so a more satisfactory fusion effect can be obtained.
3. The image fusion method proposed by the present invention clearly improves fusion performance compared with other methods.
Detailed description of the invention
Fig. 1 is the flowchart of the present invention;
Fig. 2 (a) and 2 (b) are the multi-modal medical images to be fused in embodiment 1;
Fig. 3 (a)-(f) are the medical image fusion results obtained with different methods in embodiment 1;
Fig. 4 (a) and 4 (b) are the multi-focus images to be fused in embodiment 2;
Fig. 5 (a)-(f) are the multi-focus image fusion results obtained with different methods in embodiment 2;
Fig. 6 (a) and 6 (b) are the infrared and visible images to be fused in embodiment 3;
Fig. 7 (a)-(f) are the infrared and visible image fusion results obtained with different methods in embodiment 3.
Specific embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments.
Several groups of images are fused below according to the procedure shown in Fig. 1. During dictionary training, eight multi-source images are collected as training samples, and the required cartoon dictionary and texture dictionary are obtained by iterating the proposed dictionary learning algorithm. The cartoon parts and texture parts of the source images are obtained by the proposed image decomposition algorithm. Dictionary learning and image decomposition involve six parameters, λ, λ₁, λ₂ and η, η₁, η₂, which must be set; based on experimental experience, λ and η are set to 0.01, and λ₁, λ₂, η₁, η₂ are set to 0.005. To verify the validity of the method, multi-focus images, medical images, and infrared and visible images are tested separately. To evaluate the quality of the fusion results produced by the different methods objectively and impartially, in addition to visual comparison, several objective evaluation indices are used: the normalized mutual information Q_MI, the nonlinear correlation information entropy index Q_NCIE, and the phase-congruency-based index Q_P. Q_MI measures how much information of the source images is transferred to the fused image; Q_NCIE evaluates fused image quality by measuring the correlation between the fused image and the source images; Q_P measures the degree to which the salient features of the source images are retained in the fused image. The larger the values of these indices, the better the quality of the fusion result.
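Of the three indices, only Q_MI has a form short enough to restate here. A normalized-mutual-information definition commonly used in the fusion literature, stated as an assumption since the patent does not reproduce it, is, for two source images A and B and fused image F:

$$
Q_{MI} = 2\left(\frac{I(F;A)}{H(A)+H(F)} + \frac{I(F;B)}{H(B)+H(F)}\right),
$$

where I(·;·) denotes mutual information and H(·) denotes entropy.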
Embodiment 1: Medical image fusion
In the first set of experiments, a group of multi-modal medical images shown in Fig. 2 (a) and (b) is fused. Fig. 2 (a) is an MR-T1 image and Fig. 2 (b) is an MR-T2 image. As Fig. 2 (a) and (b) show, because of the difference in weighting, the MR-T1 and MR-T2 images contain a large amount of complementary information. If this information can be integrated to produce a single fused image, it will greatly benefit the diagnosis and treatment of illness and subsequent image processing such as image classification, segmentation, and target recognition.
Fig. 3 (a)-(f) show in turn the fusion results of NSCT, NSCT-SR, Kim's method, Zhu-KSVD, ASR, and the method proposed herein. The different fusion methods perform differently in retaining edge and detail information. The NSCT-based method retains the complementary information of the source images relatively effectively but is slightly worse than Zhu-KSVD at preserving their detail. The result of Kim's method is more blurred. By comparison, although Zhu-KSVD and ASR can effectively protect the edge detail of the source images, they cannot preserve the contrast of the image well, which is very unfavorable for medical images with high quality requirements, as well as for subsequent processing and recognition tasks. In contrast, the fusion method proposed herein not only effectively protects the edge detail of the source images but also maintains their contrast, which mainly benefits from solving the sparse coding coefficients of the different components by loop iteration. Moreover, no artificial spurious information is introduced during fusion, so the result of this method has a better visual effect. The objective evaluation results of the different methods under the three indices are shown in Table 1; these data lead to conclusions consistent with the subjective evaluation, which further demonstrates the superiority of the method relative to conventional methods.
Table 1. Comparison of the medical image fusion performance of different fusion methods
Embodiment 2: multi-focus image fusion
In the second set of experiments, a group of multi-focus images shown in Fig. 4 (a) and (b) is fused. As Fig. 4 (a) and (b) show, when the camera lens focuses on a certain object, that object is imaged clearly, while objects away from the focal plane are imaged blurrily. In practice, for computer vision or image processing tasks such as target segmentation, image classification, and target recognition, obtaining a single image in which all targets are clear is very important. This problem is usually solved by multi-focus image fusion. The method proposed herein can be used not only to solve the fusion of medical images but also the fusion of multi-focus images.
Fig. 5 (a)-(f) compare the visual effects of the fusion results of NSCT, NSCT-SR, Kim's method, Zhu-KSVD, ASR, and the present method. All methods can effectively extract the information of clear objects in the source images and retain it in the fused image. For ease of comparison, local regions of the results produced by the different methods are magnified. The information in the magnified regions shows that the methods based on NSCT, NSCT-SR, Kim's, and Zhu-KSVD effectively retain the edge detail of the image and obtain results similar to the present method, while the ASR method blurs part of the edge information in the focused region. Although it is difficult to give a definite judgment among NSCT, NSCT-SR, Kim's, and Zhu-KSVD from visual effect alone, the objective evaluation results in Table 2 reflect, from another perspective, the validity of the present algorithm and its superiority over conventional methods.
Table 2. Comparison of the multi-focus image fusion performance of different fusion methods
Embodiment 3: infrared and visual image fusion
In the third set of experiments, the infrared and visible images shown in Fig. 6 (a) and (b) are fused with the different methods. Fig. 6 (a) is the infrared image and Fig. 6 (b) is the visible image. As the source images show, the visible image clearly reflects the background detail of the scene but cannot clearly image thermal targets (such as pedestrians and vehicles); conversely, the infrared image clearly reflects the thermal targets but cannot clearly image backgrounds without higher temperature. An image in which both the background and the thermal targets are clear has an important role in target tracking, recognition, segmentation, and detection.
The fused images obtained by the different methods are shown in Fig. 7 (a)-(f), where Fig. 7 (a) and (b) are the fusion results of NSCT and NSCT-SR, and Fig. 7 (c)-(f) are the results produced by Kim's method, Zhu-KSVD, ASR, and the present algorithm, respectively. These images show that all comparison methods can effectively retain the thermal targets and background information of the source images, but the local magnified regions show that the different methods exhibit different fusion performance. The simple NSCT-based fusion and the NSCT-SR-based fusion obtain similar effects, while Kim's method, Zhu-KSVD, and ASR, although they effectively retain the background information of the visible image (Fig. 6 (b)), are inferior to the present method (Fig. 7 (f)) in retaining the brightness and thermal-target information of the infrared image (Fig. 6 (a)). On the whole, the method proposed herein effectively retains both the background information of the visible image and the targets in the infrared image while maintaining the contrast of the source images, and thus has a better visual effect. Table 3 gives the objective evaluation results for the fusion of the infrared and visible images of Fig. 6 (a) and (b). The data in Table 3 show that the objective indices used herein largely agree with the visual evaluation: the fusion methods with better visual effect also score better overall, which also shows that the chosen objective indices are reasonable. The statistics in Table 3 show that the method proposed herein has better fusion performance.
Table 3. Comparison of the infrared and visible image fusion performance of different fusion methods
The embodiments of the present invention have been explained in detail above with reference to the drawings, but the present invention is not limited to the above embodiments; within the scope of knowledge possessed by those of ordinary skill in the art, various changes can also be made without departing from the concept of the invention.

Claims (6)

1. A multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition, characterized by comprising the following steps:
(1) training sample data are first collected and decomposed to obtain cartoon training data and texture training data;
(2) the cartoon training data and the texture training data are learned separately by the K-SVD algorithm to obtain an initial cartoon dictionary and an initial texture dictionary;
(3) a cartoon dictionary Dc and a texture dictionary Dt are obtained by training the discriminative dictionary learning model from the initial cartoon dictionary and the initial texture dictionary;
(4) an image decomposition model is formed on the basis of the cartoon dictionary Dc and the texture dictionary Dt; each image to be fused is decomposed by the image decomposition model into its corresponding cartoon part and texture part, which are then fused.
2. The multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition according to claim 1, characterized in that the detailed process of step (1) is: multiple grayscale images are collected from the Internet as training samples; the training sample data are then collected in the form of a sliding window, each n × n window yielding one n² × 1 column vector, where n is the size of the sliding window; the collected data are decomposed by the MCA algorithm to obtain the cartoon training data and the texture training data.
3. The multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition according to claim 1, characterized in that the objective function of the discriminative dictionary learning model in step (3) is:
wherein Y ∈ R^{m×n} is the matrix formed by the column vectors collected by the sliding window, R is the real space, m is the dimension of the vector space, n is the number of image blocks, Dt and Dc denote the texture dictionary and cartoon dictionary respectively, At = [α_{1,t}, α_{2,t}, …, α_{L,t}] is the texture sparse coding coefficient matrix of the texture training data, where α_{l,t} is the sparse coding coefficient of the texture component of the l-th image block, Ac = [α_{1,c}, α_{2,c}, …, α_{L,c}] is the cartoon sparse coding coefficient matrix of the cartoon training data, where α_{l,c} is the sparse coding coefficient of the cartoon component of the l-th image block, T denotes matrix transposition, ∇ is the gradient operator, λ, λ₁, λ₂ are balance parameters, ||·||_F is the Frobenius norm operator, ||·||₁ is the l₁ norm operator, and ||·||₂ is the l₂ norm operator.
4. The multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition according to claim 1, characterized in that the detailed process of step (4) is: N images to be fused are taken and preprocessed: white Gaussian noise is first added, and the data of the images to be fused are then collected in the form of a sliding window, each n × n window yielding one n² × 1 column vector, where n is the size of the sliding window, so as to obtain the matrices of the images to be fused; each image to be fused is decomposed to obtain the sparse coding coefficients Ac of the cartoon component and At of the texture component; the l₁-norm maximum rule is used to select the coding coefficients of the components of the fused image, and the images are merged.
5. The multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition according to claim 4, characterized in that the decomposition of the images to be fused is realized by the image decomposition model, whose objective function is as follows:
wherein X ∈ R^{m×n} is the matrix formed by the column vectors collected by the sliding window, R is the real space, Dt and Dc denote the texture dictionary and cartoon dictionary learned in step (3), At = [α_{1,t}, α_{2,t}, …, α_{L,t}] is the texture sparse coding coefficient matrix of the texture data, where α_{l,t} is the sparse coding coefficient of the texture component of the l-th image block, Ac = [α_{1,c}, α_{2,c}, …, α_{L,c}] is the cartoon sparse coding coefficient matrix of the cartoon data, where α_{l,c} is the sparse coding coefficient of the cartoon component of the l-th image block, η, η₁, η₂ are balance parameters, ||·||_F is the Frobenius norm operator, ||·||₁ is the l₁ norm operator, ||·||₂ is the l₂ norm operator, and Âc is the matrix formed by the estimates α̂_c of the sparse coding coefficients α_c, whose calculation formula is:
wherein Ω_i is the local region around the image block corresponding to coding coefficient α_{i,j}; following the idea of the non-local means method, the calculation formula is as follows:
wherein α_{c,i} is the coding coefficient of the cartoon component of the i-th image block, α_{c,j} is the coding coefficient of the j-th block in the local region Ω_i around the i-th image block, H is a normalization factor, and h is a preset scale.
6. The multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition according to claim 4, characterized in that the detailed process of using the l₁-norm maximum rule to select the coding coefficients of the components of the fused image and merge the images is as follows:
let the coding coefficients of the l-th image block of the texture and cartoon components of the fused image be as denoted in the following formulas; the fusion scheme is as follows:
wherein the l-th column, l ∈ {1, 2, …, K}, of the coding coefficient matrix of the texture component of the fused image is taken as the l-th column of the texture coefficient matrix At of the r-th image to be fused, a column vector, where r ∈ {1, 2, …, N} indexes the source whose column attains the maximum l₁ norm, N is the number of images to be fused, and K is the number of image blocks;
likewise, the l-th column, l ∈ {1, 2, …, K}, of the coding coefficient matrix of the cartoon component of the fused image is taken as the l-th column of the cartoon coefficient matrix Ac of the s-th image to be fused, a column vector, where s ∈ {1, 2, …, N} indexes the source whose column attains the maximum l₁ norm, N is the number of images to be fused, and K is the number of image blocks;
after the fused texture and cartoon coefficient columns are obtained for all K image blocks, they form the coding coefficient matrices of the texture and cartoon components of the fused image; the fused cartoon and texture components are then obtained by multiplying these matrices by the cartoon and texture dictionaries, respectively, and the matrix formed by the block vectors of the fused image is the sum of the two products; re-tiling this result into image blocks yields the final fused image.
CN201810546687.7A 2018-05-31 2018-05-31 Multi-source image fusion method based on discriminant dictionary learning and morphological component decomposition Active CN108985320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810546687.7A CN108985320B (en) 2018-05-31 2018-05-31 Multi-source image fusion method based on discriminant dictionary learning and morphological component decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810546687.7A CN108985320B (en) 2018-05-31 2018-05-31 Multi-source image fusion method based on discriminant dictionary learning and morphological component decomposition

Publications (2)

Publication Number Publication Date
CN108985320A (en) 2018-12-11
CN108985320B (en) 2021-11-23

Family

ID=64542818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810546687.7A Active CN108985320B (en) 2018-05-31 2018-05-31 Multi-source image fusion method based on discriminant dictionary learning and morphological component decomposition

Country Status (1)

Country Link
CN (1) CN108985320B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700379A (en) * 2014-12-29 2015-06-10 烟台大学 Remote sensing image fusion method based on multi-dimensional morphologic element analysis
CN105761287A (en) * 2016-03-02 2016-07-13 东方网力科技股份有限公司 Image decomposition method and device based on sparse representation
CN106056640A (en) * 2016-06-03 2016-10-26 西北大学 Image compression method based on morphological component decomposition combined with compressed sensing
CN106981058A (en) * 2017-03-29 2017-07-25 武汉大学 A kind of optics based on sparse dictionary and infrared image fusion method and system
CN107341765A (en) * 2017-05-05 2017-11-10 西安邮电大学 A kind of image super-resolution rebuilding method decomposed based on cartoon texture
CN107481221A (en) * 2017-07-19 2017-12-15 天津大学 Distorted image quality evaluating method is mixed with the full reference of cartoon rarefaction representation based on texture
CN107563968A (en) * 2017-07-26 2018-01-09 昆明理工大学 A kind of method based on the group medicine image co-registration denoising for differentiating dictionary learning

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784572A (en) * 2020-05-19 2020-10-16 昆明理工大学 Image fusion and super-resolution joint implementation method based on discriminant dictionary learning
CN111784572B (en) * 2020-05-19 2022-06-28 昆明理工大学 Image fusion and super-resolution joint implementation method based on discriminant dictionary learning
CN112561842A (en) * 2020-12-07 2021-03-26 昆明理工大学 Multi-source damaged image fusion and recovery combined implementation method based on dictionary learning
CN112561842B (en) * 2020-12-07 2022-12-09 昆明理工大学 Multi-source damaged image fusion and recovery combined implementation method based on dictionary learning
CN113447111A (en) * 2021-06-16 2021-09-28 合肥工业大学 Visual vibration amplification method, detection method and system based on morphological component analysis

Also Published As

Publication number Publication date
CN108985320B (en) 2021-11-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant