CN110415198A - Medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network - Google Patents

Medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network

Info

Publication number
CN110415198A
CN110415198A (application CN201910639252.1A)
Authority
CN
China
Prior art keywords
source images
frequency coefficient
sub
low frequency
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910639252.1A
Other languages
Chinese (zh)
Other versions
CN110415198B (en)
Inventor
谈玲
于欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN201910639252.1A
Publication of CN110415198A
Application granted
Publication of CN110415198B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a medical image fusion method based on the Laplacian pyramid (LP) and a parameter-adaptive pulse coupled neural network (PCNN). The steps are as follows. S1: decompose source images A and B to obtain their low-frequency and high-frequency coefficients. S2: decompose the low-frequency coefficients into sub-low-frequency and sub-high-frequency coefficients by the LP transform, and fuse the sub-low-frequency and sub-high-frequency coefficients separately. S3: merge the fused sub-low-frequency and sub-high-frequency coefficients by the inverse LP transform to obtain the fused low-frequency coefficients. S4: fuse the high-frequency coefficients. S5: reconstruct the final fused image from the fused low-frequency and high-frequency coefficients by the inverse FFST. The method retains most of the information of the source images and improves the fused image in terms of clarity, edge depiction, and contrast, while its spatial frequency, average gradient, information entropy, and edge-information transfer factor are also improved, so that a better fusion effect is obtained.

Description

Medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network
Technical field
The present invention relates to the field of digital image processing, and more particularly to a medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network.
Background art
Medical image fusion is becoming increasingly important in computer-aided diagnosis. Various medical imaging modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT), come from different sensors and each has its own advantages, but none of them alone provides sufficient information. Image fusion technology has therefore become indispensable.
In recent years, strong evidence has shown that the human visual system processes information in a multiresolution manner, which has motivated a variety of medical image fusion methods. Most of them are formulated within a multi-scale transform (MST) framework in order to obtain better visual effects.
By studying more advanced image transforms and more sophisticated fusion strategies, researchers have proposed many MST-based medical image fusion methods, including a method based on cartoon-texture decomposition (CTD) that fuses the decomposed coefficients with a sparse representation (SR) based fusion strategy, a multi-scale decomposition method based on local Laplacian filtering (LLF), and a medical image fusion framework whose fusion strategy is based on convolutional neural networks (CNN). These methods, however, still have shortcomings.
Summary of the invention
Objective of the invention: to address the insufficient preservation of source image detail in existing medical image fusion, the present invention proposes a medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network.
Technical solution: to achieve the above objective, the invention adopts the following technical scheme.
A medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network, comprising the following steps:
S1: decompose source image A and source image B using the fast finite shearlet transform (FFST) to obtain the low-frequency and high-frequency coefficients of source images A and B;
S2: decompose the low-frequency coefficients of source images A and B into sub-low-frequency and sub-high-frequency coefficients by the LP transform; fuse the sub-low-frequency coefficients of A with those of B, and fuse the sub-high-frequency coefficients of A with those of B, obtaining fused sub-low-frequency and sub-high-frequency coefficients;
S3: merge the fused sub-low-frequency and sub-high-frequency coefficients by the inverse LP transform to obtain the fused low-frequency coefficients;
S4: fuse the high-frequency coefficients of source image A with those of source image B to obtain the fused high-frequency coefficients;
S5: reconstruct the final fused image from the fused low-frequency and high-frequency coefficients by the inverse FFST, as sketched in the code below.
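For orientation, here is a minimal Python sketch of the overall pipeline (steps S1 to S5). The helper names ffst, iffst, fuse_low, and fuse_high are assumptions standing in for an FFST implementation and for the fusion rules detailed below; the patent does not prescribe any particular library.

```python
import numpy as np

def fuse_medical_images(A, B, ffst, iffst, fuse_low, fuse_high):
    # S1: FFST decomposition into low- and high-frequency coefficients
    low_A, high_A = ffst(A)
    low_B, high_B = ffst(B)
    # S2-S3: LP-based fusion of the low-frequency coefficients
    low_F = fuse_low(low_A, low_B)
    # S4: PCNN-based fusion of the high-frequency coefficients
    high_F = fuse_high(high_A, high_B)
    # S5: inverse FFST reconstructs the final fused image
    return iffst(low_F, high_F)
```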
Further, in step S2, the fused sub-low-frequency coefficients are obtained as follows:
S2.1: partition the sub-low-frequency coefficients of source image A and of source image B into blocks with a preset sliding window, obtaining image sub-blocks of the sub-low-frequency coefficients of source image A and of source image B;
S2.2: convert the image sub-blocks of the sub-low-frequency coefficients of source images A and B into column vectors, so as to build the sample training matrices of the sub-low-frequency coefficients of A and B;
S2.3: iterate the K-SVD algorithm over the sample training matrices of the sub-low-frequency coefficients of A and B to obtain the over-complete dictionary matrix of the sub-low-frequency coefficients;
S2.4: estimate the sparse coefficients of the sample training matrices of the sub-low-frequency coefficients of A and B with the OMP algorithm, and obtain the fused sparse-coefficient matrix;
S2.5: multiply the over-complete dictionary matrix of the sub-low-frequency coefficients by the fused sparse-coefficient matrix to obtain the fused sample training matrix, specifically:
V_F = D·α_F
where V_F is the fused sample training matrix, D is the over-complete dictionary matrix, and α_F is the fused sparse-coefficient matrix;
S2.6: convert each column vector of the fused sample training matrix back into a data sub-block, and reconstruct the sub-low-frequency coefficients from these data sub-blocks; the result is the fused sub-low-frequency coefficients.
Further, in step S2.2, converting the image sub-blocks of the sub-low-frequency coefficients of source images A and B into column vectors means rearranging each sub-block into a single column in left-to-right, top-to-bottom order.
Further, in step S2.3, the over-complete dictionary matrix of the sub-low-frequency coefficients is obtained by appending the sample training matrix of the sub-low-frequency coefficients of source image B directly behind that of source image A, with the number of rows unchanged.
Further, step S2.4 obtains the fused sparse-coefficient matrix as follows:
S2.4.1: estimate the sparse coefficients of the sample training matrices of the sub-low-frequency coefficients of source images A and B with the OMP algorithm, obtaining the sparse-coefficient matrices of the sub-low-frequency coefficients of A and B;
S2.4.2: obtain the fused sparse-coefficient matrix from the sparse-coefficient matrices of the sub-low-frequency coefficients of source images A and B; the i-th column vector of the fused sparse-coefficient matrix is selected as

α_F^i = α_A^i, if ||α_A^i||_1 ≥ ||α_B^i||_1; otherwise α_F^i = α_B^i

where α_F^i is the i-th column of the fused sparse-coefficient matrix, α_A^i and α_B^i are the i-th columns of the sparse-coefficient matrices of the sub-low-frequency coefficients of A and B, and ||α_A^i||_1 and ||α_B^i||_1 are the sums of the absolute values of the elements of the respective columns (their L1 norms).
Further, in step S2, the fused sub-high-frequency coefficients are obtained by fusing the sub-high-frequency coefficients of source images A and B with the absolute-value-maximum rule; the sub-high-frequency coefficient obtained after fusion is the fused sub-high-frequency coefficient.
Further, step S4 obtains the fused high-frequency coefficients as follows:
S4.1: after the PCNN model is initialized, set the link input, internal state, variable threshold input, and external input of the PCNN model, specifically:

F_ij[n] = I_ij
L_ij[n] = exp(-α_L)·L_ij[n-1] + V_L·Σ_kl W_ijkl·Y_kl[n-1]
U_ij[n] = F_ij[n]·(1 + β·L_ij[n])
θ_ij[n] = exp(-α_θ)·θ_ij[n-1] + V_θ·Y_ij[n-1]
Y_ij[n] = 1 if U_ij[n] > θ_ij[n], otherwise 0

where F_ij[n] is the feeding input of the PCNN at the n-th iteration, I_ij is the stimulus signal of the PCNN, L_ij[n] is the link input at the n-th iteration, α_L is a constant of the PCNN, L_ij[n-1] is the link input at the (n-1)-th iteration, V_L is the amplification coefficient of the link input, W_ijkl are the link weight coefficients between the neurons of the PCNN, Y_ij[n-1] is the external input (pulse output) at the (n-1)-th iteration, U_ij[n] is the internal state at the n-th iteration, β is the link strength of the PCNN, θ_ij[n] is the variable threshold input at the n-th iteration, α_θ is the variable-threshold decay time constant, θ_ij[n-1] is the variable threshold input at the (n-1)-th iteration, V_θ is the amplification coefficient of the variable threshold, Y_ij[n] is the external input (pulse output) at the n-th iteration, k is the decomposition scale of the source images, and l is the number of decomposition directions of the source images;
S4.2: according to the link input, internal state, variable threshold input, and external input of the PCNN model, reset the PCNN model, substitute the high-frequency coefficients of source images A and B into the reset PCNN model, and determine the new firing maps corresponding to the high-frequency coefficients of A and B by a weighting function:

O_A = w(O_AE, O_AS), O_B = w(O_BE, O_BS)

where w(·,·) denotes the weighting function, O_A is the new firing map corresponding to the high-frequency coefficients of source image A, O_B is the new firing map corresponding to the high-frequency coefficients of source image B, O_AE is the output of the PCNN when the standard deviation of the high-frequency coefficients of A is used as the link strength value, O_AS is the output when the Laplacian energy of the high-frequency coefficients of A is used as the link strength value, O_BE is the output when the standard deviation of the high-frequency coefficients of B is used as the link strength value, and O_BS is the output when the Laplacian energy of the high-frequency coefficients of B is used as the link strength value;
S4.3: according to the new firing maps corresponding to the high-frequency coefficients of source images A and B, compare the firing-time output thresholds at each pixel of the two new firing maps, and obtain the fused high-frequency coefficients from the comparison result:

H_F(i, j) = H_A(i, j), if O_A(i, j) ≥ O_B(i, j); otherwise H_F(i, j) = H_B(i, j)

where H_F(i, j) is the fused high-frequency coefficient, H_A(i, j) and H_B(i, j) are the high-frequency coefficients of source images A and B, and O_A(i, j) and O_B(i, j) are the firing-time output thresholds at each pixel of the new firing maps corresponding to the high-frequency coefficients of A and B.
Further, in step S4.2, the outputs corresponding to the link strength values of the PCNN are obtained as follows:
S4.2.1: substitute the high-frequency coefficients of source images A and B into the Laplacian energy and standard deviation formulas to obtain the Laplacian energy and standard deviation of the high-frequency coefficients of A and B; the Laplacian energy and standard deviation formulas are:

SD = sqrt( (1/(n×n)) Σ_{(i,j)∈W} (f(i, j) - m_k)² )
EOL = Σ_{(i,j)∈W} (f_ii + f_jj)²

where SD is the standard deviation, EOL is the Laplacian energy, f(i, j) is the pixel value, m_k is the pixel mean, W is the sliding window, n is the length or width of the sliding window, f_ii is the second derivative with respect to i within the current window, f_jj is the second derivative with respect to j within the current window, and (i, j) is the position of a pixel in the source images;
S4.2.2: take the Laplacian energy and the standard deviation of the high-frequency coefficients of source images A and B as link strength values of the PCNN, substitute them into the PCNN model, and obtain the outputs of the PCNN when the Laplacian energy and the standard deviation of the high-frequency coefficients of A and B are used, respectively, as the link strength value.
Advantageous effects: compared with the prior art, the technical solution of the invention has the following beneficial effects:
(1) The medical image fusion method of the invention decomposes the source images into low-frequency and high-frequency coefficients, further decomposes the low-frequency coefficients into sub-low-frequency and sub-high-frequency coefficients with the LP transform, fuses the sub-low-frequency and sub-high-frequency coefficients separately, merges them back into fused low-frequency coefficients, and finally reconstructs the fused medical image from the fused low-frequency and high-frequency coefficients by the inverse FFST. The low-frequency information is retained more completely, detail information is rendered more prominently, and the contrast is higher;
(2) The fused image obtained by the invention retains most of the information of the source images and improves in terms of clarity, edge depiction, and contrast, while the spatial frequency, average gradient, information entropy, and edge-information transfer factor of the fused image are also improved, so that a better fusion effect is obtained.
Brief description of the drawings
Fig. 1 is a flow diagram of the medical image fusion method of the invention;
Fig. 2 is a flow diagram of the low-frequency coefficient fusion process of the invention;
Fig. 3 is a flow diagram of the high-frequency coefficient fusion process of the invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the invention, not all of them. The following detailed description of the embodiments provided in the drawings is therefore not intended to limit the scope of the claimed invention, but merely represents selected embodiments.
Embodiment 1
Referring to Fig. 1, this embodiment provides a medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network, comprising the following steps.
Step S1: decompose source image A and source image B using the FFST; the decomposition yields the low-frequency and high-frequency coefficients of source images A and B.
In particular, source image A is decomposed into the low-frequency coefficient L_A and the high-frequency coefficients H_A:

FFST(A) = { L_A(k_0A), H_A(k_A, l_A) }

where L_A is the low-frequency coefficient of source image A, k_0A is the number of decomposition layers of A, H_A is the high-frequency coefficient of A, k_A is the decomposition scale of A, and l_A is the number of decomposition directions of A.
Likewise, source image B is decomposed into the low-frequency coefficient L_B and the high-frequency coefficients H_B:

FFST(B) = { L_B(k_0B), H_B(k_B, l_B) }

where L_B is the low-frequency coefficient of source image B, k_0B is the number of decomposition layers of B, H_B is the high-frequency coefficient of B, k_B is the decomposition scale of B, and l_B is the number of decomposition directions of B.
Step S2: referring to Fig. 2, the low-frequency coefficients of source images A and B are decomposed into sub-low-frequency and sub-high-frequency coefficients by the LP transform; the sub-low-frequency coefficients of A and B are fused, and the sub-high-frequency coefficients of A and B are fused, yielding the fused sub-low-frequency and sub-high-frequency coefficients.
Here the sub-high-frequency coefficients of source images A and B are fused by the absolute-value-maximum rule; the sub-high-frequency coefficient obtained after fusion is the fused sub-high-frequency coefficient, as illustrated in the sketch below.
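As a concrete illustration, here is a minimal sketch of the LP decomposition, the inverse LP transform of step S3, and the absolute-value-maximum rule. Using OpenCV's pyrDown/pyrUp as the pyramid building blocks is an assumption; the patent does not name an implementation.

```python
import cv2
import numpy as np

def lp_decompose(img, levels=1):
    """Laplacian pyramid: returns (sub_low, [sub_high_0, ...])."""
    low, highs = img.astype(np.float64), []
    for _ in range(levels):
        down = cv2.pyrDown(low)
        up = cv2.pyrUp(down, dstsize=(low.shape[1], low.shape[0]))
        highs.append(low - up)   # sub-high-frequency band
        low = down               # sub-low-frequency band
    return low, highs

def lp_reconstruct(low, highs):
    """Inverse LP transform (step S3)."""
    for high in reversed(highs):
        low = cv2.pyrUp(low, dstsize=(high.shape[1], high.shape[0])) + high
    return low

def fuse_abs_max(a, b):
    """Absolute-value-maximum fusion rule for the sub-high-frequency bands."""
    return np.where(np.abs(a) >= np.abs(b), a, b)
```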
The sub-low-frequency coefficients of source images A and B are fused as follows to obtain the fused sub-low-frequency coefficients.
Step S2.1: partition the low-frequency coefficient L_A of source image A and the low-frequency coefficient L_B of source image B into blocks with a sliding window, where the step length of the sliding window is S pixels and its size is n × n.
In this embodiment, source images A and B both have size M × N, where M is the length and N the width of the images. Partitioning the low-frequency coefficients L_A and L_B into blocks with a step length of one pixel yields (M - n + 1) × (N - n + 1) image sub-blocks. The sliding window should not be chosen too large: an over-large window yields too few samples, increases the computational complexity, and reduces accuracy.
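The following sketch shows the sliding-window partitioning and the column-vector conversion of step S2.2; the left-to-right, top-to-bottom ordering matches the description, and the one-pixel step reproduces the block count given above.

```python
import numpy as np

def extract_patches(coeff, n, step=1):
    """Slide an n-by-n window over a coefficient map and stack each block,
    flattened left-to-right and top-to-bottom, as one column of the sample
    training matrix (steps S2.1-S2.2)."""
    rows, cols = coeff.shape
    patches = [coeff[r:r + n, c:c + n].reshape(-1)   # row-major flatten
               for r in range(0, rows - n + 1, step)
               for c in range(0, cols - n + 1, step)]
    return np.stack(patches, axis=1)                 # shape (n*n, block count)
```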
Step S2.2: convert the image sub-blocks of the sub-low-frequency coefficients of source images A and B into column vectors; that is, rearrange each sub-block of A and each sub-block of B into a column in left-to-right, top-to-bottom order.
The column vectors converted from the image sub-blocks of the sub-low-frequency coefficients of source image A build the sample training matrix V_A of the sub-low-frequency coefficients of A; the column vectors converted from the sub-blocks of source image B build the sample training matrix V_B of the sub-low-frequency coefficients of B.
Step S2.3: iterate the K-SVD algorithm over the sample training matrix V_A of the sub-low-frequency coefficients of source image A and the sample training matrix V_B of the sub-low-frequency coefficients of source image B; that is, append V_B directly behind V_A, with the number of rows of V_A unchanged, and obtain the over-complete dictionary matrix D of the sub-low-frequency coefficients.
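A simplified K-SVD sketch over the concatenated matrix [V_A | V_B] follows. The atom count, sparsity level, and iteration count are illustrative assumptions, and sklearn's orthogonal_mp is used for the sparse-coding stage.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(V, n_atoms, sparsity, n_iter=10):
    """Simplified K-SVD: V = [V_A | V_B], columns are patch vectors.
    Returns an over-complete dictionary D with unit-norm atoms."""
    rng = np.random.default_rng(0)
    D = V[:, rng.choice(V.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    for _ in range(n_iter):
        alpha = orthogonal_mp(D, V, n_nonzero_coefs=sparsity)  # sparse coding
        for k in range(n_atoms):                               # atom-by-atom update
            users = np.nonzero(alpha[k, :])[0]
            if users.size == 0:
                continue
            # residual with atom k removed, restricted to patches that use it
            E = V[:, users] - D @ alpha[:, users] + np.outer(D[:, k], alpha[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)   # rank-1 refit
            D[:, k], alpha[k, users] = U[:, 0], s[0] * Vt[0, :]
    return D
```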
Step S2.4: estimate the sparse coefficients of the sample training matrix V_A of source image A and of the sample training matrix V_B of the sub-low-frequency coefficients of source image B with the OMP algorithm, and obtain the fused sparse-coefficient matrix α_F from the estimated sparse coefficients, as follows.
Step S2.4.1: estimate the sparse coefficients of V_A with the OMP algorithm, obtaining the sparse-coefficient matrix α_A of the sub-low-frequency coefficients of source image A.
Estimate the sparse coefficients of the sample training matrix V_B of the sub-low-frequency coefficients of source image B with the OMP algorithm, obtaining the sparse-coefficient matrix α_B of the sub-low-frequency coefficients of source image B.
Step S2.4.2: obtain the fused sparse-coefficient matrix α_F from the sparse-coefficient matrix α_A of the sub-low-frequency coefficients of A and the sparse-coefficient matrix α_B of the sub-low-frequency coefficients of B; the i-th column of α_F is selected as

α_F^i = α_A^i, if ||α_A^i||_1 ≥ ||α_B^i||_1; otherwise α_F^i = α_B^i

where α_F^i is the i-th column of the fused sparse-coefficient matrix, α_A^i and α_B^i are the i-th columns of the sparse-coefficient matrices of the sub-low-frequency coefficients of A and B, and ||α_A^i||_1 and ||α_B^i||_1 are the sums of the absolute values of the elements of the respective columns (their L1 norms).
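A sketch of steps S2.4.1 and S2.4.2: OMP coding of both sample matrices against the shared dictionary, followed by the max-L1 column-selection rule reconstructed above.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def fuse_sparse_codes(D, V_A, V_B, sparsity):
    """OMP coding of V_A and V_B, then column-wise max-L1 selection."""
    alpha_A = orthogonal_mp(D, V_A, n_nonzero_coefs=sparsity)
    alpha_B = orthogonal_mp(D, V_B, n_nonzero_coefs=sparsity)
    keep_A = np.sum(np.abs(alpha_A), axis=0) >= np.sum(np.abs(alpha_B), axis=0)
    return np.where(keep_A, alpha_A, alpha_B)  # broadcasts over rows

# Fused sample training matrix (step S2.5): V_F = D @ alpha_F
```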
Step S2.5: multiply the over-complete dictionary matrix D of the sub-low-frequency coefficients from step S2.3 by the fused sparse-coefficient matrix α_F from step S2.4.2 to obtain the fused sample training matrix:
V_F = D·α_F
where V_F is the fused sample training matrix, D is the over-complete dictionary matrix, and α_F is the fused sparse-coefficient matrix.
Step S2.6: convert each column vector of the fused sample training matrix back into a data sub-block, and reconstruct the sub-low-frequency coefficients from these data sub-blocks; the result is the fused sub-low-frequency coefficients.
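Step S2.6 can be sketched as follows. Averaging overlapping blocks is an assumption about how the reconstruction resolves overlaps; the patent only states that the sub-blocks are reconstructed.

```python
import numpy as np

def reconstruct_from_patches(V_F, shape, n, step=1):
    """Fold the columns of the fused sample matrix back into a coefficient
    map, averaging overlapping n-by-n blocks. Assumes the same left-to-right,
    top-to-bottom patch order as extraction."""
    out = np.zeros(shape)
    weight = np.zeros(shape)
    col = 0
    for r in range(0, shape[0] - n + 1, step):
        for c in range(0, shape[1] - n + 1, step):
            out[r:r + n, c:c + n] += V_F[:, col].reshape(n, n)
            weight[r:r + n, c:c + n] += 1.0
            col += 1
    return out / np.maximum(weight, 1.0)
```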
Step S3: merge the fused sub-high-frequency coefficients obtained in step S2 and the fused sub-low-frequency coefficients obtained in step S2.6 by the inverse LP transform, obtaining the fused low-frequency coefficients.
Step S4: referring to Fig. 3, the high-frequency coefficients of source images A and B are fused to obtain the fused high-frequency coefficients, as follows.
Step S4.1: initialize the PCNN model, i.e. set the link input L_ij, the internal state U_ij, and the variable threshold input θ_ij of the PCNN model all to 0:

L_ij(0) = U_ij(0) = θ_ij(0) = 0
where L_ij(0) is the link input, U_ij(0) the internal state, and θ_ij(0) the variable threshold input of the PCNN.
At this point the neurons of the PCNN model are in the unfired state; that is, the external input Y_ij(0) of the PCNN model is 0, the output is 0, and the number of generated pulses O_ij(0) is also 0.
After initialization, the link input, internal state, variable threshold input, and external input of the PCNN model are set again, specifically:

F_ij[n] = I_ij
L_ij[n] = exp(-α_L)·L_ij[n-1] + V_L·Σ_kl W_ijkl·Y_kl[n-1]
U_ij[n] = F_ij[n]·(1 + β·L_ij[n])
θ_ij[n] = exp(-α_θ)·θ_ij[n-1] + V_θ·Y_ij[n-1]
Y_ij[n] = 1 if U_ij[n] > θ_ij[n], otherwise 0

where F_ij[n] is the feeding input of the PCNN at the n-th iteration, I_ij is the stimulus signal of the PCNN, L_ij[n] is the link input at the n-th iteration, α_L is a constant of the PCNN, L_ij[n-1] is the link input at the (n-1)-th iteration, V_L is the amplification coefficient of the link input, W_ijkl are the link weight coefficients between the neurons of the PCNN, Y_ij[n-1] is the external input (pulse output) at the (n-1)-th iteration, U_ij[n] is the internal state at the n-th iteration, β is the link strength of the PCNN, θ_ij[n] is the variable threshold input at the n-th iteration, α_θ is the variable-threshold decay time constant, θ_ij[n-1] is the variable threshold input at the (n-1)-th iteration, V_θ is the amplification coefficient of the variable threshold, Y_ij[n] is the external input (pulse output) at the n-th iteration, k is the decomposition scale of the source images, and l is the number of decomposition directions of the source images.
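A minimal PCNN sketch implementing the update equations above. The parameter defaults, the 3-by-3 link-weight kernel, and the use of accumulated firing counts as the firing map are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_firing_map(stimulus, beta, n_iter=200,
                    alpha_L=1.0, V_L=1.0, alpha_theta=0.2, V_theta=20.0):
    """Iterate the PCNN on a normalized high-frequency coefficient map.
    `beta` is the link strength, supplied externally (e.g. SD or EOL)."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])       # link weights between neighbors
    L = np.zeros_like(stimulus)
    theta = np.zeros_like(stimulus)       # initialized to 0, as in step S4.1
    Y = np.zeros_like(stimulus)
    fire_count = np.zeros_like(stimulus)  # accumulated firings = firing map
    for _ in range(n_iter):
        F = stimulus                                         # feeding input
        L = np.exp(-alpha_L) * L + V_L * convolve2d(Y, W, mode="same")
        U = F * (1.0 + beta * L)                             # internal state
        Y = (U > theta).astype(float)                        # pulse output
        theta = np.exp(-alpha_theta) * theta + V_theta * Y   # threshold update
        fire_count += Y
    return fire_count
```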
Step S4.2: according to the link input L_ij, internal state U_ij, variable threshold input θ_ij, and external input Y_ij of the PCNN model, reset the PCNN model, substitute the high-frequency coefficients of source images A and B into the reset PCNN model, and determine the new firing maps corresponding to the high-frequency coefficients of A and B by a weighting function:

O_A = w(O_AE, O_AS), O_B = w(O_BE, O_BS)

where w(·,·) denotes the weighting function, O_A is the new firing map corresponding to the high-frequency coefficients of source image A, O_B is the new firing map corresponding to the high-frequency coefficients of source image B, O_AE is the output of the PCNN when the standard deviation of the high-frequency coefficients of A is used as the link strength value, O_AS is the output when the Laplacian energy of the high-frequency coefficients of A is used as the link strength value, O_BE is the output when the standard deviation of the high-frequency coefficients of B is used as the link strength value, and O_BS is the output when the Laplacian energy of the high-frequency coefficients of B is used as the link strength value.
In this embodiment, the outputs corresponding to the link strength values of the PCNN are obtained as follows.
Step S4.2.1: substitute the high-frequency coefficients of source image A and of source image B into the Laplacian energy and standard deviation formulas, respectively, to obtain the Laplacian energy and standard deviation of the high-frequency coefficients of A and of B.
In particular, the Laplacian energy and standard deviation formulas are:

SD = sqrt( (1/(n×n)) Σ_{(i,j)∈W} (f(i, j) - m_k)² )
EOL = Σ_{(i,j)∈W} (f_ii + f_jj)²

where SD is the standard deviation, EOL is the Laplacian energy, f(i, j) is the pixel value, m_k is the pixel mean, W is the sliding window, n is the length or width of the sliding window, f_ii is the second derivative with respect to i within the current window, f_jj is the second derivative with respect to j within the current window, and (i, j) is the position of a pixel in the source images.
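The two focus measures can be computed per pixel as below; uniform_filter realizes the n-by-n window averages (scaled to sums), and the 4-neighbor Laplacian kernel realizes f_ii + f_jj. The kernel choice is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter, convolve

def local_sd(f, n):
    """Local standard deviation over an n-by-n window."""
    mean = uniform_filter(f, size=n)
    mean_sq = uniform_filter(f * f, size=n)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def local_eol(f, n):
    """Local energy of Laplacian: window sum of (f_ii + f_jj)^2."""
    lap = convolve(f, np.array([[0., 1., 0.],
                                [1., -4., 1.],
                                [0., 1., 0.]]))         # f_ii + f_jj
    return uniform_filter(lap * lap, size=n) * (n * n)  # mean -> window sum
```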
Step S4.2.2: take the Laplacian energy and standard deviation of the high-frequency coefficients of source image A, and the Laplacian energy and standard deviation of the high-frequency coefficients of source image B, as link strength values of the PCNN, substitute them into the reset PCNN model, and obtain the outputs of the PCNN when the Laplacian energy and standard deviation of the high-frequency coefficients of A and B are used, respectively, as the link strength value.
Step S4.3: according to the new firing maps corresponding to the high-frequency coefficients of source images A and B, compare the firing-time output thresholds at each pixel of the two new firing maps, and obtain the fused high-frequency coefficients from the comparison result:

H_F(i, j) = H_A(i, j), if O_A(i, j) ≥ O_B(i, j); otherwise H_F(i, j) = H_B(i, j)

where H_F(i, j) is the fused high-frequency coefficient, H_A(i, j) is the high-frequency coefficient of source image A, H_B(i, j) is the high-frequency coefficient of source image B, and O_A(i, j) and O_B(i, j) are the firing-time output thresholds at each pixel of the new firing maps corresponding to the high-frequency coefficients of A and B.
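The comparison of step S4.3 reduces to a pixel-wise selection; the tie-break toward source image A in the reconstructed rule is an assumption.

```python
import numpy as np

def fuse_high_freq(H_A, H_B, O_A, O_B):
    """Keep, at each pixel, the high-frequency coefficient of the image
    whose new firing map has the larger firing-time output (step S4.3)."""
    return np.where(O_A >= O_B, H_A, H_B)
```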
Step S5: using the fused low-frequency coefficients from step S3 and the fused high-frequency coefficients from step S4.3, obtain the final fused image from the fused low-frequency and high-frequency coefficients by the inverse FFST.
The medical image fusion method of this embodiment retains low-frequency information well, renders detail information prominently, and yields higher contrast. The fused image obtained by the method retains most of the information of the source images, improves in clarity, edge depiction, and contrast, and its spatial frequency, average gradient, information entropy, and edge-information transfer factor are all improved, so that a better fusion effect is achieved.
The invention and its embodiments have been described above schematically, and the description is not limiting; what is shown in the drawings is only one embodiment of the invention, and the actual structures and methods are not limited thereto. Therefore, if persons of ordinary skill in the art, enlightened by the above and without departing from the spirit of the invention, design without inventive effort frame modes similar to the technical solution and embodiments, such designs shall all fall within the protection scope of the invention.

Claims (8)

1. A medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network, characterized in that the method comprises the following steps:
S1: decomposing source image A and source image B using the FFST to obtain the low-frequency and high-frequency coefficients of source images A and B;
S2: decomposing the low-frequency coefficients of source images A and B into sub-low-frequency and sub-high-frequency coefficients by the LP transform, fusing the sub-low-frequency coefficients of source image A with the sub-low-frequency coefficients of source image B, fusing the sub-high-frequency coefficients of source image A with the sub-high-frequency coefficients of source image B, and obtaining fused sub-low-frequency and sub-high-frequency coefficients;
S3: merging the fused sub-low-frequency and sub-high-frequency coefficients by the inverse LP transform to obtain fused low-frequency coefficients;
S4: fusing the high-frequency coefficients of source image A with the high-frequency coefficients of source image B to obtain fused high-frequency coefficients;
S5: obtaining the final fused image from the fused low-frequency and high-frequency coefficients by the inverse FFST.
2. The medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network according to claim 1, characterized in that in step S2 the fused sub-low-frequency coefficients are obtained as follows:
S2.1: partitioning the sub-low-frequency coefficients of source image A and the sub-low-frequency coefficients of source image B into blocks with a preset sliding window, obtaining image sub-blocks of the sub-low-frequency coefficients of source image A and of source image B;
S2.2: converting the image sub-blocks of the sub-low-frequency coefficients of source images A and B into column vectors, so as to build the sample training matrices of the sub-low-frequency coefficients of source images A and B;
S2.3: iterating the K-SVD algorithm over the sample training matrices of the sub-low-frequency coefficients of source images A and B to obtain the over-complete dictionary matrix of the sub-low-frequency coefficients;
S2.4: estimating the sparse coefficients of the sample training matrices of the sub-low-frequency coefficients of source images A and B with the OMP algorithm, and obtaining the fused sparse-coefficient matrix;
S2.5: multiplying the over-complete dictionary matrix of the sub-low-frequency coefficients by the fused sparse-coefficient matrix to obtain the fused sample training matrix, specifically:
V_F = D·α_F
where V_F is the fused sample training matrix, D is the over-complete dictionary matrix, and α_F is the fused sparse-coefficient matrix;
S2.6: converting each column vector of the fused sample training matrix into a data sub-block and reconstructing the sub-low-frequency coefficients from the data sub-blocks, thereby obtaining the fused sub-low-frequency coefficients.
3. The medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network according to claim 2, characterized in that in step S2.2 converting the image sub-blocks of the sub-low-frequency coefficients of source images A and B into column vectors means rearranging each image sub-block of the sub-low-frequency coefficients of source image A and each image sub-block of the sub-low-frequency coefficients of source image B in left-to-right, top-to-bottom order.
4. The medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network according to claim 2 or 3, characterized in that in step S2.3 the over-complete dictionary matrix of the sub-low-frequency coefficients is obtained by appending the sample training matrix of the sub-low-frequency coefficients of source image B directly behind the sample training matrix of the sub-low-frequency coefficients of source image A, with the number of rows unchanged.
5. The medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network according to claim 4, characterized in that step S2.4 obtains the fused sparse-coefficient matrix as follows:
S2.4.1: estimating the sparse coefficients of the sample training matrices of the sub-low-frequency coefficients of source images A and B with the OMP algorithm, obtaining the sparse-coefficient matrices of the sub-low-frequency coefficients of source images A and B;
S2.4.2: obtaining the fused sparse-coefficient matrix from the sparse-coefficient matrices of the sub-low-frequency coefficients of source images A and B, wherein the i-th column vector of the fused sparse-coefficient matrix is selected as

α_F^i = α_A^i, if ||α_A^i||_1 ≥ ||α_B^i||_1; otherwise α_F^i = α_B^i

where α_F^i is the i-th column of the fused sparse-coefficient matrix, α_A^i is the i-th column of the sparse-coefficient matrix of the sub-low-frequency coefficients of source image A, α_B^i is the i-th column of the sparse-coefficient matrix of the sub-low-frequency coefficients of source image B, and ||α_A^i||_1 and ||α_B^i||_1 are the sums of the absolute values of the elements of the respective columns of the sparse-coefficient matrices of the sub-low-frequency coefficients of source images A and B.
6. The medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network according to claim 1 or 2, characterized in that in step S2 the fused sub-high-frequency coefficients are obtained by fusing the sub-high-frequency coefficients of source image A and the sub-high-frequency coefficients of source image B with the absolute-value-maximum rule, the sub-high-frequency coefficient obtained after fusion being the fused sub-high-frequency coefficient.
7. The medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network according to claim 6, characterized in that step S4 obtains the fused high-frequency coefficients as follows:
S4.1: after the PCNN model is initialized, setting the link input, internal state, variable threshold input, and external input of the PCNN model, specifically:

F_ij[n] = I_ij
L_ij[n] = exp(-α_L)·L_ij[n-1] + V_L·Σ_kl W_ijkl·Y_kl[n-1]
U_ij[n] = F_ij[n]·(1 + β·L_ij[n])
θ_ij[n] = exp(-α_θ)·θ_ij[n-1] + V_θ·Y_ij[n-1]
Y_ij[n] = 1 if U_ij[n] > θ_ij[n], otherwise 0

where F_ij[n] is the feeding input of the PCNN at the n-th iteration, I_ij is the stimulus signal of the PCNN, L_ij[n] is the link input at the n-th iteration, α_L is a constant of the PCNN, L_ij[n-1] is the link input at the (n-1)-th iteration, V_L is the amplification coefficient of the link input, W_ijkl are the link weight coefficients between the neurons of the PCNN, Y_ij[n-1] is the external input (pulse output) at the (n-1)-th iteration, U_ij[n] is the internal state at the n-th iteration, β is the link strength of the PCNN, θ_ij[n] is the variable threshold input at the n-th iteration, α_θ is the variable-threshold decay time constant, θ_ij[n-1] is the variable threshold input at the (n-1)-th iteration, V_θ is the amplification coefficient of the variable threshold, Y_ij[n] is the external input (pulse output) at the n-th iteration, k is the decomposition scale of the source images, and l is the number of decomposition directions of the source images;
S4.2: according to the link input, internal state, variable threshold input, and external input of the PCNN model, resetting the PCNN model, substituting the high-frequency coefficients of source images A and B into the reset PCNN model, and determining the new firing maps corresponding to the high-frequency coefficients of source images A and B by a weighting function:

O_A = w(O_AE, O_AS), O_B = w(O_BE, O_BS)

where w(·,·) denotes the weighting function, O_A is the new firing map corresponding to the high-frequency coefficients of source image A, O_B is the new firing map corresponding to the high-frequency coefficients of source image B, O_AE is the output when the standard deviation of the high-frequency coefficients of source image A is used as the link strength value of the PCNN, O_AS is the output when the Laplacian energy of the high-frequency coefficients of source image A is used as the link strength value, O_BE is the output when the standard deviation of the high-frequency coefficients of source image B is used as the link strength value, and O_BS is the output when the Laplacian energy of the high-frequency coefficients of source image B is used as the link strength value;
S4.3: according to the new firing maps corresponding to the high-frequency coefficients of source images A and B, comparing the firing-time output thresholds at each pixel of the new firing maps of source images A and B, and obtaining the fused high-frequency coefficients from the comparison result:

H_F(i, j) = H_A(i, j), if O_A(i, j) ≥ O_B(i, j); otherwise H_F(i, j) = H_B(i, j)

where H_F(i, j) is the fused high-frequency coefficient, H_A(i, j) is the high-frequency coefficient of source image A, H_B(i, j) is the high-frequency coefficient of source image B, O_A(i, j) is the firing-time output threshold at each pixel of the new firing map corresponding to the high-frequency coefficients of source image A, and O_B(i, j) is the firing-time output threshold at each pixel of the new firing map corresponding to the high-frequency coefficients of source image B.
8. The medical image fusion method based on the Laplacian pyramid and a parameter-adaptive pulse coupled neural network according to claim 7, characterized in that in step S4.2 the outputs corresponding to the link strength values of the PCNN are obtained as follows:
S4.2.1: substituting the high-frequency coefficients of source images A and B into the Laplacian energy and standard deviation formulas to obtain the Laplacian energy and standard deviation of the high-frequency coefficients of source images A and B, the Laplacian energy and standard deviation formulas being:

SD = sqrt( (1/(n×n)) Σ_{(i,j)∈W} (f(i, j) - m_k)² )
EOL = Σ_{(i,j)∈W} (f_ii + f_jj)²

where SD is the standard deviation, EOL is the Laplacian energy, f(i, j) is the pixel value, m_k is the pixel mean, W is the sliding window, n is the length or width of the sliding window, f_ii is the second derivative with respect to i within the current window, f_jj is the second derivative with respect to j within the current window, and (i, j) is the position of a pixel in the source images;
S4.2.2: taking the Laplacian energy and standard deviation of the high-frequency coefficients of source images A and B as link strength values of the PCNN, substituting them into the PCNN model, and obtaining the outputs when the Laplacian energy and standard deviation of the high-frequency coefficients of source images A and B are used, respectively, as the link strength value of the PCNN.
CN201910639252.1A 2019-07-16 2019-07-16 Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network Active CN110415198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910639252.1A CN110415198B (en) 2019-07-16 2019-07-16 Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network


Publications (2)

Publication Number Publication Date
CN110415198A 2019-11-05
CN110415198B CN110415198B (en) 2023-07-04

Family

ID=68361614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910639252.1A Active CN110415198B (en) 2019-07-16 2019-07-16 Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network

Country Status (1)

Country Link
CN (1) CN110415198B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170298727A1 (en) * 2012-06-14 2017-10-19 Reeves Wireline Technologies Limited Geological log data processing methods and apparatuses
CN105139371A (en) * 2015-09-07 2015-12-09 云南大学 Multi-focus image fusion method based on transformation between PCNN and LP
US20180240219A1 (en) * 2017-02-22 2018-08-23 Siemens Healthcare Gmbh Denoising medical images by learning sparse image representations with a deep unfolding approach
CN107659314A (en) * 2017-09-19 2018-02-02 电子科技大学 The rarefaction expression of distributing optical fiber sensing space-time two-dimension signal and compression method
CN109949258A (en) * 2019-03-06 2019-06-28 北京科技大学 A kind of image recovery method and device based on NSCT transform domain
CN109934887A (en) * 2019-03-11 2019-06-25 吉林大学 A kind of Method of Medical Image Fusion based on improved Pulse Coupled Neural Network

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110874581A (en) * 2019-11-18 2020-03-10 长春理工大学 Image fusion method for bioreactor of cell factory
CN111598822A (en) * 2020-05-18 2020-08-28 西安邮电大学 Image fusion method based on GFRW and ISCM
CN111598822B (en) * 2020-05-18 2023-05-16 西安邮电大学 Image fusion method based on GFRW and ISCM
CN112163994A (en) * 2020-09-01 2021-01-01 重庆邮电大学 Multi-scale medical image fusion method based on convolutional neural network
CN112163994B (en) * 2020-09-01 2022-07-01 重庆邮电大学 Multi-scale medical image fusion method based on convolutional neural network
CN112184646A (en) * 2020-09-22 2021-01-05 西北工业大学 Image fusion method based on gradient domain oriented filtering and improved PCNN
CN112184646B (en) * 2020-09-22 2022-07-29 西北工业大学 Image fusion method based on gradient domain oriented filtering and improved PCNN
CN113487526A (en) * 2021-06-04 2021-10-08 湖北工业大学 Multi-focus image fusion method for improving focus definition measurement by combining high and low frequency coefficients
CN113487526B (en) * 2021-06-04 2023-08-25 湖北工业大学 Multi-focus image fusion method for improving focus definition measurement by combining high-low frequency coefficients
CN117408905A (en) * 2023-12-08 2024-01-16 四川省肿瘤医院 Medical image fusion method based on multi-modal feature extraction
CN117408905B (en) * 2023-12-08 2024-02-13 四川省肿瘤医院 Medical image fusion method based on multi-modal feature extraction

Also Published As

Publication number Publication date
CN110415198B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
CN110415198A (en) A kind of Method of Medical Image Fusion based on laplacian pyramid Yu parameter adaptive Pulse Coupled Neural Network
CN108876735B (en) Real image blind denoising method based on depth residual error network
CN109859147A (en) A kind of true picture denoising method based on generation confrontation network noise modeling
DE69935404T2 (en) Surface model generation for displaying three-dimensional objects with multiple elastic surface meshes
CN108280814B (en) Light field image angle super-resolution reconstruction method based on perception loss
CN109934887B (en) Medical image fusion method based on improved pulse coupling neural network
Mishra et al. MRI and CT image fusion based on wavelet transform
CN113112592B (en) Drivable implicit three-dimensional human body representation method
CN112967178B (en) Image conversion method, device, equipment and storage medium
CN103020933B (en) A kind of multisource image anastomosing method based on bionic visual mechanism
CN110060225A (en) A kind of Medical image fusion method based on rapid finite shearing wave conversion and rarefaction representation
CN110189286B (en) Infrared and visible light image fusion method based on ResNet
CN112837274A (en) Classification and identification method based on multi-mode multi-site data fusion
CN104408697B (en) Image Super-resolution Reconstruction method based on genetic algorithm and canonical prior model
CN115457359A (en) PET-MRI image fusion method based on adaptive countermeasure generation network
CN106981059A (en) With reference to PCNN and the two-dimensional empirical mode decomposition image interfusion method of compressed sensing
Li et al. A new image fusion algorithm based on wavelet packet analysis and PCNN
CN114821259A (en) Zero-learning medical image fusion method based on twin convolutional neural network
WO2022222011A1 (en) Drivable implicit three-dimensional human body representation method
Tang et al. Exploiting quality-guided adaptive optimization for fusing multimodal medical images
Li et al. SUPER learning: a supervised-unsupervised framework for low-dose CT image reconstruction
CN113838161B (en) Sparse projection reconstruction method based on graph learning
WO2022120731A1 (en) Mri-pet image modality conversion method and system based on cyclic generative adversarial network
CN113706407A (en) Infrared and visible light image fusion method based on separation characterization
CN113192155A (en) Helical CT cone-beam scanning image reconstruction method, scanning system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant