CN110415198B - Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network

Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network

Info

Publication number
CN110415198B
CN110415198B (application CN201910639252.1A)
Authority
CN
China
Prior art keywords
source image
sub
neural network
frequency coefficients
image
Prior art date
Legal status
Active
Application number
CN201910639252.1A
Other languages
Chinese (zh)
Other versions
CN110415198A (en)
Inventor
谈玲
于欣
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201910639252.1A
Publication of CN110415198A
Application granted
Publication of CN110415198B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a medical image fusion method based on a Laplacian pyramid and a parameter self-adaptive pulse coupling neural network, which comprises the following steps: S1: decomposing the source image A and the source image B to obtain low-frequency coefficients and high-frequency coefficients; S2: decomposing the low-frequency coefficients into sub-low-frequency coefficients and sub-high-frequency coefficients through the LP (Laplacian pyramid) transform, and fusing the sub-low-frequency coefficients and the sub-high-frequency coefficients respectively; S3: recombining the fused sub-low-frequency and sub-high-frequency coefficients through the inverse LP transform; S4: fusing the high-frequency coefficients; S5: obtaining the final fused image from the fused low-frequency coefficient and the fused high-frequency coefficient through the inverse FFST transform. The invention retains most of the information of the source images, improves sharpness, edge depiction and contrast, and increases the spatial frequency, average gradient, information entropy and edge-information transfer factor of the fused image, thereby achieving a better fusion effect.

Description

Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network
Technical Field
The invention relates to the technical field of digital image processing, in particular to a medical image fusion method based on a Laplacian pyramid and a parameter self-adaptive pulse coupling neural network.
Background
Medical image fusion is becoming increasingly important in auxiliary medical diagnosis. Various medical images, such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET) and single photon emission computed tomography (SPECT), come from different sensors; each has its own advantages, but no single modality provides enough information on its own. Image fusion techniques are therefore becoming indispensable.
In recent years, strong evidence has shown that the human visual system processes information in a multi-resolution manner, and accordingly various medical image fusion methods have been proposed. Most medical image fusion methods are built on a multi-scale transform (MST) framework to obtain better visual results.
By studying more advanced image transforms and more sophisticated fusion strategies, researchers have proposed many MST-based medical image fusion methods, among them a medical image fusion method based on cartoon-texture decomposition (CTD) in which a fusion strategy based on sparse representation (SR) is applied to merge the decomposed coefficients, a multi-scale decomposition method based on local Laplacian filtering (LLF), and a medical image fusion framework based on a convolutional neural network (CNN) fusion strategy. However, these methods still have drawbacks.
Disclosure of Invention
The invention aims to: aiming at the problem that existing medical image fusion does not sufficiently preserve the detail information of the source images, the invention provides a medical image fusion method based on a Laplacian pyramid and a parameter self-adaptive pulse coupling neural network.
The technical scheme is as follows: in order to achieve the purpose of the invention, the technical scheme adopted by the invention is as follows:
a medical image fusion method based on a Laplacian pyramid and a parameter self-adaptive pulse coupling neural network specifically comprises the following steps:
s1: decomposing a source image A and a source image B by using FFST transformation to obtain a low-frequency coefficient and a high-frequency coefficient of the source image A and the source image B;
s2: the low-frequency coefficients of the source image A and the source image B are decomposed into sub-low-frequency coefficients and sub-high-frequency coefficients through LP conversion, the sub-low-frequency coefficients of the source image A and the sub-low-frequency coefficients of the source image B are fused, the sub-high-frequency coefficients of the source image A and the sub-high-frequency coefficients of the source image B are fused, and the fused sub-low-frequency coefficients and sub-high-frequency coefficients are obtained;
s3: the fused sub low-frequency coefficient and sub high-frequency coefficient are fused through LP inverse transformation to obtain a fused low-frequency coefficient;
s4: fusing the high-frequency coefficient of the source image A and the high-frequency coefficient of the source image B to obtain a fused high-frequency coefficient;
s5: and obtaining a final fusion image by the fused low-frequency coefficient and the fused high-frequency coefficient through FFST inverse transformation.
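Read end to end, steps S1 to S5 amount to the following pipeline. The Python sketch below only fixes the data flow; every helper it calls (ffst_decompose, ffst_reconstruct, lp_decompose, lp_reconstruct, fuse_sparse_representation, fuse_max_abs, fuse_pcnn) is a hypothetical placeholder for the operations detailed in the later steps, not an existing API.

```python
import numpy as np

def fuse_medical_images(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    # S1: FFST decomposition into low- and high-frequency coefficients
    low_a, high_a = ffst_decompose(img_a)
    low_b, high_b = ffst_decompose(img_b)

    # S2: LP decomposition of the low-frequency coefficients, then fuse the sub-bands
    sub_low_a, sub_high_a = lp_decompose(low_a)
    sub_low_b, sub_high_b = lp_decompose(low_b)
    sub_low_f = fuse_sparse_representation(sub_low_a, sub_low_b)   # SR rule for the sub-low band
    sub_high_f = fuse_max_abs(sub_high_a, sub_high_b)              # max-abs rule for the sub-high band

    # S3: inverse LP transform gives the fused low-frequency coefficient
    low_f = lp_reconstruct(sub_low_f, sub_high_f)

    # S4: parameter self-adaptive PCNN fuses the high-frequency coefficients
    high_f = fuse_pcnn(high_a, high_b)

    # S5: inverse FFST transform gives the final fused image
    return ffst_reconstruct(low_f, high_f)
```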
Further, in the step S2, the fused sub-low frequency coefficients are obtained as follows:
s2.1: the sub-low frequency coefficients of the source image A and the sub-low frequency coefficients of the source image B are subjected to block processing through a preset sliding window, and an image sub-block of the sub-low frequency coefficients of the source image A and an image sub-block of the sub-low frequency coefficients of the source image B are obtained;
s2.2: converting the image sub-blocks of the sub-low frequency coefficients of the source image A and the image sub-blocks of the sub-low frequency coefficients of the source image B into column vectors for constructing sample training matrixes of the sub-low frequency coefficients of the source image A and the source image B;
s2.3: performing iterative operation on sample training matrixes of the sub-low-frequency coefficients of the source image A and the source image B through a K-SVD algorithm to obtain a complete dictionary matrix of the sub-low-frequency coefficients;
s2.4: estimating the sparse coefficients of the sample training matrices of the sub low frequency coefficients of the source image A and the source image B by using an OMP optimization algorithm, and obtaining a fusion sparse coefficient matrix;
s2.5: multiplying the complete dictionary matrix of the sub low-frequency coefficient by the fusion sparse coefficient matrix to obtain a fusion sample training matrix, wherein the fusion sample training matrix specifically comprises the following components:
V_F = D·α_F
wherein: V_F is the fused sample training matrix, D is the over-complete dictionary matrix, and α_F is the fused sparse coefficient matrix;
s2.6: and converting column vectors of each column of the fusion sample training matrix into data sub-blocks, and reconstructing the data sub-blocks to obtain sub-low frequency coefficients, namely obtaining the fused sub-low frequency coefficients.
Further, in step S2.2, the image sub-blocks of the sub-low frequency coefficients of the source image a and the image sub-blocks of the sub-low frequency coefficients of the source image B are both converted into column vectors, i.e. the image sub-blocks of the sub-low frequency coefficients of the source image a and the image sub-blocks of the sub-low frequency coefficients of the source image B are rearranged in order from left to right and from top to bottom.
Further, in the step S2.3, the complete dictionary matrix of the sub-low frequency coefficients is obtained by setting the sample training matrix of the sub-low frequency coefficients of the source image B directly behind the sample training matrix of the sub-low frequency coefficients of the source image A, with the number of rows kept unchanged.
Further, the step S2.4 is to obtain a fused sparse coefficient matrix, which specifically includes the following steps:
s2.4.1: estimating the sparse coefficients of the sample training matrixes of the sub-low frequency coefficients of the source image A and the source image B by using an OMP optimization algorithm, and obtaining the sparse coefficient matrixes of the sub-low frequency coefficients of the source image A and the source image B;
s2.4.2: acquiring a fusion sparse coefficient matrix through the sparse coefficient matrix of the sub low-frequency coefficients of the source image A and the source image B, wherein the column vector of the fusion sparse coefficient matrix specifically comprises:
α_F^i = α_A^i, if ||α_A^i||_1 ≥ ||α_B^i||_1; otherwise α_F^i = α_B^i
wherein: α_F^i is the column vector of the i-th column of the fused sparse coefficient matrix, α_A^i is the column vector of the i-th column of the sparse coefficient matrix of the sub-low frequency coefficients of the source image A, α_B^i is the column vector of the i-th column of the sparse coefficient matrix of the sub-low frequency coefficients of the source image B, ||α_A^i||_1 is the sum of the absolute values of the elements of the column vector in the sparse coefficient matrix of the sub-low frequency coefficients of the source image A, and ||α_B^i||_1 is the sum of the absolute values of the elements of the column vector in the sparse coefficient matrix of the sub-low frequency coefficients of the source image B.
Further, in the step S2, the fused sub-high frequency coefficient is obtained by fusing the sub-high frequency coefficient of the source image A and the sub-high frequency coefficient of the source image B through the absolute value maximization method.
Further, the step S4 obtains the fused high-frequency coefficient, which specifically includes:
s4.1: after initializing the PCNN neural network model, setting link input, internal state, variable threshold input and external input of the PCNN neural network model, wherein the method specifically comprises the following steps:
F_ij[n] = I_ij
L_ij[n] = e^(-α_L)·L_ij[n-1] + V_L·Σ_{kl} W_ijkl·Y_kl[n-1]
U_ij[n] = F_ij[n]·(1 + β·L_ij[n])
θ_ij[n] = e^(-α_θ)·θ_ij[n-1] + V_θ·Y_ij[n-1]
Y_ij[n] = 1 if U_ij[n] > θ_ij[n], otherwise Y_ij[n] = 0
wherein: F_ij[n] is the n-th feedback input of the PCNN neural network, I_ij is the stimulus signal of the PCNN neural network, L_ij[n] is the n-th link input of the PCNN neural network, α_L is the link-input decay constant of the PCNN neural network, L_ij[n-1] is the (n-1)-th link input of the PCNN neural network, V_L is the amplification factor of the link input of the PCNN neural network, W_ijkl is the connection weight coefficient between neurons of the PCNN neural network, Y_ij[n-1] is the (n-1)-th external input of the PCNN neural network, U_ij[n] is the n-th internal state of the PCNN neural network, β is the link strength of the PCNN neural network, θ_ij[n] is the n-th variable threshold input of the PCNN neural network, α_θ is the variable-threshold decay time constant of the PCNN neural network, θ_ij[n-1] is the (n-1)-th variable threshold input of the PCNN neural network, V_θ is the variable-threshold amplification factor of the PCNN neural network, Y_ij[n] is the n-th external input of the PCNN neural network, k is the decomposition scale of the source image, and l is the number of decomposition directions of the source image;
s4.2: resetting the PCNN neural network model according to the link input, the internal state, the variable threshold input and the external input of the PCNN neural network model, substituting the high-frequency coefficients of the source image A and the source image B into the reset PCNN neural network model, and determining a new ignition map corresponding to the high-frequency coefficients of the source image A and the source image B through a weighting function, wherein the new ignition map comprises the following specific steps:
[The weighting function that combines the following PCNN outputs into the new ignition maps is given as an equation image in the original publication.]
wherein: O_A is the new ignition map corresponding to the high-frequency coefficients of the source image A, O_B is the new ignition map corresponding to the high-frequency coefficients of the source image B, O_AE is the output of the PCNN neural network when the standard deviation of the high-frequency coefficients of the source image A is used as the link strength value, O_AS is the output of the PCNN neural network when the Laplacian energy of the high-frequency coefficients of the source image A is used as the link strength value, O_BE is the output of the PCNN neural network when the standard deviation of the high-frequency coefficients of the source image B is used as the link strength value, and O_BS is the output of the PCNN neural network when the Laplacian energy of the high-frequency coefficients of the source image B is used as the link strength value;
s4.3: comparing the ignition time output threshold value of each pixel in the new ignition map of the source image A and the new ignition map of the source image B according to the new ignition map corresponding to the high frequency coefficients of the source image A and the source image B, and acquiring the fused high frequency coefficients according to the comparison result, wherein the method specifically comprises the following steps:
H_F(i,j) = H_A(i,j), if O_A(i,j) ≥ O_B(i,j); otherwise H_F(i,j) = H_B(i,j)
wherein: H_F(i,j) is the fused high-frequency coefficient, H_A(i,j) is the high-frequency coefficient of the source image A, H_B(i,j) is the high-frequency coefficient of the source image B, O_A(i,j) is the ignition-time output threshold at each pixel in the new ignition map corresponding to the high-frequency coefficients of the source image A, and O_B(i,j) is the ignition-time output threshold at each pixel in the new ignition map corresponding to the high-frequency coefficients of the source image B.
Further, in the step S4.2, the output corresponding to the link strength value of the PCNN neural network is specifically as follows:
s4.2.1: Substituting the high-frequency coefficients of the source image A and the source image B into the Laplacian energy and standard deviation formulas to obtain the Laplacian energy and standard deviation of the high-frequency coefficients of the source image A and the source image B, wherein the Laplacian energy and standard deviation formulas are specifically:
SD = sqrt( (1/(n×n)) · Σ_{(i,j)∈W} ( f(i,j) - m_k )^2 )
EOL = Σ_{(i,j)∈W} ( f_ii + f_jj )^2
wherein: SD is the standard deviation, EOL is the Laplacian energy, f(i,j) is the pixel value, m_k is the mean value of the pixels, W is the sliding window, n is the length or width of the sliding window, f_ii is the result of taking the second derivative with respect to i within the active window, f_jj is the result of taking the second derivative with respect to j within the active window, and (i,j) is the position of the pixel point in the source image;
s4.2.2: Taking the Laplacian energy and standard deviation of the high-frequency coefficients of the source image A and the source image B as the link strength values of the PCNN neural network and substituting them into the PCNN neural network model, thereby obtaining the outputs of the PCNN neural network when the Laplacian energy and the standard deviation of the high-frequency coefficients of the source image A and the source image B are used as the link strength values.
The beneficial effects are that: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
(1) The medical image fusion method of the invention decomposes the source images into low-frequency and high-frequency coefficients, decomposes the low-frequency coefficient again into a sub-low-frequency coefficient and a sub-high-frequency coefficient with the LP transform, fuses and recombines them into a fused low-frequency coefficient, and reconstructs the final fused medical image from the fused low-frequency and high-frequency coefficients through the inverse FFST transform; in this way the low-frequency information is preserved relatively completely, the representation of detail information is highlighted, and the contrast is relatively high;
(2) The fused image obtained by the invention retains most of the information of the source images, improves sharpness, edge depiction and contrast, and increases the spatial frequency, average gradient, information entropy and edge-information transfer factor of the fused image, thereby achieving a better fusion effect.
Drawings
FIG. 1 is a flow chart of a medical image fusion method of the present invention;
FIG. 2 is a flow chart of the low frequency coefficient fusion process of the present invention;
fig. 3 is a flow chart of the high frequency coefficient fusion process of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. Wherein the described embodiments are some, but not all embodiments of the invention. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention.
Example 1
Referring to fig. 1, the embodiment provides a medical image fusion method based on a laplacian pyramid and a parameter adaptive pulse coupling neural network, which specifically includes the following steps:
step S1: and decomposing the source image A and the source image B by using FFST transformation to obtain low-frequency coefficients and high-frequency coefficients of the source image A and the source image B.
Specifically, the low-frequency coefficient L_A and the high-frequency coefficient H_A obtained by decomposing the source image A are:
FFST(A) = { L_A^(k_0A), H_A^(k_A, l_A) }
wherein: L_A is the low-frequency coefficient of the source image A, k_0A is the number of decomposition layers of the source image A, H_A is the high-frequency coefficient of the source image A, k_A is the decomposition scale of the source image A, and l_A is the number of decomposition directions of the source image A.
The low-frequency coefficient L_B and the high-frequency coefficient H_B obtained by decomposing the source image B are:
FFST(B) = { L_B^(k_0B), H_B^(k_B, l_B) }
wherein: L_B is the low-frequency coefficient of the source image B, k_0B is the number of decomposition layers of the source image B, H_B is the high-frequency coefficient of the source image B, k_B is the decomposition scale of the source image B, and l_B is the number of decomposition directions of the source image B.
Step S2: referring to fig. 2, low frequency coefficients of a source image a and a source image B are decomposed into sub-low frequency coefficients and sub-high frequency coefficients through LP transformation, and the sub-low frequency coefficients of the source image a and the sub-low frequency coefficients of the source image B are fused, and the sub-high frequency coefficients of the source image a and the sub-high frequency coefficients of the source image B are fused to obtain the fused sub-low frequency coefficients and sub-high frequency coefficients.
The sub-high frequency coefficients of the source image A and the sub-high frequency coefficients of the source image B are fused through the absolute value maximization method, and the result is the fused sub-high frequency coefficient.
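The LP split that produces these sub-bands, the max-abs fusion just described and the inverse-LP merge of step S3 can be sketched as follows, filling in the lp_decompose/lp_reconstruct placeholders from the earlier overview. This is a minimal one-level sketch using OpenCV's pyrDown/pyrUp; the exact pyramid filter is an implementation choice, low_a and low_b stand for the FFST low-frequency coefficients, and sub_low_f is the fused sub-low-frequency coefficient produced later in steps S2.1 to S2.6.

```python
import cv2
import numpy as np

def lp_decompose(low):
    """One-level Laplacian pyramid split of a low-frequency coefficient map (step S2)."""
    sub_low = cv2.pyrDown(low)                                        # sub-low-frequency band
    up = cv2.pyrUp(sub_low, dstsize=(low.shape[1], low.shape[0]))
    sub_high = low - up                                               # sub-high-frequency band
    return sub_low, sub_high

def lp_reconstruct(sub_low, sub_high):
    """Inverse LP: recombine the fused sub-bands into the fused low-frequency coefficient (step S3)."""
    up = cv2.pyrUp(sub_low, dstsize=(sub_high.shape[1], sub_high.shape[0]))
    return up + sub_high

sub_low_a, sub_high_a = lp_decompose(low_a)
sub_low_b, sub_high_b = lp_decompose(low_b)
# max-abs fusion of the sub-high-frequency bands
sub_high_f = np.where(np.abs(sub_high_a) >= np.abs(sub_high_b), sub_high_a, sub_high_b)
# low_f = lp_reconstruct(sub_low_f, sub_high_f)   # once the SR fusion has produced sub_low_f
```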
The process of fusing the sub-low frequency coefficient of the source image A and the sub-low frequency coefficient of the source image B to obtain the fused sub-low frequency coefficient is as follows:
Step S2.1: the low-frequency coefficient L_A of the source image A and the low-frequency coefficient L_B of the source image B are each partitioned into blocks through a sliding window, where the step length of the sliding window is S pixels and its size is n×n.
In this embodiment, the sizes of the source image A and the source image B are both M×N, wherein M is the length of the source image A and the source image B and N is their width. After the block processing, the low-frequency coefficient L_A of the source image A and the low-frequency coefficient L_B of the source image B each yield (M+n-1)×(N+n-1) image sub-blocks. Meanwhile, the sliding window should not be chosen too large: an overly large window yields too few samples, which increases the computational complexity and reduces the accuracy.
Step S2.2: the image sub-blocks of the sub-low frequency coefficients of the source image A and the image sub-blocks of the sub-low frequency coefficients of the source image B are converted into column vectors, namely, the image sub-blocks of the sub-low frequency coefficients of the source image A are rearranged in a sequence from left to right and from top to bottom, and the image sub-blocks of the sub-low frequency coefficients of the source image B are rearranged in a sequence from left to right and from top to bottom.
The column vectors obtained by converting the image sub-blocks of the sub-low frequency coefficients of the source image A are used to construct the sample training matrix V_A of the sub-low frequency coefficients of the source image A, and the column vectors obtained by converting the image sub-blocks of the sub-low frequency coefficients of the source image B are used to construct the sample training matrix V_B of the sub-low frequency coefficients of the source image B.
Step S2.3: the sample training matrix V_B of the sub-low frequency coefficients of the source image B is placed directly behind the sample training matrix V_A of the sub-low frequency coefficients of the source image A with the number of rows kept unchanged, and the resulting joint sample training matrix is processed iteratively with the K-SVD algorithm, so that the over-complete dictionary matrix D of the sub-low frequency coefficients is obtained.
Step S2.4: estimating a sample training matrix V of a source image a using OMP optimization algorithm A Sample training matrix V of sparse coefficients, sub-low frequency coefficients of source image B B The sparse coefficient matrix alpha is obtained through the sparse coefficient obtained through estimation F . The method comprises the following steps:
Step S2.4.1: the sparse coefficients of the sample training matrix V_A of the sub-low frequency coefficients of the source image A are estimated with the OMP optimization algorithm, and the sparse coefficient matrix α_A of the sub-low frequency coefficients of the source image A is obtained from the estimated sparse coefficients.
The sparse coefficients of the sample training matrix V_B of the sub-low frequency coefficients of the source image B are estimated with the OMP optimization algorithm, and the sparse coefficient matrix α_B of the sub-low frequency coefficients of the source image B is obtained from the estimated sparse coefficients.
Step S2.4.2: the fused sparse coefficient matrix α_F is obtained from the sparse coefficient matrix α_A of the sub-low frequency coefficients of the source image A and the sparse coefficient matrix α_B of the sub-low frequency coefficients of the source image B, where the column vectors of the fused sparse coefficient matrix α_F are specifically:
α_F^i = α_A^i, if ||α_A^i||_1 ≥ ||α_B^i||_1; otherwise α_F^i = α_B^i
wherein: α_F^i is the column vector of the i-th column of the fused sparse coefficient matrix, α_A^i is the column vector of the i-th column of the sparse coefficient matrix of the sub-low frequency coefficients of the source image A, α_B^i is the column vector of the i-th column of the sparse coefficient matrix of the sub-low frequency coefficients of the source image B, ||α_A^i||_1 is the sum of the absolute values of the elements of the column vector in the sparse coefficient matrix of the sub-low frequency coefficients of the source image A, and ||α_B^i||_1 is the sum of the absolute values of the elements of the column vector in the sparse coefficient matrix of the sub-low frequency coefficients of the source image B.
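As a concrete illustration of steps S2.1 to S2.4, the following Python sketch extracts the sliding-window patches, builds the joint sample matrix, sparse-codes it with OMP and applies the max-L1 selection rule. The patch size, stride, sparsity level and the names sub_low_a, sub_low_b and D (the K-SVD-trained dictionary, whose training is not shown) are assumptions for illustration, not values fixed by the patent.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def patches_to_columns(coeff, n=8, s=1):
    """Slide an n x n window with stride s and stack each patch as a column (steps S2.1/S2.2)."""
    rows, cols = coeff.shape
    columns = []
    for i in range(0, rows - n + 1, s):
        for j in range(0, cols - n + 1, s):
            # left-to-right, top-to-bottom raster order, as described in step S2.2
            columns.append(coeff[i:i + n, j:j + n].reshape(-1))
    return np.stack(columns, axis=1)          # shape (n*n, number of patches)

# sub_low_a, sub_low_b: the two sub-low-frequency coefficient maps (assumed available)
V_A = patches_to_columns(sub_low_a)
V_B = patches_to_columns(sub_low_b)
V = np.hstack([V_A, V_B])                     # V_B appended after V_A, rows unchanged (step S2.3)
# D = ksvd(V, n_atoms=256, sparsity=8)        # over-complete dictionary from K-SVD (placeholder)

# Step S2.4: OMP sparse coding against D (atoms assumed column-normalised)
alpha_A = orthogonal_mp(D, V_A, n_nonzero_coefs=8)
alpha_B = orthogonal_mp(D, V_B, n_nonzero_coefs=8)

# Step S2.4.2: per column, keep the sparse code with the larger L1 norm
keep_a = np.abs(alpha_A).sum(axis=0) >= np.abs(alpha_B).sum(axis=0)
alpha_F = np.where(keep_a, alpha_A, alpha_B)
```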
Step S2.5: combining the complete dictionary matrix D of the low-frequency coefficients in the step S2.3 with the fused sparse coefficient matrix alpha in the step S2.4.2 F Multiplying to obtain a fusion sample training matrix, specifically:
V_F = D·α_F
wherein: V_F is the fused sample training matrix, D is the over-complete dictionary matrix, and α_F is the fused sparse coefficient matrix.
Step S2.6: and converting column vectors of each column of the fusion sample training matrix into data sub-blocks, and reconstructing the data sub-blocks to obtain sub-low frequency coefficients, namely obtaining the fused sub-low frequency coefficients.
Step S3: and (2) fusing the fused sub-low frequency coefficient and the sub-high frequency coefficient through LP inverse transformation according to the fused sub-high frequency coefficient obtained in the step (S2) and the fused sub-low frequency coefficient obtained in the step (S2.6) to obtain the fused low frequency coefficient.
Step S4: referring to fig. 3, the high frequency coefficient of the source image a and the high frequency coefficient of the source image B are fused, and the fused high frequency coefficient is obtained, which is specifically as follows:
Step S4.1: the PCNN neural network model is initialized, namely the link input L_ij, the internal state U_ij and the variable threshold input θ_ij of the PCNN neural network model are all set to 0, specifically:
L_ij(0) = U_ij(0) = θ_ij(0) = 0
wherein: L_ij(0) is the link input of the PCNN neural network, U_ij(0) is the internal state of the PCNN neural network, and θ_ij(0) is the variable threshold input of the PCNN neural network.
At this time the neurons in the PCNN neural network model are in the non-firing state, that is, the external input Y_ij(0) of the PCNN neural network model is 0, the output result is also 0, and the number of generated pulses O_ij(0) is also 0.
After initializing the PCNN neural network model, setting link input, internal state, variable threshold input and external input of the PCNN neural network model, wherein the method specifically comprises the following steps:
F_ij[n] = I_ij
L_ij[n] = e^(-α_L)·L_ij[n-1] + V_L·Σ_{kl} W_ijkl·Y_kl[n-1]
U_ij[n] = F_ij[n]·(1 + β·L_ij[n])
θ_ij[n] = e^(-α_θ)·θ_ij[n-1] + V_θ·Y_ij[n-1]
Y_ij[n] = 1 if U_ij[n] > θ_ij[n], otherwise Y_ij[n] = 0
wherein: F_ij[n] is the n-th feedback input of the PCNN neural network, I_ij is the stimulus signal of the PCNN neural network, L_ij[n] is the n-th link input of the PCNN neural network, α_L is the link-input decay constant of the PCNN neural network, L_ij[n-1] is the (n-1)-th link input of the PCNN neural network, V_L is the amplification factor of the link input of the PCNN neural network, W_ijkl is the connection weight coefficient between neurons of the PCNN neural network, Y_ij[n-1] is the (n-1)-th external input of the PCNN neural network, U_ij[n] is the n-th internal state of the PCNN neural network, β is the link strength of the PCNN neural network, θ_ij[n] is the n-th variable threshold input of the PCNN neural network, α_θ is the variable-threshold decay time constant of the PCNN neural network, θ_ij[n-1] is the (n-1)-th variable threshold input of the PCNN neural network, V_θ is the variable-threshold amplification factor of the PCNN neural network, Y_ij[n] is the n-th external input of the PCNN neural network, k is the decomposition scale of the source image, and l is the number of decomposition directions of the source image.
Step S4.2: link input L according to PCNN neural network model ij Internal state U ij Variable threshold input θ ij And an external input Y ij Resetting the PCNN neural network model, substituting the high-frequency coefficients of the source image A and the source image B into the reset PCNN neural network model, and determining a new ignition map corresponding to the high-frequency coefficients of the source image A and the source image B through a weighting function, wherein the new ignition map specifically comprises the following components:
[The weighting function that combines the following PCNN outputs into the new ignition maps is given as an equation image in the original publication.]
wherein: O_A is the new ignition map corresponding to the high-frequency coefficients of the source image A, O_B is the new ignition map corresponding to the high-frequency coefficients of the source image B, O_AE is the output of the PCNN neural network when the standard deviation of the high-frequency coefficients of the source image A is used as the link strength value, O_AS is the output of the PCNN neural network when the Laplacian energy of the high-frequency coefficients of the source image A is used as the link strength value, O_BE is the output of the PCNN neural network when the standard deviation of the high-frequency coefficients of the source image B is used as the link strength value, and O_BS is the output of the PCNN neural network when the Laplacian energy of the high-frequency coefficients of the source image B is used as the link strength value.
In this embodiment, the output corresponding to the link strength value of the PCNN neural network is specifically as follows:
Step S4.2.1: the high-frequency coefficient of the source image A and the high-frequency coefficient of the source image B are each substituted into the Laplacian energy and standard deviation formulas to obtain the Laplacian energy and standard deviation of the high-frequency coefficient of the source image A and the Laplacian energy and standard deviation of the high-frequency coefficient of the source image B.
Specifically, the Laplacian energy and standard deviation formulas are:
SD = sqrt( (1/(n×n)) · Σ_{(i,j)∈W} ( f(i,j) - m_k )^2 )
EOL = Σ_{(i,j)∈W} ( f_ii + f_jj )^2
wherein: SD is the standard deviation, EOL is the Laplacian energy, f(i,j) is the pixel value, m_k is the mean value of the pixels, W is the sliding window, n is the length or width of the sliding window, f_ii is the result of taking the second derivative with respect to i within the active window, f_jj is the result of taking the second derivative with respect to j within the active window, and (i,j) is the position of the pixel point in the source image.
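A minimal sketch of these two local measures, computed over a sliding window with scipy; the 3×3 Laplacian stencil and the window size are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter, convolve

def local_sd_and_eol(coeff, win=3):
    """Local standard deviation (SD) and energy of Laplacian (EOL) over a win x win window."""
    f = np.asarray(coeff, dtype=float)
    mean = uniform_filter(f, size=win)
    mean_sq = uniform_filter(f ** 2, size=win)
    sd = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))     # local standard deviation

    lap_kernel = np.array([[0, 1, 0],
                           [1, -4, 1],
                           [0, 1, 0]], dtype=float)         # discrete f_ii + f_jj
    lap = convolve(f, lap_kernel, mode='reflect')
    eol = uniform_filter(lap ** 2, size=win) * win * win    # windowed sum of (f_ii + f_jj)^2
    return sd, eol
```

The two resulting maps are then used as the link strength β when running the PCNN on the corresponding high-frequency coefficients, as described in step S4.2.2.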
Step S4.2.2: and taking the Laplains energy and standard deviation of the high-frequency coefficient of the source image A and the Laplains energy and standard deviation of the high-frequency coefficient of the source image B as the link intensity values of the PCNN neural network respectively, substituting the link intensity values into a reset PCNN neural network model, and obtaining the Laplains energy and standard deviation of the high-frequency coefficient of the source image A and the high-frequency coefficient of the source image B as the output when the link intensity values of the PCNN neural network are obtained respectively.
Step S4.3: comparing the ignition time output threshold values at each pixel in the new ignition map of the source image A and the new ignition map of the source image B according to the new ignition map corresponding to the high frequency coefficients of the source image A and the source image B, and acquiring the fused high frequency coefficients according to the comparison result, wherein the specific steps are as follows:
H_F(i,j) = H_A(i,j), if O_A(i,j) ≥ O_B(i,j); otherwise H_F(i,j) = H_B(i,j)
wherein: H_F(i,j) is the fused high-frequency coefficient, H_A(i,j) is the high-frequency coefficient of the source image A, H_B(i,j) is the high-frequency coefficient of the source image B, O_A(i,j) is the ignition-time output threshold at each pixel in the new ignition map corresponding to the high-frequency coefficient of the source image A, and O_B(i,j) is the ignition-time output threshold at each pixel in the new ignition map corresponding to the high-frequency coefficient of the source image B.
Step S5: and (3) obtaining a final fusion image by performing FFST inverse transformation on the fused low-frequency coefficient and the fused high-frequency coefficient according to the fused low-frequency coefficient in the step (S3) and the fused high-frequency coefficient in the step (S4.3).
The medical image fusion method in this embodiment retains the low-frequency information well, with relatively high contrast and rich detail information. Meanwhile, the fused image obtained by the medical image fusion method in this embodiment retains most of the information of the source images, improves sharpness, edge depiction and contrast, and increases the spatial frequency, average gradient, information entropy and edge-information transfer factor of the fused image, so that a better fusion effect can be achieved.
The invention and its embodiments have been described above by way of illustration and not limitation, and the actual construction and method of construction illustrated in the accompanying drawings is not limited to this. Therefore, if one of ordinary skill in the art is informed by this disclosure, a structural manner and an embodiment similar to the technical scheme are not creatively designed without departing from the gist of the present invention, and all the structural manners and the embodiments belong to the protection scope of the present invention.

Claims (8)

1. The medical image fusion method based on the Laplacian pyramid and the parameter self-adaptive pulse coupling neural network is characterized by comprising the following steps of:
s1: decomposing a source image A and a source image B by using FFST transformation to obtain a low-frequency coefficient and a high-frequency coefficient of the source image A and the source image B;
s2: the low-frequency coefficients of the source image A and the source image B are decomposed into sub-low-frequency coefficients and sub-high-frequency coefficients through LP conversion, the sub-low-frequency coefficients of the source image A and the sub-low-frequency coefficients of the source image B are fused through a sparse representation SR fusion method, the sub-high-frequency coefficients of the source image A and the sub-high-frequency coefficients of the source image B are fused through an absolute value maximization method, and the fused sub-low-frequency coefficients and sub-high-frequency coefficients are obtained;
s3: the fused sub low-frequency coefficient and sub high-frequency coefficient are fused through LP inverse transformation to obtain a fused low-frequency coefficient;
s4: fusing the high-frequency coefficient of the source image A and the high-frequency coefficient of the source image B through a parameter self-adaptive pulse coupling neural network PCNN to obtain a fused high-frequency coefficient;
s5: and obtaining a final fusion image by the fused low-frequency coefficient and the fused high-frequency coefficient through FFST inverse transformation.
2. The medical image fusion method based on the laplacian pyramid and the parameter adaptive pulse coupling neural network according to claim 1, wherein in the step S2, the fused sub-low frequency coefficients are obtained by using an SR fusion method, which specifically comprises the following steps:
s2.1: the sub-low frequency coefficients of the source image A and the sub-low frequency coefficients of the source image B are subjected to block processing through a preset sliding window, and an image sub-block of the sub-low frequency coefficients of the source image A and an image sub-block of the sub-low frequency coefficients of the source image B are obtained;
s2.2: converting the image sub-blocks of the sub-low frequency coefficients of the source image A and the image sub-blocks of the sub-low frequency coefficients of the source image B into column vectors for constructing sample training matrixes of the sub-low frequency coefficients of the source image A and the source image B;
s2.3: performing iterative operation on sample training matrixes of the sub-low-frequency coefficients of the source image A and the source image B through a K-SVD algorithm to obtain a complete dictionary matrix of the sub-low-frequency coefficients;
s2.4: estimating the sparse coefficients of the sample training matrices of the sub low frequency coefficients of the source image A and the source image B by using an OMP optimization algorithm, and obtaining a fusion sparse coefficient matrix;
s2.5: multiplying the complete dictionary matrix of the sub low-frequency coefficient by the fusion sparse coefficient matrix to obtain a fusion sample training matrix, wherein the fusion sample training matrix specifically comprises the following components:
V_F = D·α_F
wherein: V_F is the fused sample training matrix, D is the over-complete dictionary matrix, and α_F is the fused sparse coefficient matrix;
s2.6: and converting column vectors of each column of the fusion sample training matrix into data sub-blocks, and reconstructing the data sub-blocks to obtain sub-low frequency coefficients, namely obtaining the fused sub-low frequency coefficients.
3. The medical image fusion method based on the laplacian pyramid and the parameter adaptive pulse coupling neural network according to claim 2, wherein in step S2.2, the image sub-blocks of the sub-low frequency coefficients of the source image a and the image sub-blocks of the sub-low frequency coefficients of the source image B are both converted into column vectors, i.e. the image sub-blocks of the sub-low frequency coefficients of the source image a and the image sub-blocks of the sub-low frequency coefficients of the source image B are rearranged in order from left to right and from top to bottom.
4. The medical image fusion method based on the laplacian pyramid and the parameter adaptive pulse coupling neural network according to claim 2 or 3, wherein in the step S2.3, the complete dictionary matrix of the sub-low frequency coefficients is obtained by setting the sample training matrix of the sub-low frequency coefficients of the source image B directly behind the sample training matrix of the sub-low frequency coefficients of the source image A, with the number of rows kept unchanged.
5. The medical image fusion method based on the laplacian pyramid and the parameter self-adaptive pulse coupling neural network according to claim 4, wherein the step S2.4 is to obtain a fusion sparse coefficient matrix, which is specifically as follows:
s2.4.1: estimating the sparse coefficients of the sample training matrixes of the sub-low frequency coefficients of the source image A and the source image B by using an OMP optimization algorithm, and obtaining the sparse coefficient matrixes of the sub-low frequency coefficients of the source image A and the source image B;
s2.4.2: acquiring a fusion sparse coefficient matrix through the sparse coefficient matrix of the sub low-frequency coefficients of the source image A and the source image B, wherein the column vector of the fusion sparse coefficient matrix specifically comprises:
α_F^i = α_A^i, if ||α_A^i||_1 ≥ ||α_B^i||_1; otherwise α_F^i = α_B^i
wherein: α_F^i is the column vector of the i-th column of the fused sparse coefficient matrix, α_A^i is the column vector of the i-th column of the sparse coefficient matrix of the sub-low frequency coefficients of the source image A, α_B^i is the column vector of the i-th column of the sparse coefficient matrix of the sub-low frequency coefficients of the source image B, ||α_A^i||_1 is the sum of the absolute values of the elements of the column vector in the sparse coefficient matrix of the sub-low frequency coefficients of the source image A, and ||α_B^i||_1 is the sum of the absolute values of the elements of the column vector in the sparse coefficient matrix of the sub-low frequency coefficients of the source image B.
6. The medical image fusion method based on the laplacian pyramid and the parameter self-adaptive pulse coupling neural network according to claim 1 or 2, wherein in the step S2, the fused sub-high frequency coefficient is obtained by fusing the sub-high frequency coefficient of the source image A and the sub-high frequency coefficient of the source image B through the absolute value maximization method.
7. The medical image fusion method based on the laplacian pyramid and the parameter adaptive pulse coupled neural network according to claim 6, wherein the step S4 is characterized in that the parameter adaptive pulse coupled neural network PCNN is used for obtaining the fused high-frequency coefficient, and the method specifically comprises the following steps:
s4.1: after initializing the PCNN neural network model, setting link input, internal state, variable threshold input and external input of the PCNN neural network model, wherein the method specifically comprises the following steps:
F_pq[n] = I_pq
L_pq[n] = e^(-α_L)·L_pq[n-1] + V_L·Σ_{kl} W_pqkl·Y_kl[n-1]
U_pq[n] = F_pq[n]·(1 + β·L_pq[n])
θ_pq[n] = e^(-α_θ)·θ_pq[n-1] + V_θ·Y_pq[n-1]
Y_pq[n] = 1 if U_pq[n] > θ_pq[n], otherwise Y_pq[n] = 0
wherein: F_pq[n] is the n-th feedback input of the PCNN neural network, I_pq is the stimulus signal of the PCNN neural network, L_pq[n] is the n-th link input of the PCNN neural network, α_L is the link-input decay constant of the PCNN neural network, L_pq[n-1] is the (n-1)-th link input of the PCNN neural network, V_L is the amplification factor of the link input of the PCNN neural network, W_pqkl is the connection weight coefficient between neurons of the PCNN neural network, Y_pq[n-1] is the (n-1)-th external input of the PCNN neural network, U_pq[n] is the n-th internal state of the PCNN neural network, β is the link strength of the PCNN neural network, θ_pq[n] is the n-th variable threshold input of the PCNN neural network, α_θ is the variable-threshold decay time constant of the PCNN neural network, θ_pq[n-1] is the (n-1)-th variable threshold input of the PCNN neural network, V_θ is the variable-threshold amplification factor of the PCNN neural network, Y_pq[n] is the n-th external input of the PCNN neural network, k is the decomposition scale of the source image, and l is the number of decomposition directions of the source image;
s4.2: resetting the PCNN neural network model according to the link input, the internal state, the variable threshold input and the external input of the PCNN neural network model, substituting the high-frequency coefficients of the source image A and the source image B into the reset PCNN neural network model, and determining a new ignition map corresponding to the high-frequency coefficients of the source image A and the source image B through a weighting function, wherein the new ignition map comprises the following specific steps:
[The weighting function that combines the following PCNN outputs into the new ignition maps is given as an equation image in the original publication.]
wherein: O_A is the new ignition map corresponding to the high-frequency coefficients of the source image A, O_B is the new ignition map corresponding to the high-frequency coefficients of the source image B, O_AE is the output of the PCNN neural network when the standard deviation of the high-frequency coefficients of the source image A is used as the link strength value, O_AS is the output of the PCNN neural network when the Laplacian energy of the high-frequency coefficients of the source image A is used as the link strength value, O_BE is the output of the PCNN neural network when the standard deviation of the high-frequency coefficients of the source image B is used as the link strength value, and O_BS is the output of the PCNN neural network when the Laplacian energy of the high-frequency coefficients of the source image B is used as the link strength value;
s4.3: comparing the ignition time output threshold value of each pixel in the new ignition map of the source image A and the new ignition map of the source image B according to the new ignition map corresponding to the high frequency coefficients of the source image A and the source image B, and acquiring the fused high frequency coefficients according to the comparison result, wherein the method specifically comprises the following steps:
H_F(i,j) = H_A(i,j), if O_A(i,j) ≥ O_B(i,j); otherwise H_F(i,j) = H_B(i,j)
wherein: H_F(i,j) is the fused high frequency coefficient, H_A(i,j) is the high frequency coefficient of the source image A, H_B(i,j) is the high frequency coefficient of the source image B, O_A(i,j) is the ignition-time output threshold at each pixel in the new ignition map corresponding to the high-frequency coefficient of the source image A, O_B(i,j) is the ignition-time output threshold at each pixel in the new ignition map corresponding to the high-frequency coefficient of the source image B, and (i,j) is the position of the pixel point in the new ignition map.
8. The method for medical image fusion based on Laplacian pyramid and parameter adaptive pulse coupled neural network according to claim 7, wherein in step S4.2, the output corresponding to the link strength value of the PCNN neural network is as follows
S4.2.1: substituting the high-frequency coefficients of the source image A and the source image B into the Laplacian energy and standard deviation formulas to obtain the Laplacian energy and standard deviation of the high-frequency coefficients of the source image A and the source image B, wherein the Laplacian energy and standard deviation formulas are specifically:
SD = sqrt( (1/(n×n)) · Σ_{(i,j)∈W} ( f(i,j) - m_k )^2 )
EOL = Σ_{(i,j)∈W} ( f_ii + f_jj )^2
wherein: SD is the standard deviation, EOL is the Laplacian energy, f(i,j) is the pixel value, m_k is the mean value of the pixels, w is the sliding window, n is the length or width of the sliding window, f_ii is the result of taking the second derivative with respect to i within the active window, f_jj is the result of taking the second derivative with respect to j within the active window, and (i,j) is the position of the pixel point in the source image;
s4.2.2: and taking the Laplacian energy and standard deviation of the high-frequency coefficients of the source image A and the source image B as the link strength values of the PCNN neural network, substituting the link strength values into the PCNN neural network model, and obtaining the outputs of the PCNN neural network when the Laplacian energy and standard deviation of the high-frequency coefficients of the source image A and the source image B are used as the link strength values.
CN201910639252.1A 2019-07-16 2019-07-16 Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network Active CN110415198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910639252.1A CN110415198B (en) 2019-07-16 2019-07-16 Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910639252.1A CN110415198B (en) 2019-07-16 2019-07-16 Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network

Publications (2)

Publication Number Publication Date
CN110415198A CN110415198A (en) 2019-11-05
CN110415198B true CN110415198B (en) 2023-07-04

Family

ID=68361614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910639252.1A Active CN110415198B (en) 2019-07-16 2019-07-16 Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network

Country Status (1)

Country Link
CN (1) CN110415198B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110874581B (en) * 2019-11-18 2023-08-01 长春理工大学 Image fusion method for bioreactor of cell factory
CN111598822B (en) * 2020-05-18 2023-05-16 西安邮电大学 Image fusion method based on GFRW and ISCM
CN112163994B (en) * 2020-09-01 2022-07-01 重庆邮电大学 Multi-scale medical image fusion method based on convolutional neural network
CN112184646B (en) * 2020-09-22 2022-07-29 西北工业大学 Image fusion method based on gradient domain oriented filtering and improved PCNN
CN113487526B (en) * 2021-06-04 2023-08-25 湖北工业大学 Multi-focus image fusion method for improving focus definition measurement by combining high-low frequency coefficients
CN117408905B (en) * 2023-12-08 2024-02-13 四川省肿瘤医院 Medical image fusion method based on multi-modal feature extraction

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10753202B2 (en) * 2012-06-14 2020-08-25 Reeves Wireline Technologies Limited Geological log data processing methods and apparatuses
CN105139371B (en) * 2015-09-07 2019-03-15 云南大学 A kind of multi-focus image fusing method based on PCNN and LP transformation
US10685429B2 (en) * 2017-02-22 2020-06-16 Siemens Healthcare Gmbh Denoising medical images by learning sparse image representations with a deep unfolding approach
CN107659314B (en) * 2017-09-19 2021-02-19 电子科技大学 Sparse representation and compression method of distributed optical fiber sensing space-time two-dimensional signal
CN109949258B (en) * 2019-03-06 2020-11-27 北京科技大学 Image restoration method based on NSCT transform domain
CN109934887B (en) * 2019-03-11 2023-05-30 吉林大学 Medical image fusion method based on improved pulse coupling neural network

Also Published As

Publication number Publication date
CN110415198A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110415198B (en) Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network
CN109741256B (en) Image super-resolution reconstruction method based on sparse representation and deep learning
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN107025632B (en) Image super-resolution reconstruction method and system
Zhu et al. Image reconstruction from videos distorted by atmospheric turbulence
CN107133923B (en) Fuzzy image non-blind deblurring method based on adaptive gradient sparse model
CN110189286B (en) Infrared and visible light image fusion method based on ResNet
CN110060225B (en) Medical image fusion method based on rapid finite shear wave transformation and sparse representation
CN107123094B (en) Video denoising method mixing Poisson, Gaussian and impulse noise
CN109949217B (en) Video super-resolution reconstruction method based on residual learning and implicit motion compensation
CN110930327B (en) Video denoising method based on cascade depth residual error network
CN110880163B (en) Low-light color imaging method based on deep learning
US7565010B2 (en) System and method for image segmentation by a weighted multigrid solver
CN113808036B (en) Low-illumination image enhancement and denoising method based on Retinex model
Qu et al. TransFuse: A unified transformer-based image fusion framework using self-supervised learning
CN115393227A (en) Self-adaptive enhancing method and system for low-light-level full-color video image based on deep learning
CN112581378B (en) Image blind deblurring method and device based on significance strength and gradient prior
Ju et al. Ivf-net: An infrared and visible data fusion deep network for traffic object enhancement in intelligent transportation systems
CN111553856A (en) Image defogging method based on depth estimation assistance
CN113610735A (en) Hybrid noise removing method for infrared image of power equipment
CN112801899A (en) Internal and external circulation driving image blind deblurring method and device based on complementary structure perception
Chen et al. Guided dual networks for single image super-resolution
CN112837220A (en) Method for improving resolution of infrared image and application thereof
CN113362281B (en) Infrared and visible light image fusion method based on WSN-LatLRR
CN115439849A (en) Instrument digital identification method and system based on dynamic multi-strategy GAN network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant