CN109934887B - Medical image fusion method based on improved pulse coupling neural network

Medical image fusion method based on improved pulse coupling neural network

Info

Publication number
CN109934887B
CN109934887B
Authority
CN
China
Prior art keywords: image, follows, frequency, neural network, fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910177917.1A
Other languages
Chinese (zh)
Other versions
CN109934887A (en)
Inventor
陈海鹏
吕颖达
盖迪
申铉京
张宠
李怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN201910177917.1A priority Critical patent/CN109934887B/en
Publication of CN109934887A publication Critical patent/CN109934887A/en
Application granted granted Critical
Publication of CN109934887B publication Critical patent/CN109934887B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a medical image fusion method based on an improved pulse-coupled neural network, which comprises the following steps. Step one: Gamma correction is performed on the multi-modal medical images to enhance their contrast. Step two: the corrected images to be fused are decomposed at multiple scales with the non-downsampled shearlet transform to obtain low-frequency and high-frequency subgraphs. Step three: the low-frequency subgraphs are fused with an improved region-energy algorithm. Step four: the high-frequency subgraphs are fused with an improved pulse-coupled neural network algorithm. Step five: the fused high-frequency and low-frequency subgraphs are reconstructed with the inverse non-downsampled shearlet transform to obtain the final fused image. The invention can effectively fuse multi-modal medical images and improve the accuracy of physicians' diagnoses.

Description

Medical image fusion method based on improved pulse coupling neural network
Technical Field
The invention relates to the technical field of medical image fusion, and in particular to a multi-modal medical image fusion method based on an improved pulse-coupled neural network.
Background
Medical images of different modalities reflect human body information from different angles. Interpreting a single image relies on the physician's spatial reasoning and conjecture to determine the desired information; its accuracy is therefore subjective, and some information may be overlooked. A single medical imaging system can provide only limited information and cannot simultaneously present complete, multi-angle (or multi-modal) information about an organ or tissue site, so it cannot meet medical needs. For example, structural images (CT, MRI, etc.) have high resolution and clearly reflect the anatomical morphology of organ tissue, but they cannot reflect functional changes of the organ; functional images (SPECT, PET, etc.) accurately provide metabolic information about an organ, but their low resolution means they cannot display the anatomical details of the organ or lesion site. Image fusion technology is the best way to solve these problems. It combines the complementary multi-view information contained in structural and functional images into a single image with richer content, so that medical information about anatomical structure, organ function and other aspects of the human body is displayed simultaneously in one image. This lets a physician see the lesion site more clearly and directly, supports accurate judgment, reduces uncertainty, and makes clinical diagnosis and treatment more accurate and complete.
Disclosure of Invention
The invention aims to provide a medical image fusion method based on an improved pulse-coupled neural network, which can fuse multi-modal medical images effectively and accurately and improve the accuracy with which a physician examines a patient's lesions.
The technical scheme provided by the invention is as follows:
a medical image fusion method based on an improved pulse coupled neural network, comprising the steps of:
step one: acquire the fully registered images A and B to be fused and apply Gamma correction;
step two: perform the non-downsampled shearlet transform on the corrected images A and B, decomposing the images into the low-frequency subgraphs {aA, bB} and the high-frequency subgraphs {cA_{s,l}, cB_{s,l}};
step three: fuse the decomposed low-frequency subgraphs {aA, bB} with an improved region-energy algorithm to obtain the low-frequency fusion result aF;
step four: fuse the decomposed high-frequency subgraphs {cA_{s,l}, cB_{s,l}} with an improved pulse-coupled neural network algorithm to obtain the high-frequency fusion results cF_{s,l}, wherein s represents the decomposition layer and l represents the decomposition direction;
step five: apply the inverse non-downsampled shearlet transform to the low-frequency fusion result aF from step three and the high-frequency fusion results cF_{s,l} from step four to obtain the final fused image F.
Preferably, in the first step, gamma is used for correction, and the formula is as follows:
S = I^γ
wherein I is the original image, γ is the gamma coefficient with a value in the range 0 to 2, and S is the Gamma-corrected image.
Preferably, in the second step, a non-downsampled shearlet system is adopted to perform the multi-scale decomposition.
preferably, the third step includes the following steps:
the first step: the detail information is extracted with Sobel operators in eight directions, calculated as follows:
0° direction: [1, 2, 1; 0, 0, 0; -1, -2, -1];
45° direction: [2, 1, 0; 1, 0, -1; 0, -1, -2];
90° direction: [1, 0, -1; 2, 0, -2; 1, 0, -1];
135° direction: [0, -1, -2; 1, 0, -1; 2, 1, 0];
180° direction: [-1, -2, -1; 0, 0, 0; 1, 2, 1];
225° direction: [-2, -1, 0; -1, 0, 1; 0, 1, 2];
270° direction: [-1, 0, 1; -2, 0, 2; -1, 0, 1];
315° direction: [0, 1, 2; -1, 0, 1; -2, -1, 0];
and a second step of: the image is convolved with the eight directional Sobel operators and the result is subtracted from the original image to obtain an image with the detail information removed:
O = I - (I * S_8)
wherein O is the image with the detail information removed, * denotes convolution, S_8 denotes the eight directional Sobel operators, and I * S_8 denotes the original image I convolved with each of the eight directional Sobel operators;
and a third step of: the region energy value of every pixel of the detail-removed image O is calculated as follows:
E(i, j) = Σ_{m=-1}^{1} Σ_{n=-1}^{1} v(m, n) · O(i+m, j+n)²
where (i, j) are the coordinates of the image pixel, E is the current region energy value, and v(m, n) is a 3×3 weighting template whose specific values are given as an image in the original document;
fourth step: the pixel with the largest region energy is selected as the final fusion point of the low-frequency subgraph:
aF(i, j) = aA(i, j), if E_aA(i, j) ≥ E_bB(i, j); otherwise aF(i, j) = bB(i, j)
wherein aF is the final fused pixel, aA is the low-frequency subgraph of image A, bB is the low-frequency subgraph of image B, E_aA represents the region energy value of aA, and E_bB represents the region energy value of bB.
Preferably, in the fourth step, the decomposed high-frequency subgraphs {cA_{s,l}, cB_{s,l}} are fused with the pulse-coupled neural network as follows:
the first step: an improved quantum-behaved particle swarm optimization algorithm is adopted to determine the parameters of the pulse-coupled neural network, with the following operations:
1) A fitness function suited to the algorithm is designed:
f = max(EN + SF + MI + Q^{A/F})
wherein EN is the information entropy of the image, SF is the spatial frequency of the image, MI is the mutual information of the image, and Q^{A/F} is the edge-information retention of the image;
2) The average value C of the best positions of all particles is calculated:
C(t) = (1/N) · Σ_{i=1}^{N} p_i(t-1)
wherein N represents the number of particles, 20, and p_i(t-1) represents the best position of the individual particle after t-1 iterations;
3) A random point pp_i(t) between the individual best position p_i(t-1) and the global best position p_g(t-1) among all particles is calculated:
pp_i(t) = λ·p_i(t-1) + (1-λ)·p_g(t-1)
wherein p_i(t-1) represents the best position of the individual particle, p_g(t-1) represents the global best position among all particles, and λ is a random value between 0 and 1;
4) The particles move and adjust their current direction and position:
x_i(t) = pp_i(t) ± β(t) · |C(t) - x_i(t-1)| · ln(1/μ)
wherein μ is a random value between 0 and 1, and β(t) is:
β(t) = n + (m - n) · (Maxtime - t) / Maxtime
where m and n are constants, here m = 2 and n = 1, and Maxtime, the maximum number of iterations, is 50;
secondly, the pulse-coupled neural network is adopted as the fusion rule for the high-frequency subgraphs, with the following operations:
1) The feedback input of the pulse-coupled neural network is calculated:
F_ij(n) = S_ij
wherein S_ij is the external stimulus; in this method the external stimulus is the high-frequency subgraph;
2) The external (linking) input of the pulse-coupled neural network is calculated:
L_ij(n) = e^{-α_L} · L_ij(n-1) + V_L · Σ_{k,l} W_ijkl · Y_kl(n-1)
wherein α_L is the decay factor of the external input, V_L is the feedback amplification factor, and W_ijkl is the linking weight of the neuron;
3) The internal activity term of the pulse-coupled neural network is calculated:
U_ij(n) = F_ij(n) · (1 + β·L_ij(n))
wherein β is the linking strength;
4) The dynamic threshold of the pulse-coupled neural network is calculated:
θ_ij(n) = e^{-α_θ} · θ_ij(n-1) + V_θ · Y_ij(n-1)
wherein α_θ is the decay factor of the dynamic threshold, V_θ is the amplification factor of the dynamic threshold, and Y_ij is the pulse output of the neuron:
Y_ij(n) = 1, if U_ij(n) > θ_ij(n); otherwise Y_ij(n) = 0;
5) Over the set number of iterations, the point that fires more often is selected as the final fused pixel:
cF_{s,l}(i, j) = cA_{s,l}(i, j), if TCA_{s,l}(i, j) ≥ TCB_{s,l}(i, j); otherwise cF_{s,l}(i, j) = cB_{s,l}(i, j)
wherein cA_{s,l} is the high-frequency subgraph of image A, cB_{s,l} is the high-frequency subgraph of image B, TCA_{s,l} is the firing count of cA_{s,l}, and TCB_{s,l} is the firing count of cB_{s,l}.
The invention has the following beneficial effects. The invention provides a medical image fusion method based on an improved pulse-coupled neural network. First, Gamma correction enhances the contrast of the multi-modal medical images. Second, a non-downsampled shearlet model decomposes each image at multiple scales and in multiple directions into a low-frequency subgraph and several high-frequency subgraphs, providing the basis for accurate fusion in the next step. Third, in the image fusion stage, an improved region-energy algorithm fuses the decomposed low-frequency subgraphs: to keep the low-frequency subgraphs as free of detail information as possible, Sobel operators in eight directions first extract the detail information, and the fusion rule of maximum region energy then fuses the subgraphs. An improved pulse-coupled neural network fuses the decomposed high-frequency subgraphs; the pulse-coupled neural network matches the imaging behavior of the human eye and can fuse high-frequency information accurately, and an improved quantum-behaved particle swarm optimization algorithm determines the network parameters, which solves the problem of manual parameter setting and effectively improves the fusion efficiency of the high-frequency subgraphs. Finally, the inverse non-downsampled shearlet transform reconstructs the fused low-frequency and high-frequency subgraphs to obtain the final fused image. The invention thus achieves high energy preservation and good detail extraction, and can effectively improve the accuracy of medical image fusion.
Drawings
Fig. 1 is a flowchart of a medical image fusion method based on an improved pulse coupled neural network according to the present invention.
Fig. 2 shows the fusion results of a nuclear magnetic resonance T1 image (MR-T1) and a nuclear magnetic resonance T2 image (MR-T2), wherein fig. 2(a) is MR-T1, fig. 2(b) is MR-T2, fig. 2(c) is the fusion result of the boundary finding (BF) method, fig. 2(d) is the fusion result of the dense SIFT (DSIFT) method, fig. 2(e) is the fusion result of the method based on sparse representation and the dual-tree complex wavelet transform (DTCWT-SR), fig. 2(f) is the fusion result of the non-subsampled contourlet transform (NSCT) method, fig. 2(g) is the fusion result of the method combining NSCT, spatial frequency (SF) and the pulse-coupled neural network (PCNN) (NSCT-SF-PCNN), fig. 2(h) is the fusion result of the NSCT-PCNN method, and fig. 2(i) is the fusion result of the present method (Proposed).
Fig. 3 shows the fusion results of a computed tomography (CT) image and a magnetic resonance (MR) image, wherein fig. 3(a) is the CT image, fig. 3(b) is the MR image, fig. 3(c) is the fusion result of the BF method, fig. 3(d) is the fusion result of the DSIFT method, fig. 3(e) is the fusion result of the DTCWT-SR method, fig. 3(f) is the fusion result of the NSCT method, fig. 3(g) is the fusion result of the NSCT-SF-PCNN method, fig. 3(h) is the fusion result of the NSCT-PCNN method, and fig. 3(i) is the fusion result of the present method (Proposed).
Detailed Description
The present invention is described in further detail below with reference to the drawings to enable those skilled in the art to practice the invention by referring to the description.
As shown in FIG. 1, the invention provides a medical image fusion method based on an improved pulse-coupled neural network. The medical images are first enhanced by Gamma correction. The non-downsampled shearlet transform then decomposes each medical image, at multiple scales and in multiple directions, into high-frequency and low-frequency subgraphs. An improved region-energy algorithm fuses the low-frequency subgraphs, effectively removing the detail information that disturbs low-frequency fusion. An improved pulse-coupled neural network fuses the high-frequency subgraphs, which removes the need to set parameters manually. The method comprises the following steps:
step 101: contrast enhancement of the medical image;
contrast enhancement is carried out on the registered multi-mode medical image through a Gamma correction algorithm, and the definition is shown in the following formula:
S = I^γ
wherein I is the original image, γ is the gamma coefficient, here 1.2, and S is the Gamma-corrected image.
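A minimal sketch of this correction in Python follows, assuming a single-channel image that is first normalized to [0, 1] (the function name is illustrative):

```python
import numpy as np

def gamma_correct(image, gamma=1.2):
    """Gamma correction S = I**gamma on an image normalized to [0, 1]."""
    img = image.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # normalize
    return img ** gamma
```

With γ = 1.2, the mapping darkens low intensities and stretches the upper grey levels, increasing contrast in bright structures.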
Step 102: multi-scale decomposition of the enhanced images with the non-downsampled shearlet model.
In order to make the fusion more accurate, each image is converted from the spatial domain to the frequency domain and decomposed, at multiple scales and in multiple directions, into high-frequency and low-frequency subgraphs. The low-frequency subgraph carries the contour information of the original image, and the high-frequency subgraphs carry its detail information.
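NSST implementations are not part of the common Python libraries, so the following sketch only illustrates the contour/detail split that the method relies on, using a Gaussian low-pass filter as a stand-in for one level of the shearlet decomposition (no directional subbands; the function name and sigma are our choices):

```python
import numpy as np
from scipy import ndimage

def low_high_split(image, sigma=2.0):
    """One-level low/high split as an NSST stand-in: the low band keeps the
    contours, the high band keeps the detail. Nothing is downsampled, so both
    bands retain the source resolution, as in the non-downsampled transform."""
    img = image.astype(np.float64)
    low = ndimage.gaussian_filter(img, sigma=sigma)
    high = img - low
    return low, high
```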
step 103: and fusing the low-frequency subgraphs by adopting an improved regional energy algorithm.
The low-frequency subgraph produced by the non-downsampled shearlet decomposition retains the contour information of the image. In medical image processing, contour information is stored in regions of high energy. The method therefore fuses the low-frequency subgraphs with an improved region-energy algorithm, selecting the pixel with the larger region energy as the final fused low-frequency pixel. The decomposed low-frequency subgraph, however, still contains a large amount of detail information that negatively affects fusion. To improve the accuracy of low-frequency fusion, this detail information must first be obtained; it is extracted with Sobel operators in eight directions, calculated as follows:
0° direction: [1, 2, 1; 0, 0, 0; -1, -2, -1];
45° direction: [2, 1, 0; 1, 0, -1; 0, -1, -2];
90° direction: [1, 0, -1; 2, 0, -2; 1, 0, -1];
135° direction: [0, -1, -2; 1, 0, -1; 2, 1, 0];
180° direction: [-1, -2, -1; 0, 0, 0; 1, 2, 1];
225° direction: [-2, -1, 0; -1, 0, 1; 0, 1, 2];
270° direction: [-1, 0, 1; -2, 0, 2; -1, 0, 1];
315° direction: [0, 1, 2; -1, 0, 1; -2, -1, 0];
Secondly, a subgraph with the detail information removed is obtained: the image is convolved with the eight directional Sobel operators and the result is subtracted from the original image:
O = I - (I * S_8)
wherein O is the detail-removed image, * denotes convolution, S_8 denotes the eight directional Sobel operators, and I * S_8 denotes the original image I convolved with each of the eight directional Sobel operators.
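A sketch of this detail-removal step follows. The text leaves implicit how the eight directional responses are combined before the subtraction; since the raw responses of opposite directions would cancel, this sketch averages their absolute values, which is one plausible reading:

```python
import numpy as np
from scipy import ndimage

# The eight directional Sobel kernels listed above (0 deg ... 315 deg).
SOBEL_KERNELS = [
    np.array([[ 1,  2,  1], [ 0,  0,  0], [-1, -2, -1]], float),   # 0
    np.array([[ 2,  1,  0], [ 1,  0, -1], [ 0, -1, -2]], float),   # 45
    np.array([[ 1,  0, -1], [ 2,  0, -2], [ 1,  0, -1]], float),   # 90
    np.array([[ 0, -1, -2], [ 1,  0, -1], [ 2,  1,  0]], float),   # 135
    np.array([[-1, -2, -1], [ 0,  0,  0], [ 1,  2,  1]], float),   # 180
    np.array([[-2, -1,  0], [-1,  0,  1], [ 0,  1,  2]], float),   # 225
    np.array([[-1,  0,  1], [-2,  0,  2], [-1,  0,  1]], float),   # 270
    np.array([[ 0,  1,  2], [-1,  0,  1], [-2, -1,  0]], float),   # 315
]

def remove_detail(I):
    """O = I - (I * S_8): subtract the combined eight-direction Sobel response
    (here taken as the mean of the absolute directional responses)."""
    I = I.astype(np.float64)
    responses = [np.abs(ndimage.convolve(I, k, mode="reflect"))
                 for k in SOBEL_KERNELS]
    return I - np.mean(responses, axis=0)
```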
Again, the region energy value of every pixel of the detail-removed image O is calculated as follows:
E(i, j) = Σ_{m=-1}^{1} Σ_{n=-1}^{1} v(m, n) · O(i+m, j+n)²
where (i, j) are the coordinates of the pixel, E is the current region energy value, and v(m, n) is a 3×3 weighting template whose specific values are given as an image in the original document.
Finally, the pixel with the largest region energy is selected as the final fusion point of the low-frequency subgraph:
aF(i, j) = aA(i, j), if E_aA(i, j) ≥ E_bB(i, j); otherwise aF(i, j) = bB(i, j)
wherein aF is the final fused pixel, aA is the low-frequency subgraph of image A, bB is the low-frequency subgraph of image B, E_aA represents the region energy value of aA, and E_bB represents the region energy value of bB.
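Combining the last two sub-steps, a sketch of the low-frequency fusion rule follows. The 3×3 template v appears only as an image in the original, so the classic normalized region-energy weights are assumed here; O_A and O_B are the detail-removed versions of the two low-frequency subgraphs produced by the previous step:

```python
import numpy as np
from scipy import ndimage

# Assumed 3x3 weighting template v(m, n); the original gives its entries
# only as an image, so the classic normalized weights are used.
V = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0

def region_energy(O):
    """E(i,j) = sum_m sum_n v(m,n) * O(i+m, j+n)^2 over each 3x3 window."""
    return ndimage.convolve(O.astype(np.float64) ** 2, V, mode="reflect")

def fuse_low(aA, bB, O_A, O_B):
    """aF = aA where E_aA >= E_bB, else bB (per-pixel selection)."""
    return np.where(region_energy(O_A) >= region_energy(O_B), aA, bB)
```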
Step 104: fusing the high-frequency subgraphs with the improved pulse-coupled neural network.
The high-frequency subgraphs of an image carry its texture, edges and other detail information. The pulse-coupled neural network produces binary pulse outputs that can effectively capture this detail. A conventional pulse-coupled neural network model, however, requires many parameters to be set, in most cases manually from experience, which introduces errors into the fusion process. The invention combines an improved quantum-behaved particle swarm optimization algorithm with the pulse-coupled neural network to obtain a high-frequency fusion rule that sets its parameters automatically. High-frequency fusion is divided into two parts: the first part determines the parameters of the pulse-coupled neural network with the improved quantum-behaved particle swarm optimization algorithm; the second part determines the final fused pixels with the pulse-coupled neural model.
The specific steps of the first part are as follows:
the first step: initialize the population size N (N = 20); determine the dimension D (D = 4), whose components are the parameter values to be optimized for the pulse-coupled neural network, namely α_L, V_L, α_θ and V_θ; determine the maximum number of iterations Maxtime (Maxtime = 50); randomly initialize the individual best position p_i of each particle and the global best position p_g among all particles.
And a second step of: the fitness function of the algorithm is determined, and the formula is as follows:
f = max(EN + SF + MI + Q^{A/F})
wherein EN is the information entropy of the image, SF is the spatial frequency of the image, MI is the mutual information of the image, and Q^{A/F} is the edge-information retention of the image;
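Of the four terms in this fitness function, EN and SF have simple closed forms; a sketch of their standard definitions follows, assuming an 8-bit grey-level image (MI and Q^{A/F} also need the source images and an edge detector, so they are omitted for brevity):

```python
import numpy as np

def entropy(img):
    """EN: Shannon entropy of the 8-bit grey-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), from row and column first differences."""
    img = img.astype(np.float64)
    rf2 = np.mean(np.diff(img, axis=1) ** 2)  # row frequency squared
    cf2 = np.mean(np.diff(img, axis=0) ** 2)  # column frequency squared
    return float(np.sqrt(rf2 + cf2))
```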
and a third step of: updating the optimal position p of individual particles i . If the value of the fitness function is greater than the current p i Then the value of the fitness function is chosen as the optimal position p of the individual particle i The specific formula is as follows:
p i (t)=f,if p i (t)<f
fourth step: updating global optimum position p g . Selecting all p i The maximum value of (2) is the global optimum position p g The specific formula is as follows:
f 1 =max{p i }
p g =f 1
fifth step: the average value C of the optimal positions of all the individual particles is calculated as follows:
Figure GDA0004076072610000091
wherein N represents the number of particles of 20, p i (t-1) represents the optimal position of the individual particles after t-1 iterations.
Sixth step: a random point pp_i(t) between the individual best position p_i(t-1) and the global best position p_g(t-1) among all particles is calculated:
pp_i(t) = λ·p_i(t-1) + (1-λ)·p_g(t-1)
wherein p_i(t-1) is the individual best position, p_g(t-1) is the global best position of all particles, and λ is a random value between 0 and 1.
Seventh step: the particles move and adjust the direction and position of the current movement:
x_i(t) = pp_i(t) ± β(t) · |C(t) - x_i(t-1)| · ln(1/μ)
wherein μ is a random value between 0 and 1, and β(t) is:
β(t) = n + (m - n) · (Maxtime - t) / Maxtime
where m and n are constants, here m = 2 and n = 1, and the maximum number of iterations Maxtime is 50.
Eighth step: the four parameters of the pulse-coupled neural network are finally obtained, namely the decay factor α_L of the external input, the feedback amplification factor V_L, the decay factor α_θ of the dynamic threshold, and the amplification factor V_θ of the dynamic threshold. Substituting them into the pulse-coupled neural network avoids the problem of manually setting parameters and improves the fusion efficiency.
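A compact sketch of this parameter search follows, using the update equations as reconstructed above. The search range for the four parameters is an assumption, and `fitness` is a caller-supplied function that fuses the subgraphs with the candidate parameters and returns f = EN + SF + MI + Q^{A/F}:

```python
import numpy as np

def qpso(fitness, dim=4, n_particles=20, maxtime=50, m=2.0, n=1.0, seed=0):
    """Quantum-behaved PSO for (alpha_L, V_L, alpha_theta, V_theta)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.01, 1.0, (n_particles, dim))     # assumed search range
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    for t in range(1, maxtime + 1):
        g = pbest[np.argmax(pbest_f)]                  # global best p_g
        beta = n + (m - n) * (maxtime - t) / maxtime   # decreases from m to n
        C = pbest.mean(axis=0)                         # mean best position
        lam = rng.random((n_particles, dim))
        pp = lam * pbest + (1.0 - lam) * g             # random points pp_i
        mu = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) < 0.5, 1.0, -1.0)
        x = pp + sign * beta * np.abs(C - x) * np.log(1.0 / mu)
        x = np.clip(x, 1e-3, None)                     # keep parameters positive
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
    return pbest[np.argmax(pbest_f)]
```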
The specific steps of the second part are as follows:
the first step: the feedback input of the pulse-coupled neural network is calculated as follows:
F_ij(n) = S_ij
wherein S_ij is the external stimulus; in this method it is the decomposed high-frequency subgraph.
And a second step of: the external input of the pulse-coupled neural network is calculated as follows:
L_ij(n) = e^{-α_L} · L_ij(n-1) + V_L · Σ_{k,l} W_ijkl · Y_kl(n-1)
wherein α_L is the decay factor of the external input and V_L is the feedback amplification factor, both obtained in the previous part, and W_ijkl is the linking weight of the neuron, whose specific values are given as an image in the original document.
And a third step of: the internal activity term of the pulse-coupled neural network is calculated as follows:
U_ij(n) = F_ij(n) · (1 + β·L_ij(n))
wherein β is the linking strength, set to 0.1.
Fourth step: the dynamic threshold of the pulse-coupled neural network is calculated as follows:
θ_ij(n) = e^{-α_θ} · θ_ij(n-1) + V_θ · Y_ij(n-1)
wherein α_θ is the decay factor of the dynamic threshold and V_θ is the amplification factor of the dynamic threshold, both obtained in the previous part, and Y_ij is the pulse output of the neuron:
Y_ij(n) = 1, if U_ij(n) > θ_ij(n); otherwise Y_ij(n) = 0.
fifth step: and selecting the point with multiple ignition times as the final fused pixel point through the iteration times.
cF_{s,l}(i, j) = cA_{s,l}(i, j), if TCA_{s,l}(i, j) ≥ TCB_{s,l}(i, j); otherwise cF_{s,l}(i, j) = cB_{s,l}(i, j)
Wherein cA_{s,l} is the high-frequency subgraph of image A, cB_{s,l} is the high-frequency subgraph of image B, TCA_{s,l} is the ignition (firing) count of cA_{s,l}, and TCB_{s,l} is the ignition (firing) count of cB_{s,l}.
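The second part can then be sketched as follows. The linking-weight matrix W appears only as an image in the original, so the common inverse-distance 3×3 kernel is assumed, and the iteration count and stimulus normalization are likewise our assumptions:

```python
import numpy as np
from scipy import ndimage

# Assumed linking weights: inverse Euclidean distance to the 8 neighbours.
W = np.array([[0.707, 1.0, 0.707],
              [1.0,   0.0, 1.0  ],
              [0.707, 1.0, 0.707]])

def fire_counts(S, alpha_L, V_L, alpha_theta, V_theta, beta=0.1, iters=200):
    """Run the PCNN on stimulus S (a high-frequency subgraph scaled to [0, 1])
    and count how often each neuron fires."""
    L = np.zeros_like(S)
    Y = np.zeros_like(S)
    theta = np.ones_like(S)
    counts = np.zeros_like(S)
    for _ in range(iters):
        F = S                                                 # feedback input
        L = np.exp(-alpha_L) * L + V_L * ndimage.convolve(Y, W, mode="constant")
        U = F * (1.0 + beta * L)                              # internal activity
        Y = (U > theta).astype(np.float64)                    # pulse output
        theta = np.exp(-alpha_theta) * theta + V_theta * Y    # dynamic threshold
        counts += Y
    return counts

def fuse_high(cA, cB, params):
    """cF = cA where TCA >= TCB, else cB; params = (alpha_L, V_L, alpha_theta,
    V_theta) from the parameter search in the first part."""
    scale = lambda c: np.abs(c) / (np.abs(c).max() + 1e-12)
    TCA = fire_counts(scale(cA), *params)
    TCB = fire_counts(scale(cB), *params)
    return np.where(TCA >= TCB, cA, cB)
```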
Although embodiments of the present invention are disclosed above, the invention is not limited to the specific details and embodiments shown and described. It is well suited to the various fields of use that will be readily apparent to those skilled in the art, and further modifications may readily be made without departing from the general concepts defined by the claims and their equivalents; accordingly, the invention is not limited to the specific details and illustrations shown and described herein.
The following examples are provided by the inventors to further illustrate the technical aspects of the present invention.
Example 1:
according to the technical scheme of the invention, the two completely registered medical images are fused. The method is compared with other methods, including image fusion (NSCT) based on a multi-scale transformation method, fusion method (DTCWT-SR) based on sparse representation and multi-scale transformation, fusion method (NSCT-PCNN, NSCT-SF-PCNN) based on a pulse coupled neural network and multi-scale transformation, image fusion (DSIFT) based on dense SIFT and image fusion (BF) based on boundary discovery. All parameter settings of the comparison experiment are default values.
Fig. 2(a) is the nuclear magnetic resonance T1 image (MR-T1), fig. 2(b) the nuclear magnetic resonance T2 image (MR-T2), fig. 2(c) the fusion result of the BF method, fig. 2(d) of the DSIFT method, fig. 2(e) of the DTCWT-SR method, fig. 2(f) of the NSCT method, fig. 2(g) of the NSCT-PCNN method, fig. 2(h) of the NSCT-SF-PCNN method, and fig. 2(i) of the present method (Proposed). The intensity and contrast of figs. 2(f), (g) and (h) are clearly lower in some areas than the others, indicating that the NSCT, NSCT-PCNN and NSCT-SF-PCNN algorithms preserve less information. Fig. 2(d) shows blocking artifacts in the fused image, which greatly degrade the fusion and are disadvantageous for medical diagnosis. In figs. 2(c) and (e) the fused image comes mainly from source image (a) and lacks information from source image (b), meaning that the BF and DTCWT-SR algorithms score lower in structural similarity than the other algorithms. To ease inspection of the fusion quality, the lower-left corner of each image shows a locally magnified view. As seen in fig. 2(i), the black box marks a skeletal region in which the present method achieves higher contrast and intensity than the other algorithms. The proposed algorithm is thus superior to the others in structural similarity, intensity and contrast.
The invention is compared not only in subjective vision but also on objective evaluation indexes. Comparative experiments were performed on four evaluation indexes, Q_0, Q_W, EN and Q_TE, and the experimental results are shown in Table 1.
Q_0 represents the degree of distortion of the fused image, Q_W represents the degree to which the fused image transfers significant information from the source images, EN represents the information entropy of the image, and Q_TE represents the degree of dependence between two discrete random variables. The larger the values of the four evaluation indexes, the better the fusion effect. Table 1 shows that the fusion effect of the present method is better than that of the other algorithms.
Table 1. Evaluation of MR-T1 and MR-T2 fused image quality (the table values are given as an image in the original document).
Example 2:
figure 3 shows a patient suffering from a cerebrovascular disease whose head is challenged or stroke. The patient only has writing function and loses reading function. The black box indicates the eyeball position of the brain, and the lower left corner of the image is an enlarged effect diagram. Fig. 3 (a) is a CT image, fig. 3 (b) is an MR image, fig. 3 (c) is a fusion result diagram of BF method, fig. 3 (d) is a fusion result diagram of DSIFT method, fig. 3 (e) is a fusion result diagram of DTCWT-SR method, fig. 3 (f) is a fusion result diagram of NSCT method, fig. 3 (g) is a fusion result diagram of NSCT-PCNN method, fig. 3 (h) is a fusion result diagram of NSCT-SF-PCNN method, and fig. 3 (i) is a fusion result diagram of the present method (Proposed). From the fusion result, a fused image of the BF method in fig. 3 (c) can be observed, which mainly contains information in the source image (a), lacking information of the source image (b). In fig. 3 (d), there is a blocking effect, the contrast is low, and the detail extraction capability is weak. From a visual effect perspective, the algorithms herein are superior to other algorithms in contrast and detail extraction.
The fused images were also analyzed from a quantitative point of view, as shown in Table 2.
Table 2. Evaluation of CT and MR fused image quality (the table values are given as an image in the original document).
As can be seen from Table 2, the method is superior to the other six fusion methods on all four evaluation indexes, which indicates that the structural similarity, detail processing, contrast and intensity of its images are superior to those of the other fusion algorithms. The algorithm combines a large amount of source-image information from two different modalities and presents rich detail features, supporting accurate diagnosis by physicians.
The foregoing examples merely illustrate the present invention and are not intended to limit its scope; all designs identical or similar to the present invention fall within the protection scope of the present invention.

Claims (2)

1. A medical image fusion method based on an improved pulse coupled neural network, comprising the steps of:
step one: gamma correction is carried out on the two completely registered medical images A and B to enhance the contrast of the images to be fused;
step two: the enhanced images A and B are subjected to multi-scale and multi-directional decomposition into a low-frequency sub-image { aA, bB } and a high-frequency sub-image { cA by adopting non-downsampled shear wave transformation s,l ,cB s,l -wherein s represents the number of layers decomposed and l represents the direction of decomposition;
step three: an improved region-energy algorithm is adopted to fuse the decomposed low-frequency subgraphs {aA, bB} to obtain the low-frequency fusion result aF, specifically comprising the following steps:
(one): the detail information is extracted with Sobel operators in eight directions, calculated as follows:
0° direction: [1, 2, 1; 0, 0, 0; -1, -2, -1];
45° direction: [2, 1, 0; 1, 0, -1; 0, -1, -2];
90° direction: [1, 0, -1; 2, 0, -2; 1, 0, -1];
135° direction: [0, -1, -2; 1, 0, -1; 2, 1, 0];
180° direction: [-1, -2, -1; 0, 0, 0; 1, 2, 1];
225° direction: [-2, -1, 0; -1, 0, 1; 0, 1, 2];
270° direction: [-1, 0, 1; -2, 0, 2; -1, 0, 1];
315° direction: [0, 1, 2; -1, 0, 1; -2, -1, 0];
(two): the image is convolved with the eight directional Sobel operators and the result is subtracted from the original image to obtain an image with the detail information removed:
O = I - (I * S_8)
wherein O is the image with the detail information removed, * denotes convolution, S_8 denotes the eight directional Sobel operators, and I * S_8 denotes the original image I convolved with each of the eight directional Sobel operators;
(three): the region energy value of every pixel of the detail-removed image O is calculated as follows:
E(i, j) = Σ_{m=-1}^{1} Σ_{n=-1}^{1} v(m, n) · O(i+m, j+n)²
wherein (i, j) are the coordinates of the image pixel, E is the current region energy value, and v is a 3×3 weighting template whose specific values are given as an image in the original document;
(four): the pixel with the largest region energy is selected as the final fusion point of the low-frequency subgraph:
aF(i, j) = aA(i, j), if E_aA(i, j) ≥ E_bB(i, j); otherwise aF(i, j) = bB(i, j)
wherein aF is the final fused pixel, aA is the low-frequency subgraph of image A, bB is the low-frequency subgraph of image B, E_aA represents the region energy value of aA, and E_bB represents the region energy value of bB;
step four: the improved pulse-coupled neural network is adopted to fuse the decomposed high-frequency subgraphs {cA_{s,l}, cB_{s,l}} to obtain the high-frequency fusion results cF_{s,l}, specifically comprising the following steps:
first, an improved quantum-behaved particle swarm optimization algorithm is adopted to determine the parameters of the pulse-coupled neural network, as follows:
1) A fitness function suited to the algorithm is designed:
f = max(EN + SF + MI + Q^{A/F})
wherein EN is the information entropy of the image, SF is the spatial frequency of the image, MI is the mutual information of the image, and Q^{A/F} is the edge-information retention of the image;
2) The average value C of the best positions of all particles is calculated:
C(t) = (1/N) · Σ_{i=1}^{N} p_i(t-1)
wherein N represents the number of particles, 20, and p_i(t-1) represents the best position of the individual particle after t-1 iterations;
3) A random point pp_i(t) between the individual best position p_i(t-1) and the global best position p_g(t-1) among all particles is calculated:
pp_i(t) = λ·p_i(t-1) + (1-λ)·p_g(t-1)
wherein p_i(t-1) represents the best position of the individual particle, p_g(t-1) represents the global best position among all particles, and λ is a random value between 0 and 1;
4) The particles move and adjust their current direction and position:
x_i(t) = pp_i(t) ± β(t) · |C(t) - x_i(t-1)| · ln(1/μ)
wherein μ is a random value between 0 and 1, and β(t) is:
β(t) = n + (m - n) · (Maxtime - t) / Maxtime
where m and n are constants, here m = 2 and n = 1, and Maxtime, the maximum number of iterations, is 50;
secondly, the pulse-coupled neural network is adopted as the fusion rule for the high-frequency subgraphs, with the following operations:
1) The feedback input of the pulse-coupled neural network is calculated:
F_ij(n) = S_ij
wherein S_ij is the external stimulus; in this method the external stimulus is the high-frequency subgraph;
2) The external (linking) input of the pulse-coupled neural network is calculated:
L_ij(n) = e^{-α_L} · L_ij(n-1) + V_L · Σ_{k,l} W_ijkl · Y_kl(n-1)
wherein α_L is the decay factor of the external input, V_L is the feedback amplification factor, and W_ijkl is the linking weight of the neuron;
3) The internal activity term of the pulse-coupled neural network is calculated:
U_ij(n) = F_ij(n) · (1 + β·L_ij(n))
wherein β is the linking strength;
4) The dynamic threshold of the pulse-coupled neural network is calculated:
θ_ij(n) = e^{-α_θ} · θ_ij(n-1) + V_θ · Y_ij(n-1)
wherein α_θ is the decay factor of the dynamic threshold, V_θ is the amplification factor of the dynamic threshold, and Y_ij is the pulse output of the neuron:
Y_ij(n) = 1, if U_ij(n) > θ_ij(n); otherwise Y_ij(n) = 0;
5) Over the set number of iterations, the point that fires more often is selected as the final fused pixel:
cF_{s,l}(i, j) = cA_{s,l}(i, j), if TCA_{s,l}(i, j) ≥ TCB_{s,l}(i, j); otherwise cF_{s,l}(i, j) = cB_{s,l}(i, j)
wherein cA_{s,l} is the high-frequency subgraph of image A, cB_{s,l} is the high-frequency subgraph of image B, TCA_{s,l} is the firing count of cA_{s,l}, and TCB_{s,l} is the firing count of cB_{s,l};
step five: the low-frequency fusion result aF obtained in step three and the high-frequency fusion results cF_{s,l} obtained in step four are subjected to the inverse non-downsampled shearlet transform and reconstructed to obtain the final fused image F.
2. The method of claim 1, wherein the Gamma correction in step one uses the formula:
S = I^γ
wherein I is the original image, γ is the gamma coefficient with a value in the range 0 to 2, and S is the Gamma-corrected image.
CN201910177917.1A 2019-03-11 2019-03-11 Medical image fusion method based on improved pulse coupling neural network Active CN109934887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910177917.1A CN109934887B (en) 2019-03-11 2019-03-11 Medical image fusion method based on improved pulse coupling neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910177917.1A CN109934887B (en) 2019-03-11 2019-03-11 Medical image fusion method based on improved pulse coupling neural network

Publications (2)

Publication Number Publication Date
CN109934887A CN109934887A (en) 2019-06-25
CN109934887B true CN109934887B (en) 2023-05-30

Family

ID=66986765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910177917.1A Active CN109934887B (en) 2019-03-11 2019-03-11 Medical image fusion method based on improved pulse coupling neural network

Country Status (1)

Country Link
CN (1) CN109934887B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956180B (en) * 2019-07-04 2021-04-13 中联重科股份有限公司 Detection method and system of counterweight weight, acquisition method and system and crane
CN110415198B (en) * 2019-07-16 2023-07-04 南京信息工程大学 Medical image fusion method based on Laplacian pyramid and parameter self-adaptive pulse coupling neural network
CN112330638B (en) * 2020-11-09 2023-06-16 苏州大学 Retina OCT (optical coherence tomography) image horizontal registration and image enhancement method
CN113284079B (en) * 2021-05-27 2023-02-28 山东第一医科大学(山东省医学科学院) Multi-modal medical image fusion method
CN113240616A (en) * 2021-05-27 2021-08-10 云南大学 Brain medical image fusion method and system
CN113421200A (en) * 2021-06-23 2021-09-21 中国矿业大学(北京) Image fusion method based on multi-scale transformation and pulse coupling neural network
CN113506307B (en) * 2021-06-29 2022-05-27 吉林大学 Medical image segmentation method for improving U-Net neural network based on residual connection
CN114004343B (en) * 2021-12-31 2022-10-14 之江实验室 Shortest path obtaining method and device based on memristor pulse coupling neural network


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049895A (en) * 2012-12-17 2013-04-17 华南理工大学 Multimode medical image fusion method based on translation constant shear wave transformation
WO2015070634A1 (en) * 2013-11-15 2015-05-21 吴一兵 Life maintenance mode, brain inhibition method and personal health information platform
CN103985105A (en) * 2014-02-20 2014-08-13 江南大学 Contourlet domain multi-modal medical image fusion method based on statistical modeling
CN107977926A (en) * 2017-12-01 2018-05-01 新乡医学院 A kind of different machine brain phantom information fusion methods of PET/MRI for improving neutral net

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Zhuqing Jiao et al. "Fusion of infrared and visible light images using region energy and approach degree." ICIC Express Letters, Vol. 4, No. 2, pp. 583-588, April 30, 2010. *
Tian Juanxiu et al. "Medical image fusion method based on the NSST transform and PCNN." Chinese Journal of Medical Physics, Vol. 35, No. 8, pp. 914-920, August 25, 2018. *
Cao Yiqin et al. "Image fusion method combining NSST-based CS with regional characteristics." Computer Engineering and Applications, Vol. 54, No. 20, pp. 190-196, December 1, 2017. *
Gao Yuan et al. "Medical image fusion based on compressed sensing and adaptive PCNN." Computer Engineering, Vol. 44, No. 9, pp. 224-229, September 15, 2018. *
Chen Guangqiu et al. "Image fusion in the FDST domain based on image quality evaluation parameters." Journal of Optoelectronics·Laser, Vol. 24, No. 11, pp. 2240-2248, November 15, 2013. *
Yu Wangyang et al. "Research on image fusion algorithms based on wavelet transform." Transactions of Beijing Institute of Technology, Vol. 34, No. 12, pp. 1262-1266, December 15, 2014. *
Wang Zhilai. "Research on multi-source remote sensing image fusion technology." China Masters' Theses Full-text Database, Information Science and Technology, No. 3, I140-1020, March 15, 2018. *

Also Published As

Publication number Publication date
CN109934887A (en) 2019-06-25

Similar Documents

Publication Publication Date Title
CN109934887B (en) Medical image fusion method based on improved pulse coupling neural network
Yang et al. Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss
Yang et al. CT image denoising with perceptive deep neural networks
Rajalingam et al. Hybrid multimodality medical image fusion technique for feature enhancement in medical diagnosis
Asha et al. Multi-modal medical image fusion with adaptive weighted combination of NSST bands using chaotic grey wolf optimization
Kabade et al. Segmentation of brain tumour and its area calculation in brain MR images using K-mean clustering and fuzzy C-mean algorithm
Nie et al. Medical image synthesis with context-aware generative adversarial networks
CN110827216A (en) Multi-generator generation countermeasure network learning method for image denoising
CN111178369B (en) Medical image recognition method and system, electronic equipment and storage medium
Rajalingam et al. A novel approach for multimodal medical image fusion using hybrid fusion algorithms for disease analysis
Gai et al. Medical image fusion via PCNN based on edge preservation and improved sparse representation in NSST domain
Shabanzade et al. Combination of wavelet and contourlet transforms for PET and MRI image fusion
Panigrahy et al. Parameter adaptive unit-linking pulse coupled neural network based MRI–PET/SPECT image fusion
CN110660063A (en) Multi-image fused tumor three-dimensional position accurate positioning system
CN115100093A (en) Medical image fusion method based on gradient filtering
El-Shafai et al. An efficient medical image deep fusion model based on convolutional neural networks
CN109389567B (en) Sparse filtering method for rapid optical imaging data
Irshad et al. Gradient compass-based adaptive multimodal medical image fusion
Chen et al. A C-GAN denoising algorithm in projection domain for micro-CT
Lepcha et al. Multimodal medical image fusion based on pixel significance using anisotropic diffusion and cross bilateral filter
CN115018728A (en) Image fusion method and system based on multi-scale transformation and convolution sparse representation
Muthiah et al. Fusion of MRI and PET images using deep learning neural networks
Rao et al. Deep learning-based medical image fusion using integrated joint slope analysis with probabilistic parametric steered image filter
CN112750097B (en) Multi-modal medical image fusion based on multi-CNN combination and fuzzy neural network
Cao et al. Medical image fusion based on GPU accelerated nonsubsampled shearlet transform and 2D principal component analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant