CN112215922A - FT domain medical image fusion method - Google Patents
FT domain medical image fusion method
- Publication number
- CN112215922A (application CN202011131840.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- fusion
- band
- low
- fused
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
- G06T5/70—
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
Abstract
The invention discloses a medical image fusion method in the FT (Fourier transform) domain, comprising the following steps: perform the FT on all medical source images to be fused, obtaining from each source image 1 low-frequency sub-band image and 8 high-frequency sub-band images; complete the low-frequency and high-frequency sub-band image fusion with the GFRW model and the SF model, respectively; and apply the inverse FT to the fused sub-band images to obtain the final fused image. While preserving the main-body information of the medical source images to be fused, the fused image obtained by the method successfully extracts the detail and edge information of the source images and merges it into one image, and the resulting image achieves a very good subjective visual effect and objective evaluation results.
Description
Technical Field
The invention relates to medical image information processing, in particular to a medical image fusion method in the FT (Fourier transform) domain based on a GFRW model and an SF (spatial frequency) model.
Background
At present, medical images have become an important basis for clinical diagnosis and treatment planning. However, different imaging sensors have their own limitations. For example, CT images depict bone and similar structures well but are only moderately capable of detecting soft tissue; MRI images provide high-resolution anatomical information for soft tissue but are less sensitive than CT in regions such as bone. In addition, functional imaging techniques such as PET and SPECT can reflect the body's metabolic information and are therefore of great significance for diagnosing vascular diseases and detecting tumors, but their spatial resolution is generally low. Against this background, effectively fusing the images obtained by different imaging sensors can provide doctors and patients with richer and more accurate pathological information, helping doctors make accurate judgments and improving the cure rate of diseases.
Disclosure of Invention
The invention mainly aims to provide a medical image fusion method in the FT domain, which addresses the practical problem that existing medical images come in many modalities, each with its own limitations.
The technical scheme adopted by the invention is as follows: a medical image fusion method of FT domain includes the following steps:
Step 1: performing the FT on all medical source images to be fused; after L-level scale decomposition, each source image yields 1 low-frequency sub-band image and 8 × L high-frequency sub-band images;
Step 2: calculating the ISML corresponding to each pixel point in the low-frequency sub-band images; obtaining the initial fusion decision maps; smoothing the initial fusion decision maps at three different scales with the GF model to obtain the smoothed fusion decision maps; further processing the smoothed fusion decision maps to obtain the final fusion decision map; completing the fusion of the low-frequency sub-band images;
Step 3: processing the high-frequency sub-band image coefficients; calculating the SF value of each pixel point in the high-frequency sub-band images; completing the fusion of the high-frequency sub-band images;
Step 4: applying the inverse FT (inverse Fourier transform) to the fused high-frequency and low-frequency sub-band images to obtain the final fused image F.
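The four steps above can be sketched in Python. The patent's FT decomposition into 1 low-frequency and 8 × L directional high-frequency sub-bands is not fully specified in this text, so a plain `numpy.fft` split into a central low-pass block and a high-pass remainder stands in for it; `decompose` and `reconstruct` are illustrative names, not the patent's.

```python
import numpy as np

def decompose(img):
    """Stand-in for the patent's FT decomposition: split the spectrum into
    one low-frequency block and one high-frequency remainder (the actual
    method uses 1 low-frequency and 8*L directional high-frequency sub-bands)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = F.shape
    low_mask = np.zeros((h, w), dtype=bool)
    low_mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = True  # central block
    return F * low_mask, F * ~low_mask   # low band, high band

def reconstruct(low, high):
    """Step 4: inverse FT of the (fused) sub-bands."""
    return np.real(np.fft.ifft2(np.fft.ifftshift(low + high)))

# Round trip: decomposing and recombining recovers the image.
img = np.random.default_rng(0).random((16, 16))
low, high = decompose(img)
```

In the full method, the two bands of each source image would be fused by the GFRW and SF rules (steps 2 and 3) before `reconstruct` is called.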
Further, step 1 comprises: inputting all medical source images to be fused and applying the FT to each, where the number of scale-decomposition levels is L and (l, k) denotes the direction-decomposition index at scale l, with 1 ≤ l ≤ L and 1 ≤ k ≤ 8; after the FT, each medical source image to be fused yields 1 low-frequency sub-band image and 8 high-frequency sub-band images.
Further, the calculating the ISML corresponding to each pixel point in the low-frequency subband image in step 2 includes:
calculating the ISML corresponding to each pixel point in the low-frequency subband image, wherein the mathematical expression is as follows:
accordingly, the ISML value corresponding to each pixel point can be obtained by calculating the following equation (2):
where N and T represent the window radius and threshold, respectively. N is a positive odd number, T is 0; formula (2) can be rewritten as:
determining the scale in the ISML calculation process as 3 levels, and setting parameters N corresponding to the 3 levels to be 5,9 and 17 respectively; the ISML value when each pixel point corresponds to different N values can be further rewritten as:
ISML_n^N(i,j) = ISML(I_n(i,j), N),  n = 1, 2, ..., Num,  N = 5, 9, 17    (4)
where (i, j) denotes the position of a pixel point in the image and Num is the number of medical source images to be fused.
Further, the obtaining the initial fusion decision map in step 2 includes:
assuming that two medical source images to be fused are provided, the initial fusion decision mapping chart corresponding to the low-frequency subband image is obtained by the following mathematical expression:
The values of the parameter N are selected as 5, 9 and 17, respectively, and the scale parameter s ranges over the integers in the interval [1, 3]. According to formula (5), if the ISML value of a pixel point in the first image is greater than or equal to the ISML value of the pixel point at the same position in the second image, the corresponding element of the initial fusion decision map map_s is set to 1; otherwise it is set to 0.
Further, in the step 2, smoothing the initial fusion decision maps at three different scales by using a GF model, respectively, and obtaining a smoothed fusion decision map includes:
smoothing the initial fusion decision maps at three different scales by adopting the GF model, thereby obtaining the smoothed fusion decision maps:
GFmap_s(i,j) = GF(I_1(i,j), map_s, r_s, ε_s),  s ∈ [1, 3]    (6)
where GF is the guided-filter function, I_1 is the low-frequency sub-band image corresponding to a medical source image to be fused, called the guide image, and r_s and ε_s denote the neighborhood size and the regularization parameter, respectively; the invention sets r_s = {5, 11, 19} and ε_s = {0.01, 0.001, 0.0001}.
Within a filter window ω_k of radius r_s in the guided-filter function GF, the guide image I_1 and the smoothed fusion decision map GFmap_s(i,j) satisfy the following local linear relationship:
The coefficients a_k and b_k are calculated as:
where μ_k and σ_k^2 denote the mean and variance of the guide image I_1 within the filter window ω_k, |ω| is the number of pixel points in ω_k, p̄_k is the mean of the filter input image within ω_k, and the regularization parameter ε_s prevents a_k from taking too large a value and falling into a local optimum.
Further, the step 2 of further processing the smoothed fusion decision map to obtain a final fusion decision map includes:
The smoothed fusion decision map GFmap_s(i,j) is further processed to obtain the final fusion decision map:
Formula (11) simplifies the smoothed fusion decision map GFmap_s(i,j), marking the values that are greater than or equal to 0.75 or less than or equal to 0.25. If the value of GFmap_s(i,j) is greater than or equal to 0.75, the corresponding element in the final fusion decision map remains unchanged and is still denoted GFmap_s(i,j); if GFmap_s(i,j) is less than or equal to 0.25, the corresponding element in the final fusion decision map is set to 1 - GFmap_s(i,j); if neither condition holds, the corresponding element in the final fusion decision map is set to 0.
Further, the step 2 of completing the fusion of the low-frequency subband images includes:
three matrices Fmap derived from equation (11)s(i,j),s∈[1,3]Obtaining the final fusion decision map Fmaps(i,j):
Fmap(i,j)=max[Fmap1(i,j),Fmap2(i,j),Fmap3(i,j)] (12)
The fusion of the low-frequency sub-band images is finally completed through formula (13):
F_L(i,j) = Fmap(i,j) * A_L(i,j) + (1 - Fmap(i,j)) * B_L(i,j)    (13)
where A_L(i,j), B_L(i,j) and F_L(i,j) denote the low-frequency sub-band image of source image A, the low-frequency sub-band image of source image B, and the fused low-frequency sub-band image, respectively.
Further, the processing the high-frequency subband image coefficients in step 3 includes:
the coefficients in the high-frequency sub-band images are subjected to absolute-value processing, i.e. abs(X_H(i,j)^(l,k)), where X_H denotes a high-frequency sub-band image of the medical source image X to be fused, X is A or B, (l, k) is the direction-decomposition index at scale l, and l denotes the l-th scale decomposition;
the calculating the SF value of each pixel point in the high-frequency subband image in the step 3 includes:
calculating SF values of all pixel points in each high-frequency sub-band image:
wherein M and N represent the width and height of the medical source image respectively;
the step 3 of completing the fusion of the high-frequency sub-band images comprises the following steps:
the high-frequency subband image finally completes the fusion process through the formula (15):
still further, the step 4 includes:
applying the inverse FT to the low-frequency sub-band fused image F_L(i,j) and the high-frequency sub-band fused images F_H(i,j)^(l,k) and integrating them to obtain the final fused image F, the corresponding mathematical expression being:
the invention has the advantages that:
compared with a single type of medical source image, the method provided by the invention has the advantages that the fusion image obtained by the method can successfully extract the detail information and the edge information of the source image and be fused into one image on the basis of keeping the main body information of the medical source image to be fused, and the obtained result image has very ideal subjective visual effect and objective evaluation result.
In addition to the objects, features and advantages described above, the present invention has other objects, features and advantages, which will be described in further detail below with reference to the drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention.
FIG. 1 is a flow chart of the FT domain medical image fusion method of the present invention;
FIG. 2 is a medical source image to be fused used in a simulation experiment of the present invention; wherein (a) is an MRI image; (b) is a PET image.
FIG. 3 is a simulation diagram of the effect of the simulation experiment of the present invention; wherein, (a) is a simulation result of a Quadtree method; (b) the simulation result of the DTCTCSR method is obtained; (c) the simulation result of the MSVD method is obtained; (d) is the simulation result of the CNN method; (e) is the simulation result of the CBF method; (f) the simulation result of the mPCNN method is obtained; (g) is the simulation result of the ASR method; (h) is the simulation result of the method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, a medical image fusion method of FT domain includes the following steps:
Step 1: performing the FT on all medical source images to be fused; after L-level scale decomposition, each source image yields 1 low-frequency sub-band image and 8 × L high-frequency sub-band images;
Step 2: calculating the ISML (Improved Sum of Modified Laplacian) corresponding to each pixel point in the low-frequency sub-band images; obtaining the initial fusion decision maps; smoothing the initial fusion decision maps at three different scales with the GF model to obtain the smoothed fusion decision maps; further processing the smoothed fusion decision maps to obtain the final fusion decision map; completing the fusion of the low-frequency sub-band images;
Step 3: processing the high-frequency sub-band image coefficients; calculating the SF value of each pixel point in the high-frequency sub-band images; completing the fusion of the high-frequency sub-band images;
Step 4: applying the inverse FT (inverse Fourier transform) to the fused high-frequency and low-frequency sub-band images to obtain the final fused image F.
Compared with a single type of medical source image, the method provided by the invention has the advantages that the fusion image obtained by the method can successfully extract the detail information and the edge information of the source image and be fused into one image on the basis of keeping the main body information of the medical source image to be fused, and the obtained result image has very ideal subjective visual effect and objective evaluation result.
In this embodiment, step 1 includes: inputting all medical source images to be fused and applying the FT to each, where the number of scale-decomposition levels is L and (l, k) denotes the direction-decomposition index at scale l, with 1 ≤ l ≤ L and 1 ≤ k ≤ 8; after the FT, each medical source image to be fused yields 1 low-frequency sub-band image and 8 high-frequency sub-band images.
In this embodiment, the calculating the ISML corresponding to each pixel point in the low-frequency subband image in step 2 includes:
calculating the ISML corresponding to each pixel point in the low-frequency subband image, wherein the mathematical expression is as follows:
accordingly, the ISML value corresponding to each pixel point can be obtained by calculating the following equation (2):
where N and T represent the window radius and threshold, respectively. In general, N is a positive odd number, and T is 0; formula (2) can be rewritten as:
determining the scale in the ISML calculation process as 3 levels, and setting parameters N corresponding to the 3 levels to be 5,9 and 17 respectively; the ISML value when each pixel point corresponds to different N values can be further rewritten as:
ISML_n^N(i,j) = ISML(I_n(i,j), N),  n = 1, 2, ..., Num,  N = 5, 9, 17    (4)
where (i, j) denotes the position of a pixel point in the image and Num is the number of medical source images to be fused.
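Since formulas (1)-(3) are not reproduced in this text, the ISML can only be sketched. The sketch below assumes the classical sum-modified-Laplacian: a modified Laplacian per pixel, thresholded by T, then summed over an N × N window (N odd, T = 0 as stated); the patent's "improved" variant may differ.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def isml(img, N, T=0):
    """Assumed sum-modified-Laplacian focus measure over an N x N window.
    The patent's exact ISML formulas (1)-(3) are not given in the text."""
    I = np.pad(img.astype(float), 1, mode="edge")
    # Modified Laplacian: |2I - left - right| + |2I - up - down|
    ml = (np.abs(2 * I[1:-1, 1:-1] - I[:-2, 1:-1] - I[2:, 1:-1])
          + np.abs(2 * I[1:-1, 1:-1] - I[1:-1, :-2] - I[1:-1, 2:]))
    ml[ml < T] = 0.0
    r = N // 2
    p = np.pad(ml, r, mode="edge")
    # Sum the thresholded Laplacian over each N x N neighbourhood
    return sliding_window_view(p, (N, N)).sum(axis=(2, 3))
```

Running this with N = 5, 9, 17 gives the three scales of formula (4); flat regions score 0, edges and textures score high.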
In this embodiment, the obtaining the initial fusion decision map in step 2 includes:
assuming that two medical source images to be fused are provided, the initial fusion decision mapping chart corresponding to the low-frequency subband image is obtained by the following mathematical expression:
the invention selects the numerical values of the parameter N as 5,9 and 17 respectively, and the value range of the scale parameter s is the interval [1,3 ]]An integer within; according to the formula (5), if the ISML value of a pixel point in the first image is greater than or equal to the ISML value of a pixel point at the same position in the second image, the pixel point is initially fused with the decision mapsThe value of the corresponding element in (1) is 1, otherwise, the value is 0. And primarily describing the target distribution condition in the medical source image to be fused based on the obtained three initial fusion decision mapping images.
In this embodiment, the step 2 of respectively smoothing the initial fusion decision maps at three different scales by using a GF model, and obtaining the smoothed fusion decision map includes:
smoothing the initial fusion decision maps at three different scales by adopting the GF model, thereby obtaining the smoothed fusion decision maps:
GFmap_s(i,j) = GF(I_1(i,j), map_s, r_s, ε_s),  s ∈ [1, 3]    (6)
where GF is the guided-filter function, I_1 is the low-frequency sub-band image corresponding to a medical source image to be fused, called the guide image, and r_s and ε_s denote the neighborhood size and the regularization parameter, respectively; the invention sets r_s = {5, 11, 19} and ε_s = {0.01, 0.001, 0.0001}.
Within a filter window ω_k of radius r_s in the guided-filter function GF, the guide image I_1 and the smoothed fusion decision map GFmap_s(i,j) satisfy the following local linear relationship:
Therefore, the guided-filtering map can be obtained simply by calculating the values of a_k and b_k. Further, a_k and b_k are calculated as:
where μ_k and σ_k^2 denote the mean and variance of the guide image I_1 within the filter window ω_k, |ω| is the number of pixel points in ω_k, p̄_k is the mean of the filter input image within ω_k, and the regularization parameter ε_s prevents a_k from taking too large a value and falling into a local optimum.
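The GF step can be sketched as a standard guided filter built on the local linear model above; `box_mean` is a helper introduced here, and treating r_s as a window radius (rather than a width) is an assumption.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1) x (2r+1) window via integral images (edge-padded)."""
    p = np.pad(img, r, mode="edge")
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    k = 2 * r + 1
    return (c[k:k + h, k:k + w] - c[:h, k:k + w]
            - c[k:k + h, :w] + c[:h, :w]) / k ** 2

def guided_filter(I, p, r, eps):
    """Guided filter GF(I, p, r, eps): within each window omega_k the output
    is a_k * I + b_k, with eps regularizing a_k against large values."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)       # a_k
    b = mean_p - a * mean_I          # b_k
    # Average the per-window coefficients, then apply the linear model
    return box_mean(a, r) * I + box_mean(b, r)
```

A call such as `guided_filter(I1, map_s, r_s, eps_s)` then smooths one initial decision map under the guidance of the low-frequency sub-band image, as in formula (6).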
In this embodiment, the further processing on the smoothed fusion decision map in step 2 to obtain a final fusion decision map includes:
The smoothed fusion decision map GFmap_s(i,j) is further processed to obtain the final fusion decision map:
Formula (11) simplifies the smoothed fusion decision map GFmap_s(i,j), marking the values that are greater than or equal to 0.75 or less than or equal to 0.25. If the value of GFmap_s(i,j) is greater than or equal to 0.75, the corresponding element in the final fusion decision map remains unchanged and is still denoted GFmap_s(i,j); if GFmap_s(i,j) is less than or equal to 0.25, the corresponding element in the final fusion decision map is set to 1 - GFmap_s(i,j); if neither condition holds, the corresponding element in the final fusion decision map is set to 0.
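The thresholding described for formula (11) can be sketched directly (the formula image itself is not reproduced in this text, so this follows the prose description only):

```python
import numpy as np

def final_decision_map(gfmap_s):
    """Formula (11) as described: values >= 0.75 are kept, values <= 0.25
    are replaced by 1 - value, and everything in between is set to 0."""
    out = np.zeros_like(gfmap_s)
    hi = gfmap_s >= 0.75
    lo = gfmap_s <= 0.25
    out[hi] = gfmap_s[hi]
    out[lo] = 1.0 - gfmap_s[lo]
    return out
```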
In this embodiment, the step 2 of completing the fusion of the low-frequency subband images includes:
three matrices Fmap derived from equation (11)s(i,j),s∈[1,3]Obtaining the final fusion decision map Fmaps(i,j):
Fmap(i,j)=max[Fmap1(i,j),Fmap2(i,j),Fmap3(i,j)]
(12)
The fusion of the low-frequency sub-band images is finally completed through formula (13):
F_L(i,j) = Fmap(i,j) * A_L(i,j) + (1 - Fmap(i,j)) * B_L(i,j)    (13)
where A_L(i,j), B_L(i,j) and F_L(i,j) denote the low-frequency sub-band image of source image A, the low-frequency sub-band image of source image B, and the fused low-frequency sub-band image, respectively.
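Formulas (12) and (13) amount to a pixelwise maximum over the three scale maps followed by a weighted blend of the two low-frequency sub-bands. A minimal sketch with hypothetical values:

```python
import numpy as np

# Hypothetical final decision maps at the three scales
fmap1 = np.array([[1.0, 0.0]])
fmap2 = np.array([[0.8, 0.0]])
fmap3 = np.array([[0.9, 0.2]])

# Hypothetical low-frequency sub-bands of source images A and B
a_l = np.array([[10.0, 10.0]])
b_l = np.array([[0.0, 0.0]])

fmap = np.maximum.reduce([fmap1, fmap2, fmap3])   # formula (12)
f_l = fmap * a_l + (1.0 - fmap) * b_l             # formula (13)
```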
In this embodiment, the processing the high-frequency subband image coefficients in step 3 includes:
Unlike the low-frequency sub-band images, the coefficients in the high-frequency sub-band images obtained after the FT have both positive and negative values, and coefficients with larger absolute values often carry important detail information of the medical source images. Therefore, in the present invention, the coefficients in the high-frequency sub-band images are subjected to absolute-value processing, i.e. abs(X_H(i,j)^(l,k)), where X_H denotes a high-frequency sub-band image of the medical source image X to be fused, X is A or B, (l, k) is the direction-decomposition index at scale l, and l denotes the l-th scale decomposition;
the calculating the SF value of each pixel point in the high-frequency subband image in the step 3 includes:
calculating SF values of all pixel points in each high-frequency sub-band image:
wherein M and N represent the width and height of the medical source image respectively;
the step 3 of completing the fusion of the high-frequency sub-band images comprises the following steps:
the high-frequency subband image finally completes the fusion process through the formula (15):
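The SF of formula (14) matches the standard spatial-frequency definition; formula (15) itself is not reproduced in this text, so the fusion rule below is an assumed choose-max-SF selection between corresponding sub-bands, not necessarily the patent's exact rule.

```python
import numpy as np

def spatial_frequency(img):
    """Standard spatial frequency: root of row-frequency^2 plus
    column-frequency^2 (assumed form of formula (14))."""
    I = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(I, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(I, axis=0) ** 2))   # column frequency
    return float(np.hypot(rf, cf))

def fuse_high(a_h, b_h):
    """Assumed rule standing in for formula (15): after absolute-value
    processing, keep the whole sub-band with the larger SF."""
    if spatial_frequency(np.abs(a_h)) >= spatial_frequency(np.abs(b_h)):
        return a_h
    return b_h
```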
in this embodiment, the step 4 includes:
applying the inverse FT to the low-frequency sub-band fused image F_L(i,j) and the high-frequency sub-band fused images F_H(i,j)^(l,k) and integrating them to obtain the final fused image F, the corresponding mathematical expression being:
the simulation experiment platform of the method is a personal PC (personal computer) which is configured into an Intel (R) core (TM) i5-4250U CPU1.90GHz and 4GB memory, and simulation software is Matlab 2014 b. In order to better understand the technical scheme of the invention, the embodiment selects two medical source images (MRI image + PET image) for fusion. Referring to fig. 1, two medical source images are respectively marked as a and B, and the final fused image is marked as F; the technical scheme of the invention is followed.
Simulation comparison experiment:
To verify the effectiveness of the method, it is compared with several existing representative image fusion methods; a set of simulation experiments confirms that the method of the invention is both reasonable and effective:
Following the technical scheme of the invention, a set of medical source images comprising an MRI image (see (a) in fig. 2) and a PET image (see (b) in fig. 2) is fused, and the fusion result is compared with several representative methods. First, the FT is applied to the two medical source images to be fused; then the GFRW model and the SF model proposed by the invention are used to fuse the low-frequency and high-frequency sub-band images, respectively; finally, the inverse FT is applied to the fused high-frequency and low-frequency sub-band images to obtain the final fused image. For comparison, several representative methods are selected: the Quadtree, DTCTCSR, MSVD, CNN, CBF, mPCNN and ASR methods.
FIG. 3 shows the simulation results of the eight methods, which indicate that the fusion method of the invention has good fusion performance and can effectively extract and fuse the main-body and detail information of the medical source images to be fused. In addition, Mutual Information (MI), Structural Similarity (SSIM), Edge Information Retention (Q^(AB/F)) and the Sum of the Correlations of Differences (SCD) are selected as objective quality-assessment indices for the eight methods. Table 1 gives the objective evaluation results of the final fused images produced by the eight image fusion methods in the simulation experiment. It should be noted that, for each of the four objective indices, the results are ranked from best to worst and each rank is recorded as a superscript of the corresponding result, a smaller value indicating a better result. To evaluate the eight methods objectively and fairly, the ranks of all indices are accumulated for each method, a smaller total indicating better performance. The method of the invention has an accumulated rank of 14 and outperforms the other seven comparison methods.
TABLE 1 Objective evaluation results of eight image fusion methods
The invention adopts the FT to perform multi-scale, multi-direction decomposition of the medical source images to be fused; compared with classical transforms such as the wavelet, ridgelet, non-subsampled contourlet and non-subsampled shearlet transforms, it has better information-capturing capability and lower computational complexity, and its performance is ideal. In addition, the GFRW model and the SF model are jointly adopted to complete the low-frequency and high-frequency sub-band image fusion, respectively, and the final fused image is reconstructed by the inverse FT. Simulation experiments show that the fusion method has good fusion performance and can effectively extract and fuse the main-body and detail information of the medical source images to be fused.
Aiming at the imaging principles of multi-modal medical images, the method of the invention comprehensively uses the GFRW model and the SF model to effectively fuse CT, MRI, PET and SPECT images, making full use of the advantages of different image sensors and effectively fusing medical image information from different sources. It contributes to a reasonable solution of the multi-modal medical image fusion problem, has high academic value and very broad practical prospects.
The above description covers only the preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (9)
1. A medical image fusion method of FT domain is characterized by comprising the following steps:
Step 1: performing the FT on all medical source images to be fused; after L-level scale decomposition, each source image yields 1 low-frequency sub-band image and 8 × L high-frequency sub-band images;
Step 2: calculating the ISML corresponding to each pixel point in the low-frequency sub-band images; obtaining the initial fusion decision maps; smoothing the initial fusion decision maps at three different scales with the GF model to obtain the smoothed fusion decision maps; further processing the smoothed fusion decision maps to obtain the final fusion decision map; completing the fusion of the low-frequency sub-band images;
Step 3: processing the high-frequency sub-band image coefficients; calculating the SF value of each pixel point in the high-frequency sub-band images; completing the fusion of the high-frequency sub-band images;
Step 4: applying the inverse FT (inverse Fourier transform) to the fused high-frequency and low-frequency sub-band images to obtain the final fused image F.
2. The FT domain medical image fusion method according to claim 1, wherein step 1 includes: inputting all medical source images to be fused and applying the FT to each, where the number of scale-decomposition levels is L and (l, k) denotes the direction-decomposition index at scale l, with 1 ≤ l ≤ L and 1 ≤ k ≤ 8; after the FT, each medical source image to be fused yields 1 low-frequency sub-band image and 8 high-frequency sub-band images.
3. The FT-domain medical image fusion method according to claim 1, wherein the calculating the ISML corresponding to each pixel point in the low frequency subband image in step 2 comprises:
calculating the ISML corresponding to each pixel point in the low-frequency subband image, the ISML value of each pixel point being given by equation (2), wherein N and T represent the window radius and the threshold, respectively, N is a positive odd number, and T = 0; equation (2) can be rewritten as equation (3);
the scale in the ISML calculation is fixed at 3 levels, and the window parameters N corresponding to the 3 levels are set to 5, 9 and 17, respectively; the ISML value of each pixel point under the different values of N can then be written as:
ISML_n^N(i, j) = ISML(I_n(i, j), N), n = 1, 2, ..., Num, N = 5, 9, 17 (4)
wherein (i, j) denotes the position of a pixel point in the image and Num is the number of medical source images to be fused.
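The equation images for (2) and (3) are not reproduced in this text, so the following is only a minimal sketch of a sum-modified-Laplacian measure of the kind claim 3 describes (window radius N, threshold T = 0); the function names `modified_laplacian` and `isml`, the edge padding, and the exact way T is applied are illustrative assumptions, not the patent's formulation:

```python
import numpy as np

def modified_laplacian(img):
    """Per-pixel modified Laplacian: |2I - left - right| + |2I - up - down|."""
    p = np.pad(np.asarray(img, dtype=float), 1, mode='edge')
    return (np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
            + np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]))

def isml(img, N, T=0.0):
    """Sum the modified Laplacian over an N x N window (N odd); contributions
    below the threshold T are discarded (with T = 0 everything is kept)."""
    ml = modified_laplacian(img)
    ml[ml < T] = 0.0
    r = N // 2
    q = np.pad(ml, r, mode='edge')
    H, W = ml.shape
    return sum(q[dy:dy + H, dx:dx + W] for dy in range(N) for dx in range(N))
```

With N fixed to 5, 9, and 17 this produces the three per-pixel maps that equation (4) denotes ISML_n^N(i, j).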
4. The FT-domain medical image fusion method of claim 1, wherein obtaining an initial fusion decision map in step 2 comprises:
assuming there are two medical source images to be fused, the initial fusion decision map corresponding to the low-frequency subband images is obtained from equation (5);
the values of the parameter N are selected as 5, 9 and 17, respectively, and the scale parameter s is an integer in the interval [1, 3]; according to equation (5), if the ISML value of a pixel point in the first image is greater than or equal to the ISML value of the pixel point at the same position in the second image, the corresponding element of the initial fusion decision map map_s is set to 1; otherwise it is set to 0.
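The comparison rule of equation (5) can be sketched as follows; `simple_focus_measure` is a stand-in for the patent's ISML (an assumption, used only to keep the example self-contained), and the ">= means take source A" convention follows the claim text:

```python
import numpy as np

def simple_focus_measure(img, N):
    """Stand-in focus measure: local sum of the absolute Laplacian over N x N."""
    p = np.pad(np.asarray(img, dtype=float), 1, mode='edge')
    lap = np.abs(4 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]
                 - p[:-2, 1:-1] - p[2:, 1:-1])
    r = N // 2
    q = np.pad(lap, r, mode='edge')
    H, W = lap.shape
    return sum(q[dy:dy + H, dx:dx + W] for dy in range(N) for dx in range(N))

def initial_decision_maps(img_a, img_b, sizes=(5, 9, 17)):
    """Eq. (5): map_s(i, j) = 1 where source A's focus measure >= source B's."""
    return [(simple_focus_measure(img_a, N) >= simple_focus_measure(img_b, N))
            .astype(float) for N in sizes]
```

The three returned arrays correspond to the scales s = 1, 2, 3 that are smoothed in the next step.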
5. The FT domain medical image fusion method according to claim 1, wherein the step 2 of smoothing the initial fusion decision maps at three different scales by using a GF model, and obtaining the smoothed fusion decision map comprises:
smoothing the initial fusion decision maps at the three different scales with the GF model, thereby obtaining the smoothed fusion decision maps:
GFmap_s(i, j) = GF(I_1(i, j), map_s, r_s, ε_s), s ∈ [1, 3] (6)
wherein GF is the guided filter function and I_1, the low-frequency subband image corresponding to a medical source image to be fused, serves as the guide image; r_s and ε_s denote the neighborhood size and the regularization parameter, respectively; the invention sets r_s = {5, 11, 19} and ε_s = {0.01, 0.001, 0.0001};
in the guided filter function GF, with a filter window ω_k of radius r_s, the guide image I_1 and the smoothed fusion decision map GFmap_s(i, j) satisfy the following local linear relationship:
GFmap_s(i) = a_k I_1(i) + b_k, ∀ i ∈ ω_k (7)
a_k and b_k are calculated as:
a_k = ((1/|ω|) Σ_{i ∈ ω_k} I_1(i) map_s(i) − μ_k m̄_k) / (σ_k² + ε_s) (8)
b_k = m̄_k − a_k μ_k (9)
wherein μ_k and σ_k² respectively represent the mean and variance of the guide image I_1 within the filter window ω_k, |ω| is the number of pixel points in ω_k, m̄_k is the mean of the filter input image map_s within ω_k, and the regularization parameter ε_s prevents a_k from becoming too large and causing the solution to fall into a local optimum.
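The GF smoothing step matches the standard box-filter guided filter (local linear regression of the decision map on the guide image, with regularization ε keeping a_k bounded); the sketch below assumes that standard formulation, and `box_mean` with edge padding is an implementation choice, not the patent's:

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded."""
    N = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    H, W = img.shape
    return sum(p[dy:dy + H, dx:dx + W]
               for dy in range(N) for dx in range(N)) / N ** 2

def guided_filter(I, p, r, eps):
    """Guided filter: output q = mean(a) * I + mean(b), where a, b come from
    a local linear regression of the input p on the guide image I."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mean_I ** 2
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)   # eps keeps a bounded in flat regions
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)
```

Running this with (r, eps) = (5, 0.01), (11, 0.001), (19, 0.0001) on each initial map map_s reproduces the role of equation (6).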
6. The FT-domain medical image fusion method according to claim 1, wherein the step 2 further processes the smoothed fusion decision map, and obtaining a final fusion decision map comprises:
for the smoothed fusion decision map GFmap_s(i, j), further processing yields the final fusion decision map:
equation (11) simplifies the smoothed fusion decision map GFmap_s(i, j), marking the values greater than or equal to 0.75 or less than or equal to 0.25: if GFmap_s(i, j) is greater than or equal to 0.75, the element value in the corresponding final fusion decision map remains unchanged, still GFmap_s(i, j); if GFmap_s(i, j) is less than or equal to 0.25, the element value in the corresponding final fusion decision map is set to 1 − GFmap_s(i, j); if neither condition is met, the element value in the corresponding final fusion decision map is set to 0.
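The three-way rule of claim 6 can be sketched directly; the function name is illustrative:

```python
import numpy as np

def refine_decision_map(gf_map):
    """Claim 6's rule: keep values >= 0.75, replace values <= 0.25 by
    1 - value, and set everything strictly in between to 0."""
    out = np.zeros_like(gf_map)
    hi, lo = gf_map >= 0.75, gf_map <= 0.25
    out[hi] = gf_map[hi]
    out[lo] = 1.0 - gf_map[lo]
    return out
```

Applied to each GFmap_s this yields the three matrices that claim 7 combines by an element-wise maximum.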
7. The FT domain medical image fusion method according to claim 1 or 6, wherein the step 2 of completing fusion of the low-frequency subband images comprises:
from the three matrices Fmap_s(i, j), s ∈ [1, 3], derived from equation (11), the final fusion decision map Fmap(i, j) is obtained:
Fmap(i, j) = max[Fmap_1(i, j), Fmap_2(i, j), Fmap_3(i, j)] (12)
The low-frequency subband image finally completes the fusion process through the formula (13):
F_L(i,j)=Fmap(i,j)*A_L(i,j)+(1-Fmap(i,j))*B_L(i,j) (13)
wherein A_L(i, j), B_L(i, j) and F_L(i, j) respectively represent the low-frequency subband image of source image A, the low-frequency subband image of source image B, and the fused low-frequency subband image.
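Equations (12) and (13) amount to an element-wise maximum followed by a convex blend, which can be sketched as:

```python
import numpy as np

def fuse_low_frequency(a_low, b_low, fmaps):
    """Eq. (12): element-wise maximum of the three refined decision maps;
    eq. (13): blend the two low-frequency subbands with the resulting map."""
    fmap = np.maximum.reduce(fmaps)
    return fmap * a_low + (1.0 - fmap) * b_low
```

Where Fmap(i, j) = 1 the fused pixel comes from source A's low-frequency subband, where 0 from source B's, and intermediate values produce a weighted mixture.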
8. The FT-domain medical image fusion method according to claim 1, wherein the processing of the high-frequency subband image coefficients in step 3 comprises:
taking the absolute value of the coefficients in the high-frequency subband images, i.e. abs(X_H(i, j)^(l,k)), wherein X_H represents a high-frequency subband image of the medical source image X to be fused, X is A or B, (l, k) is the directional decomposition index at scale l, and l denotes the l-th scale decomposition;
calculating the SF value of each pixel point in the high-frequency subband image in step 3 comprises:
calculating SF values of all pixel points in each high-frequency sub-band image:
wherein M and N represent the width and height of the medical source image respectively;
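The equation (14) image is not reproduced here; the classical spatial-frequency definition, the root of the summed squared row and column first differences over the M × N image, is consistent with the symbols named in claim 8 and is sketched below as an assumption:

```python
import numpy as np

def spatial_frequency(img):
    """Classical spatial frequency: sqrt(RF^2 + CF^2), with RF and CF the RMS
    of horizontal and vertical first differences over the M x N image."""
    img = np.asarray(img, dtype=float)
    M, N = img.shape
    rf2 = np.sum((img[:, 1:] - img[:, :-1]) ** 2) / (M * N)
    cf2 = np.sum((img[1:, :] - img[:-1, :]) ** 2) / (M * N)
    return float(np.sqrt(rf2 + cf2))
```

A flat region has zero spatial frequency, while textured or edge-rich regions score higher, which is why SF serves as an activity measure for the high-frequency subbands.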
the step 3 of completing the fusion of the high-frequency sub-band images comprises the following steps:
the high-frequency subband image finally completes the fusion process through the formula (15):
9. the FT domain medical image fusion method of claim 1, wherein the step 4 comprises:
applying the inverse FT transformation to the low-frequency subband fused image F_L(i, j) and the high-frequency subband fused images F_H(i, j)^(l,k), and integrating them to obtain the final fused image F according to the corresponding mathematical expression.
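The equation (15) image is likewise not reproduced; a common SF-based fusion rule keeps the coefficients from whichever subband's absolute-valued coefficients have the larger spatial frequency. The whole-subband selection below is an assumption (the patent may instead compare per-pixel SF values), and the function names are illustrative:

```python
import numpy as np

def sf(img):
    """Spatial frequency (RMS of row and column first differences)."""
    img = np.asarray(img, dtype=float)
    M, N = img.shape
    rf2 = np.sum((img[:, 1:] - img[:, :-1]) ** 2) / (M * N)
    cf2 = np.sum((img[1:, :] - img[:-1, :]) ** 2) / (M * N)
    return float(np.sqrt(rf2 + cf2))

def fuse_high_subband(a_h, b_h):
    """Hypothetical eq.-(15) rule: keep the subband whose absolute-valued
    coefficients have the larger spatial frequency."""
    return a_h if sf(np.abs(a_h)) >= sf(np.abs(b_h)) else b_h
```

Each fused high-frequency subband F_H(i, j)^(l,k) would then be passed, together with F_L(i, j), to the inverse FT transformation to reconstruct F.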
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011131840.3A CN112215922A (en) | 2020-10-21 | 2020-10-21 | FT domain medical image fusion method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112215922A true CN112215922A (en) | 2021-01-12 |
Family
ID=74056300
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011131840.3A Pending CN112215922A (en) | 2020-10-21 | 2020-10-21 | FT domain medical image fusion method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112215922A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114445311A (en) * | 2022-01-19 | 2022-05-06 | 海南大学 | Multi-source medical image fusion method and system based on domain transformation edge-preserving filter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||