CN102426702A - Computed tomography (CT) image and magnetic resonance (MR) image fusion method - Google Patents

Computed tomography (CT) image and magnetic resonance (MR) image fusion method Download PDF

Info

Publication number
CN102426702A
CN102426702A CN2011103266535A CN201110326653A CN102426702A CN 102426702 A CN102426702 A CN 102426702A CN 2011103266535 A CN2011103266535 A CN 2011103266535A CN 201110326653 A CN201110326653 A CN 201110326653A CN 102426702 A CN102426702 A CN 102426702A
Authority
CN
China
Prior art keywords
image
contrast
value
gray
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011103266535A
Other languages
Chinese (zh)
Inventor
刘尚争
陈居现
王国珲
王泽生
刘忠超
苗金全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanyang Institute of Technology
Original Assignee
Nanyang Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanyang Institute of Technology filed Critical Nanyang Institute of Technology
Priority to CN2011103266535A priority Critical patent/CN102426702A/en
Publication of CN102426702A publication Critical patent/CN102426702A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a computed tomography (CT) image and magnetic resonance (MR) image fusion method capable of improving program execution efficiency. The invention adopts the technical scheme that: direct statistics is conducted to the gray-level histograms of the images; and the contrast and the second order contrast of the source images are calculated according to the gray-level histograms. The adopted algorithm is as follows: the second order contract of corresponding pixels between the source images is compared, a second order contrast value with a bigger absolute value is kept, then the contrast of a fused image is calculated according to a contrast mode and the second order contrast value, and finally the gray value of the fused image is calculated according to the contrast value and the gray mode of the fused image. The CT image and MR image fusion method has the beneficial effects that: the contrast and the second order contrast of the source images are calculated according to the gray-level histograms, so that the source images do not need to be converted, the computational load is significantly reduced, the program execution efficiency is improved, and the subjective effect of the fused image is significantly improved.

Description

Fusion method of CT image and MR image
Technical Field
The invention relates to the fusion of medical contrast images, in particular to a method for fusing a CT image and an MR image.
Background
The medical imaging technology is a comprehensive and practical subject field integrating various subject achievements and advanced technologies. Medical images in various modes provide abundant, intuitive, qualitative and quantitative human physiological information for doctors and researchers from the visual angle, and become an important technical means for diagnosing various diseases. Since different modes of devices have different sensitivities and resolutions for large to small molecular atoms in the human body, there are their respective ranges and limitations of applicability. Computed Tomography (CT) images have good spatial resolution and geometric properties, low contrast to human soft tissue, and clear bone response. The Computer Tomography (CT) image has clear skeleton and high resolution, can play a good reference role for the positioning of the focus, but has poor display on the focus. CT has poor ability to distinguish soft tissue structures with similar electron density. If these two images are fused, the advantages of both images can be combined. The positioning information of the skeleton and the soft tissue details are effectively displayed through image fusion, and the positioning accuracy of the focus is greatly improved.
Magnetic Resonance (MR) images can clearly reflect the anatomy of soft tissues, organs, blood vessels, etc., but are not sensitive to calcifications, exhibit poor sensitivity to rigid bone tissue, and are subject to magnetic interference, which can cause geometric distortions. Medical images of various modalities reflect human body information from different angles, and comprehensive diagnosis information cannot be obtained from one image alone. It is necessary to integrate the image information of different modalities together to obtain more abundant information so as to know more data of the diseased tissue or organ, thereby making an accurate judgment or making a proper treatment plan. The accuracy of various images is subjectively affected by the spatial conception and guess of doctors to comprehensively determine the information needed by the doctors, and more importantly, some information is possibly ignored. The medical image fusion technology replaces the manual comprehensive mode of doctors with a computer image processing method, can improve the diagnosis efficiency and reliability, and accurately guides neurosurgery operation, radiotherapy and the like.
Medical image fusion generally refers to matching and reconstructing images of the same lesion area obtained by 2 or more than 2 different medical imaging devices, so as to obtain complementary information, increase the information amount, and make clinical diagnosis and treatment more accurate and perfect. In the beginning of the eighties of the twentieth century, medical image fusion gradually draws attention of the clinical medical field, and some researches at that time generally adopt a relatively intuitive and simple fusion method, such as pixel-by-pixel weighting averaging, filtering by using a logical operator and the like, so that the effect is not ideal.
In the nineties, medical image fusion technology becomes a leading topic in the field of medical images of the current generation, and exerts profound influence on the development of medical imaging technology in the future. At this stage, other fusion methods are proposed in succession, for example Laplacian pyramid method proposed by Burt, Gaussian pyramid decomposition method proposed by Akerman, low-pass ratio pyramid method proposed by toe, and multiresolution morphological filtering method and wavelet transform method. The rapid development of the medical image fusion technology is promoted. Such multi-scale image fusion techniques suffer from two problems: first, the algorithm is complex and time consuming. The multi-scale transformation image fusion technology needs to perform multi-scale transformation on each source image respectively, perform information fusion according to a certain fusion scheme to obtain multi-scale representation of a fusion image, and then perform corresponding inverse transformation to obtain the fusion image. The multi-scale transformation image fusion algorithm needs to perform 2 times of multi-scale transformation and 1 time of inverse transformation of the image, and the 3 times of image transformation occupies most time. Second, only absolute information is passed to the fused image, ignoring relative information. The goal of multi-scale transform techniques is to obtain a sparse representation of an image or signal, with the transform coefficients representing how much of a certain frequency component the image or signal contains, and not the proportion of that component in all frequencies. Transform coefficients are essentially absolute information. The current multi-scale transformation image fusion technology directly transmits transformation coefficients to fusion images according to various fusion schemes, ignores relative information and directly causes the contrast of the fusion images to be reduced.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a method for fusing a CT image and an MR image. The fusion method of the invention directly calculates the contrast and the second-order contrast of the source image according to the gray histogram without image transformation of the source image, thereby obviously reducing the operation amount, improving the program execution efficiency and obviously improving the subjective effect of the fused image.
In order to achieve the above object, the image fusion method of the CT image and the MR image according to the present invention comprises the following steps:
(1) respectively determining background gray values of the input CT images and the input MR images;
(2) according to the background gray value, the contrast value of each pixel point in the CT image and the MR image is respectively calculated by the following formula:
wherein,I(x,y)is the gray value of the (x, y) pixel in the image,I M is the background gray value of the image;
(3) determining a mode of the CT and MR image contrast distributions;
(4) according to the mode of the contrast distribution, the second-order contrast of the CT image and the MR image is respectively calculated by the following formula:
Figure 147399DEST_PATH_IMAGE002
whereinC(x,y) Is a pixel (x,y) Contrast of (2), CMIs the mode of the image contrast value;
(5) transferring second order information (second order contrast information) to the fused image; the second-order contrast value of the fused image can be obtained by the following formula:
Figure 134947DEST_PATH_IMAGE003
wherein,CC 1(x,y) Is the second order contrast of the CT image,CC 2(x,y) Is the second order contrast of the MR image;
(6) the contrast of the fused image can be calculated by the following formula, and first order information (contrast information) is transferred to the fused image:
Figure 421572DEST_PATH_IMAGE004
wherein,CC f(x,y) Is the second-order contrast of the fused image, CMFIs the mode of contrast of the fused image, CM1Is the mode of contrast of the CT image, CM2Is the mode of contrast of the MR image;
(7) absolute information (background gray value) is transferred to the fused image, and the gray value of the fused image is obtained by the following formula:
Figure 548315DEST_PATH_IMAGE005
wherein,I fbis the background value of the fused image, IM1Is the background gray value of the CT image, IM2Is the background gray value of the MR image.
Compared with the prior art, the invention has the following advantages: 1. the method is carried out in a space domain, image transformation and inverse transformation are not required, and the realization process is simple; 2. the gray background value of the image is obtained by utilizing the gray histogram of the image, so that the method accords with the visual perception of human; 3. the gray value corresponding to the peak point of the peak with the largest coverage surface in the gray histogram is used as the background value of the image, so that the influence of an extremely bright or dark area on the overall contrast of the image is avoided; 4. the invention directly transmits the contrast information and the background gray information to the fusion image, avoids the reduction of the contrast of the fusion image and simultaneously keeps the edge and detail information of the fusion image.
Drawings
The invention is further explained below with reference to the figures and examples.
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a CT image.
Fig. 3 is a nuclear magnetic resonance image.
Fig. 4 is an NSCT fusion image.
Fig. 5 is a RIF fusion image.
Fig. 6 is a VA-method fusion image.
FIG. 7 is a fused image of CT and MR according to the present invention.
Detailed Description
Referring to fig. 1, the method for fusing a CT image and an MR image according to the present invention comprises the following steps:
step 1, determining background gray value of image:
1.1) respectively counting the gray distribution of the input CT and MR images to obtain a gray histogram, and finding out the gray value corresponding to the maximum peak value point in the gray histogram;
1.2) judging whether the gray value is equal to 0, if not, taking the gray value of the peak point as the background gray value of the image; if the gray value is equal to 0, taking the background value of the other image as the background gray value;
step 2, according to the background gray value, respectively calculating the contrast value of each pixel point in the CT image and the MR image by using the following formula:
step 3, determining the mode of the contrast value of each image;
3.1) mapping C (x, y) to the interval [0,255] according to the following formula, and then rounding;
wherein maxc represents the maximum value of C (x, y), and minc represents the minimum value of C (x, y);
3.2) finding out the mode of hc (x, y);
3.3) calculating the mode of the contrast according to the following formula:
Figure 460142DEST_PATH_IMAGE007
and 4, respectively calculating the second-order contrast of each pixel point of the CT image and the MR image according to the calculated contrast mode and the following formula:
step 5, according to the calculated second-order contrast value of the CT image and the MR image, calculating the second-order contrast of the fusion image according to the following formula, and transmitting the second-order contrast information to the fusion image:
Figure 893715DEST_PATH_IMAGE003
step 6, calculating the contrast of the fused image according to the calculated second-order contrast value of the fused image by combining the contrast modes of the CT image and the MR image;
6.1) calculating information entropies E1 and E2 of CT and MR images respectively;
6.2) calculating the contrast mode of the fused image according to the calculated image entropy and the following formula:
  
Figure 855855DEST_PATH_IMAGE008
wherein,E 1andE 2the information entropy of the CT image and the MR image respectively,C MF is the contrast mode of the fused image;
6.3) according to the calculated contrast mode of the fused image, calculating the contrast value of each pixel point of the fused image according to the following formula:
  
7, transmitting background information to the fusion image, and calculating a gray value of each pixel point of the fusion image to obtain the fusion image;
7.1) combining the information entropy and the contrast mode of the CT image and the MR image, calculating the background gray value of the fusion image according to the following formula:
  
Figure 785950DEST_PATH_IMAGE009
wherein E isIs the entropy of the CT image, EIs the entropy of the MR image, IM1Is the background gray value of the CT image, IM2Is the background gray value of the MR image;
7.2) calculating the gray value of each pixel point of the fusion image according to the calculated background gray value of the fusion image and the gray background value of the CT image and the MR image according to the following formula to obtain the fusion image as shown in FIG. 7.
Figure 208841DEST_PATH_IMAGE005
Wherein,I fbis the background value of the fused image,I M1is the background value of the CT image,I M2is the background value of the MR image.
The effect of the invention can be further confirmed by the following experiments:
experimental conditions and contents
The experimental conditions are as follows: the input images used in the experiment are shown in fig. 2 and 3, fig. 2 being a CT image and fig. 3 being an MR image. In the experiment, various methods are realized by using Matlab language programming.
Second, experimental results
In order to illustrate the superiority of the CT and MR image fusion methods proposed by the present invention, the experiments were performed first with the method proposed by the present invention, and then with the nonsubsampled contourlet method (NSCT), the region-based image fusion method (RIF), and the variation method (VA). And calculating anisotropy performance index (LQM), mutual information and entropy of the fused image, and the experimental results are shown in Table 1.
TABLE 1 fused image Performance indicators
LQM Mutual information Time/second Entropy of the entropy
NSCT 65.7380 3.1053 76.2500 4.3090
RIF 65.0720 3.0471 139.0469 4.3315
VA 61.3231 3.3252 1.9731 3.9422
The method of the invention 67.5268 5.3513 0.6094 4.6809
The CT image mainly comprises human brain and skull information, the nuclear magnetic resonance image mainly comprises soft tissue structure information, the background of the two images is black, and the information has complementarity. The skull and soft tissue in fig. 4 and 5 are relatively dark compared with the background, the skull is clear without the CT image, and the soft tissue is high in brightness without the nuclear magnetic resonance image. Fig. 6 is the least visually effective, with the skull and soft tissue being relatively dark, especially the soft tissue being almost indistinguishable. The fused image of the method of the invention is like skull and soft tissue in figure 7 is brighter than other methods.
Table 1 shows the quantitative evaluation indexes of the fused images of the algorithm and the reference algorithm. As can be seen from Table 1, the logarithmic anisotropy index (LQM) of the fused image is the best in the method, the logarithmic anisotropy index of the fused image by the variational method is the worst, and the method is 10.12% higher than that by the non-downsampling contourlet method. The entropy of the fused image is in the first place, the second place is an image fusion algorithm based on the region, and the third place is a non-down sampling contourlet method. The Mutual Information (MI) of the fused images is the highest and is far higher than that of other 3 algorithms, and the mutual information of the method is 1.76 times that of the image fusion algorithm based on the region. Compared with the CPU running time of other 3 algorithms, the method greatly reduces the CPU running time, and is not in the same order of magnitude. From the above analysis, it can be seen that the method of the present invention is superior to other methods.

Claims (4)

1. A method for fusing a CT image and an MR image is characterized by comprising the following steps:
(1) respectively determining background gray values of the input CT images and the input MR images;
(2) according to the background gray value, the contrast value of each pixel point in the CT image and the MR image is respectively calculated by the following formula:
Figure 2011103266535100001DEST_PATH_IMAGE002
wherein,I(x,y)is the gray value of the (x, y) pixel in the image,I M is the background gray value of the image;
(3) determining a mode of the CT and MR image contrast distributions;
(4) according to the mode of the contrast distribution, the second-order contrast of the CT image and the MR image is respectively calculated by the following formula:
Figure 2011103266535100001DEST_PATH_IMAGE004
whereinC(x,y) Is a pixel (x,y) Contrast of (2), CMIs the mode of the image contrast value;
(5) transferring second order information to the fused image; the second-order contrast value of the fused image can be obtained by the following formula:
Figure 2011103266535100001DEST_PATH_IMAGE006
wherein,CC 1(x,y) Is the second order contrast of the CT image,CC 2(x,y) Is the second order contrast of the MR image;
(6) the contrast of the fused image can be calculated by the following formula, delivering first order information to the fused image:
Figure 2011103266535100001DEST_PATH_IMAGE008
wherein,CC f(x,y) Is the second-order contrast of the fused image, CMFIs the mode of contrast of the fused image, CM1Is the mode of contrast of the CT image, CM2Is the mode of contrast of the MR image;
(7) absolute information (background gray value) is transferred to the fused image, and the gray value of the fused image is obtained by the following formula:
Figure 2011103266535100001DEST_PATH_IMAGE010
wherein,I fbis the background value of the fused image, IM1Is the background gray value of the CT image, IM2Is the background gray value of the MR image.
2. A method for fusing a CT image with an MR image as claimed in claim 1, wherein the step of determining the background gray level value comprises the steps of: respectively counting the gray distribution of the input CT image and the input MR image to obtain a gray histogram, and finding out the gray value corresponding to the maximum peak value point in the gray histogram; judging whether the gray value is equal to 0 or not, and if not, taking the gray value of the peak point as the background gray value of the image; if the gray value is equal to 0, the background value of the other image is taken as the background gray value.
3. A method of fusing a CT image with an MR image as claimed in claim 1, wherein the step of determining the mode of the contrast distribution of the image comprises the steps of: c (x, y) is mapped to the interval [0,255] according to the following formula and then rounded;
Figure 2011103266535100001DEST_PATH_IMAGE012
wherein maxc represents the maximum value of C (x, y), and minc represents the minimum value of C (x, y); finding out the mode of hc (x, y); the mode of contrast is calculated according to the following formula:
Figure 2011103266535100001DEST_PATH_IMAGE014
4. a method for fusing a CT image with an MR image as claimed in claim 1, wherein the second order contrast of the image is calculated according to the following formula:
wherein C isMIs the mode of image contrast.
CN2011103266535A 2011-10-25 2011-10-25 Computed tomography (CT) image and magnetic resonance (MR) image fusion method Pending CN102426702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011103266535A CN102426702A (en) 2011-10-25 2011-10-25 Computed tomography (CT) image and magnetic resonance (MR) image fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011103266535A CN102426702A (en) 2011-10-25 2011-10-25 Computed tomography (CT) image and magnetic resonance (MR) image fusion method

Publications (1)

Publication Number Publication Date
CN102426702A true CN102426702A (en) 2012-04-25

Family

ID=45960681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011103266535A Pending CN102426702A (en) 2011-10-25 2011-10-25 Computed tomography (CT) image and magnetic resonance (MR) image fusion method

Country Status (1)

Country Link
CN (1) CN102426702A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103854270A (en) * 2012-11-28 2014-06-11 广州医学院第一附属医院 CT and MR inter-machine three dimensional image fusion registration method and system
CN106683061A (en) * 2017-01-05 2017-05-17 南京觅踪电子科技有限公司 Method for enhancing medical image based on corrected multi-scale retinex algorithm
CN107203696A (en) * 2017-06-19 2017-09-26 深圳源广安智能科技有限公司 A kind of intelligent medical system based on image co-registration
CN109146993A (en) * 2018-09-11 2019-01-04 广东工业大学 A kind of Method of Medical Image Fusion and system
CN110715820A (en) * 2018-07-11 2020-01-21 宁波其兰文化发展有限公司 Riverbed sampling method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7508968B2 (en) * 2004-09-22 2009-03-24 Siemens Medical Solutions Usa, Inc. Image compounding based on independent noise constraint
CN101422373A (en) * 2008-12-15 2009-05-06 沈阳东软医疗系统有限公司 Interfusion method of CT spacer and interest region capable of releasing CT image
CN101987019A (en) * 2009-08-03 2011-03-23 徐子海 Positron emission tomography (PET) and computed tomography (CT) cross-modality medical image fusion method based on wavelet transform

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7508968B2 (en) * 2004-09-22 2009-03-24 Siemens Medical Solutions Usa, Inc. Image compounding based on independent noise constraint
CN101422373A (en) * 2008-12-15 2009-05-06 沈阳东软医疗系统有限公司 Interfusion method of CT spacer and interest region capable of releasing CT image
CN101987019A (en) * 2009-08-03 2011-03-23 徐子海 Positron emission tomography (PET) and computed tomography (CT) cross-modality medical image fusion method based on wavelet transform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘尚争,韩峰,韩九强: "基于二阶对比度的图像融合算法", 《南阳理工学院学报》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103854270A (en) * 2012-11-28 2014-06-11 广州医学院第一附属医院 CT and MR inter-machine three dimensional image fusion registration method and system
CN106683061A (en) * 2017-01-05 2017-05-17 南京觅踪电子科技有限公司 Method for enhancing medical image based on corrected multi-scale retinex algorithm
CN107203696A (en) * 2017-06-19 2017-09-26 深圳源广安智能科技有限公司 A kind of intelligent medical system based on image co-registration
CN110715820A (en) * 2018-07-11 2020-01-21 宁波其兰文化发展有限公司 Riverbed sampling method
CN109146993A (en) * 2018-09-11 2019-01-04 广东工业大学 A kind of Method of Medical Image Fusion and system

Similar Documents

Publication Publication Date Title
Gandhamal et al. Local gray level S-curve transformation–a generalized contrast enhancement technique for medical images
US10413253B2 (en) Method and apparatus for processing medical image
Lladó et al. Automated detection of multiple sclerosis lesions in serial brain MRI
CN110910405B (en) Brain tumor segmentation method and system based on multi-scale cavity convolutional neural network
Giannini et al. A fully automatic algorithm for segmentation of the breasts in DCE-MR images
Liu et al. Automatic whole heart segmentation using a two-stage u-net framework and an adaptive threshold window
Ikhsan et al. An analysis of x-ray image enhancement methods for vertebral bone segmentation
CN106530236B (en) Medical image processing method and system
CN110288698B (en) Meniscus three-dimensional reconstruction system based on MRI
Koundal et al. Challenges and future directions in neutrosophic set-based medical image analysis
CN106709920B (en) Blood vessel extraction method and device
CN102426702A (en) Computed tomography (CT) image and magnetic resonance (MR) image fusion method
CN106504221B (en) Method of Medical Image Fusion based on quaternion wavelet transformation context mechanism
Yang et al. Research and development of medical image fusion
GB2480864A (en) Processing system for medical scan images
Ganvir et al. Filtering method for pre-processing mammogram images for breast cancer detection
Alam et al. Evaluation of medical image registration techniques based on nature and domain of the transformation
Yin et al. Automatic breast tissue segmentation in MRIs with morphology snake and deep denoiser training via extended Stein’s unbiased risk estimator
Janan et al. RICE: A method for quantitative mammographic image enhancement
Gu et al. Cross-modality image translation: CT image synthesis of MR brain images using multi generative network with perceptual supervision
Dai et al. The application of multi-modality medical image fusion based method to cerebral infarction
CN108460748B (en) Method and system for acquiring characteristic training parameters for breast tumor analysis and diagnosis system
Carminati et al. Reconstruction of the descending thoracic aorta by multiview compounding of 3-d transesophageal echocardiographic aortic data sets for improved examination and quantification of atheroma burden
Rusu Segmentation of bone structures in Magnetic Resonance Images (MRI) for human hand skeletal kinematics modelling
CN110570369B (en) Thyroid nodule ultrasonic image denoising method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120425