CN107977926A - PET/MRI separate-scanner brain image information fusion method based on an improved neural network - Google Patents

PET/MRI separate-scanner brain image information fusion method based on an improved neural network

Info

Publication number
CN107977926A
CN107977926A (application CN201711248590.XA)
Authority
CN
China
Prior art keywords
PET
image
MRI
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711248590.XA
Other languages
Chinese (zh)
Other versions
CN107977926B (en)
Inventor
王昌
任琼琼
程雅青
于毅
赵宗亚
秦鑫
张文超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinxiang Medical University
Original Assignee
Xinxiang Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinxiang Medical University filed Critical Xinxiang Medical University
Priority to CN201711248590.XA
Publication of CN107977926A
Application granted
Publication of CN107977926B
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/14: Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/90: Dynamic range modification of images or parts thereof
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Image registration using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10088: Magnetic resonance imaging [MRI]
    • G06T 2207/10104: Positron emission tomography [PET]
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30016: Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Nuclear Medicine (AREA)

Abstract

The invention discloses a PET/MRI separate-scanner brain image information fusion method based on an improved neural network. The concrete implementation steps are: 1) the PET image to be fused (a color image) undergoes the IHS transform and a color transform, yielding IHS and RGB channel information; 2) rigid PET/MRI registration based on brain contour pixels aligns the brain anatomical positions by translation, rotation and scaling; 3) the NSCT transform provides frequency features at different scales and in different directions, and two fusion rules are designed, a weighted-average rule based on the local neighborhood for the low-frequency information and a spatial-frequency-stimulated PCNN rule for the high-frequency information, realizing the fusion of the RGB three-channel PET image with the MRI grayscale image. The method makes full use of the functional information provided by PET and the anatomical and soft-tissue information provided by MRI, improves the efficiency and accuracy of a physician's diagnosis, and can substitute for part of the functionality of an integrated PET/MRI fusion device.

Description

PET/MRI separate-scanner brain image information fusion method based on an improved neural network
Technical field
The invention belongs to the technical field of medical image processing and its applications, and in particular relates to a PET/MRI separate-scanner brain image information fusion method based on an improved neural network. The method realizes feature-based rigid registration of brain PET and MRI images with alignment of the anatomical positions, and fuses the functional and anatomical information into a single image with good fusion quality, meeting the requirements of clinical diagnosis both visually and in objective evaluation indices.
Background technology
Imaging devices such as CT, MRI, SPECT and PET provide clinicians with diagnostic information of different modalities, and multi-modal information fusion is the development direction of current imaging equipment. PET/CT has been applied clinically, but integrated (same-machine) PET/MRI fusion devices still have many technical difficulties to solve. Separate-scanner PET/MRI image information fusion can realize part of the functionality of integrated PET/MRI: it makes full use of the brain functional information provided by PET and the anatomical information provided by MRI, and fuses the information of the different modalities into a single brain image. Such a system can effectively reduce examination costs, provide diagnostic-assistance services for brain imaging diagnostic centers, and has broad clinical application space.
Separate-scanner PET/MRI brain image fusion involves two main steps. 1) Rigid registration of the brain PET and MRI images, realizing the alignment of the anatomical features. 2) Multi-modal image fusion, merging the functional information of PET and the anatomical information of MRI into one image. The alignment of brain PET and MRI anatomical features is a multi-modal image registration problem; because the brain structure of the same person is consistent, a rigid registration method is used. With the Chamfer matching registration method, the brain edge serves as the registration feature for the MRI and PET images, and translation, rotation and scaling bring the two images into positional alignment. Multi-modal image fusion methods can be divided into spatial-domain methods and transform-domain methods. Spatial-domain fusion algorithms mainly include grayscale averaging (AVG), IHS transform techniques and principal component analysis (PCA); these methods process gray values directly in the pixel gray space or color space of the source images and have high fusion accuracy. Transform-domain methods include the discrete wavelet transform (DWT), the Laplacian pyramid method (LAP), contourlet fusion and support-vector fusion (SVT); these methods give good fusion quality and commonly employ multi-scale analysis, and transform-domain multi-scale fusion is currently the mainstream of image fusion. The contourlet transform is a multi-resolution, local and multi-directional sparse representation that compensates well for the shortcomings of the wavelet transform, but it requires down-sampling during the transform and therefore lacks translation invariance. The nonsubsampled contourlet transform (NSCT) has good directionality and translation invariance, which makes it a suitable tool for multi-scale decomposition. The fusion rule is the key to image fusion, and neural-network methods are increasingly applied to medical image fusion: fusion rules based on pulse-coupled neural networks (PCNN) have been applied to MRI/CT image fusion; contextual hidden Markov (CHMM) statistical models and modified pulse-coupled neural networks (M-PCNN) have been used to design fusion rules for multi-modal image fusion; and spatial-frequency-stimulated PCNN has been applied to the information fusion of multi-focus images.
Summary of the invention
In view of the above drawbacks and deficiencies of the prior art, the object of the present invention is to provide a PET/MRI separate-scanner brain image information fusion method based on an improved neural network.
In order to achieve the above task, the present invention adopts the following technical solution:
A PET/MRI separate-scanner brain image fusion method based on an improved neural network, characterized in that it is implemented as follows:
Step 1: perform the IHS transform and a color transform on the brain PET image to be fused (a color image), obtaining the corresponding intensity, hue and saturation components and the RGB three-channel components;
Step 2: perform rigid registration, based on brain contour pixels, between the intensity-component PET image and the MRI image, aligning the brain anatomical positions. The concrete steps are:
1) compute the gradient image with the Sobel algorithm, then binarize the gradient image and thin the edges to obtain the brain contour pixels;
2) make the numbers of contour pixels equal by equidistant selection and interpolation, obtain the spatial transformation matrix of the two images by least squares, and use the spatial transformation matrix to realize the rigid registration of the two images and the alignment of the brain structures;
Step 3: fuse the image information of the registered PET RGB channels and the MRI grayscale image. The concrete steps are as follows:
1) apply the NSCT transform to the registered PET RGB channels and the MRI grayscale image, obtaining low- and high-frequency sub-band information at different scales and in different directions;
2) design the fusion rules: the low-frequency information is fused with a weighted-average method based on the local neighborhood, and the high-frequency information with a pulse-coupled neural network method stimulated by spatial frequency;
3) apply the inverse NSCT to the fused information to obtain the fused three-channel information;
4) synthesize the color image from the RGB channels, obtaining the fused MRI/PET brain image.
According to the present invention, in step 1 the brain PET image first undergoes the IHS transform and the RGB transform to obtain the Intensity component of the image, and the rigid registration is then carried out with the Intensity component.
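The patent does not reproduce the IHS transform matrix itself. As an illustrative sketch only, one common linear IHS variant can be written in the Matlab environment named in the embodiments; the file name is hypothetical:

```matlab
% Illustrative sketch of step 1, assuming one common linear IHS variant
% (the patent does not specify which IHS transform it uses).
pet = im2double(imread('pet_color.png'));   % hypothetical file name
R = pet(:,:,1); G = pet(:,:,2); B = pet(:,:,3);

I  = (R + G + B) / 3;                       % Intensity component
v1 = (-sqrt(2)/6)*(R + G) + (sqrt(2)/3)*B;  % first chromatic axis
v2 = (R - G) / sqrt(2);                     % second chromatic axis
H  = atan2(v2, v1);                         % Hue
S  = sqrt(v1.^2 + v2.^2);                   % Saturation
% I is the Intensity component that is registered against the MRI image.
```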
As a preferred embodiment, step 1) of step 2 is as follows: the gradient image is computed with templates in 8 directions, and the gradient image is binarized and thinned to obtain the brain contour pixels, the thinning being realized with the bwmorph function.
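A minimal Matlab sketch of this contour extraction follows; it is illustrative only, with a hypothetical file name, and a standard Sobel gradient magnitude standing in for the eight directional templates:

```matlab
% Sketch of step 2.1: gradient, automatic threshold, thinning.
img  = im2double(imread('mri_slice.png'));        % hypothetical file name
grad = imgradient(img, 'sobel');                  % gradient magnitude (Sobel)
g    = mat2gray(grad);
bw   = im2bw(g, graythresh(g));                   % automatic (Otsu) threshold
thin = bwmorph(bw, 'thin', Inf);                  % edge thinning, as in the patent
[ys, xs] = find(thin);                            % brain contour pixel coordinates
```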
As a preferred embodiment, the specific method of step 2) in step 2 is as follows:
the numbers of PET and MRI brain contour pixels are made equal by equidistant selection and interpolation; a spatial transformation composed of horizontal translation, vertical translation and rotation is assumed, its transformation matrix is obtained by least squares, and the spatial transformation of the image realizes the alignment of the brain anatomical positions.
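The patent states that the transform parameters are solved by least squares but gives no formula for the solver. A sketch under that reading, using a similarity-transform parameterization (which also covers the uniform scaling mentioned elsewhere in the text), might look like this; P and Q are assumed to be matched, equal-length contour point lists:

```matlab
% Sketch of step 2.2: least-squares similarity transform between matched
% contour point sets P (moving, PET) and Q (fixed, MRI), both N-by-2.
% Model: x' = a*x - b*y + tx,  y' = b*x + a*y + ty,
% with a = s*cos(theta), b = s*sin(theta) (rotation plus uniform scale).
function T = estimateRigidLS(P, Q)
    N = size(P, 1);
    A = [P(:,1), -P(:,2), ones(N,1), zeros(N,1);
         P(:,2),  P(:,1), zeros(N,1), ones(N,1)];
    b = [Q(:,1); Q(:,2)];
    p = A \ b;                    % least-squares estimate [a; b; tx; ty]
    T = [p(1), -p(2), p(3);
         p(2),  p(1), p(4);
         0,     0,    1];         % homogeneous 3x3 transform matrix
end
```

The moving image can then be resampled with, for example, tform = affine2d(T'); registered = imwarp(petIntensity, tform); (affine2d expects the transposed, row-vector convention).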
As a preferred embodiment, the specific method of step 1) in step 3 is as follows:
the NSCT transform is applied to the PET RGB three-channel information and the MRI grayscale image, decomposing them into sub-band coefficients C_{j,k}^A(i) and C_{j,k}^B(i) (j = 1, 2, …, J; k = 1, 2, …, M_j; i = 1, 2, …, N_j) at different scales and in different directions, where j denotes the decomposition scale, k the directional sub-band of each scale, M_j the maximum number of directional sub-bands at scale j, N_j the total number of coefficients of the corresponding sub-band, and A, B denote the MRI and PET source images respectively.
As a preferred embodiment, the specific method of step 2) in step 3 is as follows:
the low-frequency information is fused with the weighted-average method based on the local neighborhood, and the high-frequency information is fused with a pulse-coupled neural network stimulated by spatial frequency.
For the low-frequency information, the local neighborhood energy is defined as

$$LE_X(x,y)=\sum_{m,n} w(m,n)\,\big[C_X(x+m,\,y+n)\big]^2,\qquad X\in\{A,B\},$$

where w(m, n) is a 3×3 weighting window. The weight coefficients at (x, y) are computed from the local neighborhood energies:

$$w_A=\frac{LE_A(x,y)}{LE_A(x,y)+LE_B(x,y)},\qquad w_B=\frac{LE_B(x,y)}{LE_A(x,y)+LE_B(x,y)}.$$

The fusion coefficient of the low-frequency sub-band is then

$$C_F(x,y)=w_A\times C_A(x,y)+w_B\times C_B(x,y),$$

where C_F(x, y) is the low-frequency sub-band of the fused image and C_A(x, y), C_B(x, y) are the low-frequency sub-bands of the MRI and PET images.
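A sketch of this low-frequency rule follows; the values of the 3×3 window are not legible in the text, so a normalized Gaussian-like window is assumed:

```matlab
% Sketch of the low-frequency rule: local-energy-weighted average.
% CA, CB are the NSCT low-pass sub-bands of the MRI image and one PET channel.
w   = [1 2 1; 2 4 2; 1 2 1] / 16;      % assumed 3x3 weighting window
LEA = conv2(CA.^2, w, 'same');         % weighted local energy of A (MRI)
LEB = conv2(CB.^2, w, 'same');         % weighted local energy of B (PET)
wA  = LEA ./ (LEA + LEB + eps);        % eps guards against 0/0
wB  = 1 - wA;
CF  = wA .* CA + wB .* CB;             % fused low-frequency sub-band
```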
For the high-frequency sub-band fusion rule, the pulse-coupled neural network method stimulated by spatial frequency is used. Initially all neurons are extinguished: U_ij(0) = 0, Y_ij(0) = 0, θ_ij(0) = 0, L_ij(0) = 0. The parameters are set to α_L = 1.0, V_L = 0.2, V_θ = 30 and iteration number N = 150, and the spatial frequency of the high-frequency sub-band is used as the external stimulus input of the PCNN; V_L is the amplification coefficient of the linking input, V_θ the amplification factor of the dynamic threshold, α_L the attenuation coefficient of the linking input, and α_θ the attenuation coefficient of the dynamic threshold.
The spatial frequency (SF) is defined as

$$SF=\sqrt{RF^2+CF^2},$$

where RF and CF denote the row frequency and the column frequency respectively. The iteration then proceeds according to

$$\begin{cases}F_{ij}(n)=SF_{ij}\\ L_{ij}(n)=e^{-\alpha_L}\,L_{ij}(n-1)+V_L\sum_{k,l}w_{ijkl}\,Y_{kl}(n-1)\\ U_{ij}(n)=F_{ij}(n)\,\big(1+\beta L_{ij}(n)\big)\\ \theta_{ij}(n)=e^{-\alpha_\theta}\,\theta_{ij}(n-1)+V_\theta\,Y_{ij}(n-1)\\ Y_{ij}(n)=\begin{cases}1,&U_{ij}(n)>\theta_{ij}(n)\\ 0,&U_{ij}(n)\le\theta_{ij}(n)\end{cases}\end{cases}$$

$$T_{ij}(n)=T_{ij}(n-1)+Y_{ij}(n)$$

and the total number of pulse firings of each neuron is counted, where L_ij is the linking input of the neuron, U_ij the internal activity, Y_ij the neuron output, w_ijkl the synaptic weight matrix, β the linking strength, and T_ij the total number of pulse firings.
The high-frequency sub-band fusion rule selects, at each position, the coefficient whose neuron fired more often:

$$C_F^{j,k}(i)=\begin{cases}C_A^{j,k}(i),&T_{ij}^A(N)\ge T_{ij}^B(N)\\ C_B^{j,k}(i),&\text{otherwise,}\end{cases}$$

which completes the fusion of the high-frequency sub-band coefficients.
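A compact Matlab sketch of the spatial-frequency-stimulated PCNN and the firing-count selection rule follows. The synaptic matrix W, the linking strength β and the threshold decay α_θ are assumptions (their values are not legible in the text); the remaining parameters are as stated above:

```matlab
% Sketch of the high-frequency rule: PCNN driven by local spatial frequency.
function HF = fusePCNN(HA, HB)
    TA = fireCounts(HA);  TB = fireCounts(HB);
    HF = HA;                              % keep A where TA >= TB ...
    HF(TB > TA) = HB(TB > TA);            % ... otherwise take B
end

function T = fireCounts(H)
    aL = 1.0; VL = 0.2; VT = 30; N = 150; % parameters stated in the patent
    aT = 0.2; beta = 0.2;                 % assumed: not legible in the text
    W  = [0.707 1 0.707; 1 0 1; 0.707 1 0.707];  % assumed synaptic matrix
    F  = localSF(H);                      % external stimulus F = SF
    [r, c] = size(H);
    L = zeros(r,c); Y = zeros(r,c); Th = zeros(r,c); T = zeros(r,c);
    for n = 1:N
        L  = exp(-aL)*L + VL*conv2(Y, W, 'same');  % linking, uses Y(n-1)
        Th = exp(-aT)*Th + VT*Y;                   % dynamic threshold, uses Y(n-1)
        U  = F .* (1 + beta*L);                    % internal activity
        Y  = double(U > Th);                       % pulse output Y(n)
        T  = T + Y;                                % total firing counts
    end
end

function SF = localSF(H)
    % Sliding-window spatial frequency sqrt(RF^2 + CF^2); 3x3 window assumed.
    dR = [zeros(size(H,1),1), diff(H,1,2)].^2;     % row-direction differences
    dC = [zeros(1,size(H,2)); diff(H,1,1)].^2;     % column-direction differences
    k  = ones(3)/9;
    SF = sqrt(conv2(dR, k, 'same') + conv2(dC, k, 'same'));
end
```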
Further, in step 3, the method of step 3) obtains the fused channel information through the inverse NSCT, and the color image is then synthesized from the RGB channels.
The improved-neural-network PET/MRI separate-scanner brain image fusion method of the present invention realizes the alignment of the brain anatomical positions through rigid PET/MRI registration based on brain contour pixels, and then applies the NSCT transform and the spatial-frequency-stimulated PCNN method to realize separate-scanner PET/MRI brain image information fusion. The method makes full use of the functional information provided by PET and the anatomical and soft-tissue information provided by MRI, improves the efficiency and accuracy of a physician's diagnosis, meets the requirements of clinical diagnosis both visually and in objective evaluation indices, effectively improves the quality of the fused medical image, provides diagnostic-assistance services for imaging diagnostic centers, and can substitute for part of the functionality of an integrated PET/MRI fusion device, with broad application prospects.
Brief description of the drawings
Fig. 1 is the flow chart of the improved-neural-network PET/MRI separate-scanner brain image fusion method of the present invention.
Fig. 2 is the simplified model of the PCNN neural network;
Fig. 3 shows, for the present example, the brain contour pixels obtained in step 2.1 by computing the gradient image with the Sobel operator and then binarizing and thinning the edges;
Fig. 4 shows, for the embodiment of the present invention, the result of the rigid registration of step 2.2 between the Intensity component of the PET image and the MRI image based on edge pixels, where Fig. 4(a) is the MRI image, Fig. 4(b) is the Intensity component of the PET image, and Fig. 4(c) is the registration result of the PET Intensity component;
Fig. 5 shows, for the embodiment of the present invention, the sub-bands of different scales and directions obtained in step 3.1 by the NSCT transform, where Fig. 5(a) is the MRI image to be fused and Fig. 5(b) is the sub-band information produced by the NSCT transform;
Fig. 6 is a schematic diagram of the fusion results of embodiment 1 of the present invention, where Fig. 6(a) is the MRI image, Fig. 6(b) is the PET image, Fig. 6(c) is the fusion result of the present invention, Fig. 6(d) is the fusion result of NSCT+PCNN, Fig. 6(e) is the fusion result of NSCT, and Fig. 6(f) is the DWT fusion result.
Fig. 7 is a schematic diagram of the fusion results of embodiment 2 of the present invention, where Fig. 7(a) is the MRI image, Fig. 7(b) is the PET image, Fig. 7(c) is the fusion result of the present invention, Fig. 7(d) is the fusion result of NSCT+PCNN, Fig. 7(e) is the fusion result of NSCT, and Fig. 7(f) is the DWT fusion result.
The present invention is described in further detail below in conjunction with the drawings and embodiments.
Embodiment
The present embodiment provides a PET/MRI separate-scanner brain image fusion method based on an improved neural network. Brain edge features are used to realize the alignment of the anatomical positions of PET and MRI, and the NSCT transform together with the spatial-frequency-stimulated pulse-coupled neural network realizes the separate-scanner fusion of the PET and MRI brain images, fully integrating anatomical and functional information, effectively preserving details and edge contours, and improving the fusion quality.
The hardware environment of the present embodiment is an AMD A8-6500 processor with a 3.5 GHz clock frequency and 8.0 GB of memory; the software environment is Windows 7 and Matlab 2013b. The MRI and PET images used in the experiments come from http://www.med.harvard.edu/AANLIB/.
The improved-neural-network PET/MRI separate-scanner brain image fusion method of the present embodiment, whose flow chart is shown in Fig. 1, is implemented as follows:
Step 1: perform the IHS transform and a color transform on the brain PET image to be fused (a color image), obtaining the corresponding intensity, hue and saturation components and the RGB three-channel components;
The specific method applies the IHS transform and the RGB transform to the brain PET image first, obtaining the Intensity component of the image, and then carries out the rigid registration with the Intensity component.
Step 2: perform rigid registration, based on brain contour pixels, between the intensity-component PET image and the MRI image, aligning the brain anatomical positions. The concrete steps are:
Step 2.1): compute the gradient image with the Sobel algorithm, then binarize the gradient image and thin the edges to obtain the brain contour pixels;
The specific method computes the gradient image with directional templates, binarizes and thins the gradient image to obtain the brain contour pixels, the thinning being realized with the bwmorph function.
Step 2.2): make the numbers of contour pixels equal by equidistant selection and interpolation, obtain the spatial transformation matrix of the two images by least squares, and use the spatial transformation matrix to realize the rigid registration of the two images and the alignment of the brain structures;
The specific method makes the numbers of PET and MRI brain contour pixels equal by equidistant selection and interpolation, uses a spatial transformation composed of horizontal translation, vertical translation and rotation, obtains the spatial transformation matrix by least squares, and applies the spatial transformation to the two images to align the brain anatomical structure positions.
Step 3: fuse the image information of the registered PET RGB channels and the MRI grayscale image. The concrete steps are as follows:
Step 3.1): apply the NSCT transform to the registered PET RGB channels and the MRI grayscale image, obtaining low- and high-frequency sub-band information at different scales and in different directions;
The specific method applies the NSCT transform to the PET RGB three-channel information and the MRI grayscale image, obtaining sub-band coefficients C_{j,k}^A(i) and C_{j,k}^B(i) at different scales and in different directions, where j denotes the decomposition scale, k the directional sub-band of each scale, M_j the maximum number of directional sub-bands at scale j, N_j the total number of coefficients of the corresponding sub-band, and A, B denote the MRI and PET source images; j = 1, 2, …, J; k = 1, 2, …, M_j; i = 1, 2, …, N_j.
Step 3.2): design the fusion rules: the low-frequency information is fused with the weighted-average method based on the local neighborhood, and the high-frequency information with the pulse-coupled neural network method stimulated by spatial frequency;
The specific method is as follows:
for the low-frequency information, the local neighborhood energy is defined as LE_X(x, y) = Σ_{m,n} w(m, n)[C_X(x+m, y+n)]², where w(m, n) is a 3×3 weighting window, and the weight coefficients at (x, y) are computed from the local neighborhood energies:

$$w_A=\frac{LE_A(x,y)}{LE_A(x,y)+LE_B(x,y)},\qquad w_B=\frac{LE_B(x,y)}{LE_A(x,y)+LE_B(x,y)}.$$

The fusion coefficient of the low-frequency sub-band is

$$C_F(x,y)=w_A\times C_A(x,y)+w_B\times C_B(x,y),$$

where C_F(x, y) is the low-frequency sub-band of the fused image and C_A(x, y), C_B(x, y) are the low-frequency sub-bands of the MRI and PET images;
For the high-frequency sub-band fusion rule, the pulse-coupled neural network method stimulated by spatial frequency is used. Initially all neurons are extinguished: U_ij(0) = 0, Y_ij(0) = 0, θ_ij(0) = 0, L_ij(0) = 0. The parameters are set to α_L = 1.0, V_L = 0.2, V_θ = 30 and iteration number N = 150, and the spatial frequency of the high-frequency sub-band is used as the external stimulus input of the PCNN; V_L is the amplification coefficient of the linking input, V_θ the amplification factor of the dynamic threshold, α_L the attenuation coefficient of the linking input, and α_θ the attenuation coefficient of the dynamic threshold;
The spatial frequency (SF) is defined as

$$SF=\sqrt{RF^2+CF^2},$$

where RF and CF denote the row frequency and the column frequency respectively. The iteration then proceeds according to

$$\begin{cases}F_{ij}(n)=SF_{ij}\\ L_{ij}(n)=e^{-\alpha_L}\,L_{ij}(n-1)+V_L\sum_{k,l}w_{ijkl}\,Y_{kl}(n-1)\\ U_{ij}(n)=F_{ij}(n)\,\big(1+\beta L_{ij}(n)\big)\\ \theta_{ij}(n)=e^{-\alpha_\theta}\,\theta_{ij}(n-1)+V_\theta\,Y_{ij}(n-1)\\ Y_{ij}(n)=\begin{cases}1,&U_{ij}(n)>\theta_{ij}(n)\\ 0,&U_{ij}(n)\le\theta_{ij}(n)\end{cases}\end{cases}$$

$$T_{ij}(n)=T_{ij}(n-1)+Y_{ij}(n)$$

and the total number of pulse firings of each neuron is counted, where L_ij is the linking input of the neuron, U_ij the internal activity, Y_ij the neuron output, w_ijkl the synaptic weight matrix, β the linking strength, and T_ij the total number of pulse firings;
The high-frequency sub-band fusion rule selects, at each position, the coefficient whose neuron fired more often:

$$C_F^{j,k}(i)=\begin{cases}C_A^{j,k}(i),&T_{ij}^A(N)\ge T_{ij}^B(N)\\ C_B^{j,k}(i),&\text{otherwise,}\end{cases}$$

which completes the fusion of the high-frequency sub-band coefficients.
Step 3.3): apply the inverse NSCT to the fused information to obtain the fused three-channel information. The specific method obtains the fused channel information through the inverse NSCT, and the color image is then synthesized from the RGB channels.
Step 3.4): synthesize the color image from the RGB channels, obtaining the fused MRI/PET brain image.
Specific embodiments provided by the inventors follow. It should be noted that these embodiments are preferred examples, and the invention is not limited to them.
Embodiment 1: MRI-PET coronal-plane image fusion
The following steps are carried out:
Step 1: apply the IHS transform to the PET image to be fused, obtaining the corresponding intensity, hue and saturation components and the RGB three-channel components;
Step 2: carry out edge-based rigid registration between the intensity-component PET image and the MRI image, aligning the brain structures;
1) compute the gradient image with the templates in 8 directions, then binarize and thin the gradient image to obtain the brain contour pixels;
1.1) compute the gradient image with the 8 directional templates;
1.2) binarize with an automatically determined threshold;
1.3) realize the thinning with the bwmorph function, as shown in Fig. 3.
2) make the numbers of contour pixels equal by equidistant selection and interpolation, obtain the spatial transformation matrix of the two images by least squares, and realize the alignment of the anatomical structures through the spatial transformation and interpolation; the rigid registration result is shown in Fig. 4. The transformation has the form

$$\begin{pmatrix}x'\\ y'\end{pmatrix}=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}+\begin{pmatrix}\Delta x\\ \Delta y\end{pmatrix},$$

where Δx is the translation of the two images along the X axis, Δy is the translation along the Y axis, and θ is the rotation.
Step 3: fuse the image information of the registered PET RGB channels and the MRI grayscale image.
1) apply the NSCT transform to the registered PET RGB channels and the MRI grayscale image, obtaining low- and high-frequency sub-band coefficients C_{j,k}^A(i) and C_{j,k}^B(i) (j = 1, 2, …, J; k = 1, 2, …, M_j; i = 1, 2, …, N_j) at different scales and in different directions. The scale decomposition uses the Laplacian pyramid, the directional filter bank (DFB) uses 'pkva', and the directional decomposition parameter is set to [0, 1, 3, 4], i.e. 4 scale decompositions whose numbers of directional sub-bands are 1, 2, 8 and 16 in turn; j denotes the decomposition scale, k the directional sub-band of each scale, M_j the maximum number of directional sub-bands at scale j, and N_j the total number of coefficients of the corresponding sub-band; A and B denote the MRI and PET source images. The frequency information obtained by the NSCT transform of the MRI image is shown in Fig. 5.
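Assuming the openly distributed Nonsubsampled Contourlet Toolbox, whose nsctdec/nsctrec functions take the image, the directional level vector and the filter names (argument order and the pyramid filter name should be verified against the local copy), step 3.1 could be sketched as:

```matlab
% Sketch of step 3.1 with the (assumed) NSCT toolbox interface.
levels  = [0 1 3 4];                 % 1, 2, 8, 16 directional sub-bands per scale
coefMRI = nsctdec(double(mriGray), levels, 'pkva', 'maxflat');  % pyramid filter assumed
coefR   = nsctdec(double(petR),    levels, 'pkva', 'maxflat');  % one PET channel
% coefMRI{1} is the low-pass sub-band; coefMRI{j}{k} holds the k-th
% directional high-frequency sub-band at scale j.
coefFused = coefMRI;  % placeholder: replace sub-bands per the fusion rules above
fusedR    = nsctrec(coefFused, 'pkva', 'maxflat');              % inverse NSCT
```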
2) the low-frequency information is fused with the weighted-average method based on the local neighborhood; for the high-frequency information, a pulse-coupled neural network stimulated by spatial frequency is designed for the fusion;
2.1) The low-frequency sub-band information is fused with the weighted-average method based on the local neighborhood. The local neighborhood energy is defined as LE_X(x, y) = Σ_{m,n} w(m, n)[C_X(x+m, y+n)]², where w(m, n) is a 3×3 window weight matrix. The weight coefficients at (x, y) are computed from the local neighborhood energies:

$$w_A=\frac{LE_A(x,y)}{LE_A(x,y)+LE_B(x,y)},\qquad w_B=\frac{LE_B(x,y)}{LE_A(x,y)+LE_B(x,y)}.$$

The fusion coefficient of the low-frequency sub-band is

$$C_F(x,y)=w_A\times C_A(x,y)+w_B\times C_B(x,y),$$

where C_F(x, y) is the low-frequency sub-band of the fused image and C_A(x, y), C_B(x, y) are the low-frequency sub-bands of the MRI and PET images.
2.2) The high-frequency sub-band fusion rule uses the pulse-coupled neural network method stimulated by spatial frequency. Initially all neurons are extinguished: U_ij(0) = 0, Y_ij(0) = 0, θ_ij(0) = 0, L_ij(0) = 0, and the spatial frequency of the high-frequency sub-band is used as the external stimulus input of the PCNN;
The spatial frequency (SF) is defined as

$$SF=\sqrt{RF^2+CF^2},$$

where RF and CF denote the row frequency and the column frequency respectively.
The iteration then proceeds according to

$$\begin{cases}F_{ij}(n)=SF_{ij}\\ L_{ij}(n)=e^{-\alpha_L}\,L_{ij}(n-1)+V_L\sum_{k,l}w_{ijkl}\,Y_{kl}(n-1)\\ U_{ij}(n)=F_{ij}(n)\,\big(1+\beta L_{ij}(n)\big)\\ \theta_{ij}(n)=e^{-\alpha_\theta}\,\theta_{ij}(n-1)+V_\theta\,Y_{ij}(n-1)\\ Y_{ij}(n)=\begin{cases}1,&U_{ij}(n)>\theta_{ij}(n)\\ 0,&U_{ij}(n)\le\theta_{ij}(n)\end{cases}\end{cases}$$

$$T_{ij}(n)=T_{ij}(n-1)+Y_{ij}(n)$$

and the total number of pulse firings of each neuron is counted, where L_ij is the linking input of the neuron, U_ij the internal activity, Y_ij the neuron output, w_ijkl the synaptic weight matrix, β the linking strength, and T_ij the total number of pulse firings.
With the synaptic weight matrix fixed and the parameters set to α_L = 1.0, V_L = 0.2, V_θ = 30 and iteration number N = 150, the high-frequency sub-band fusion rule

$$C_F^{j,k}(i)=\begin{cases}C_A^{j,k}(i),&T_{ij}^A(N)\ge T_{ij}^B(N)\\ C_B^{j,k}(i),&\text{otherwise}\end{cases}$$

completes the fusion of the high-frequency sub-band coefficients.
2.3) apply the inverse NSCT to the fused sub-band information to obtain the fused three-channel information.
2.4) synthesize the color image from the RGB channels, obtaining the final fused MRI/PET brain image; the fused image is shown in Fig. 6(c).
Embodiment 2: MRI-PET cross-section image fusion
The implementation steps follow embodiment 1, and the parameter settings are identical.
To verify the feasibility and validity of the present invention, separate-scanner information fusion was carried out on brain PET and MRI images. Table 1 below gives the objective evaluation of the fusion results obtained by the different fusion methods.
Table 1: objective evaluation of the fusion results of embodiment 1
The quality of the fused image is measured by the standard deviation, the entropy, the clarity, the average gradient and Q_abf. The standard deviation reflects the dispersion of the gray levels: the larger the standard deviation, the richer the information. The entropy represents the information content of the image: the larger the entropy, the more information it contains. The clarity reflects the image's ability to express fine details: the higher the clarity, the greater the information content and the better the fusion. The average gradient reflects the image's ability to express edge details: the larger the average gradient, the better the fusion. Q_abf measures the fusion quality at the edges, i.e. the structural similarity between the fusion result and the source images: the larger Q_abf, the better the fusion. From the objective evaluation of the fusion results in Table 1 it can be seen that, compared with the NSCT+PCNN, NSCT and DWT methods, the method of the present invention attains the largest standard deviation, entropy, clarity, average gradient and Q_abf, giving the best fusion quality. From the fusion results in the coronal plane and cross-section in Figs. 6-7, the fusion obtained with the method of the present invention preserves contours and texture features to the greatest extent, with more prominent details and the best subjective quality.
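Three of the indices in Table 1 can be computed directly from the fused image; a sketch follows (Q_abf additionally needs the source images and a gradient-preservation model, so it is omitted, and the clarity index is commonly computed in the same spirit as the average gradient):

```matlab
% Sketch: standard deviation, entropy and average gradient of a fused
% image F (grayscale, double, values in [0,1]).
function [sd, en, ag] = fusionMetrics(F)
    sd = std(F(:));                             % dispersion of gray levels
    p  = imhist(im2uint8(F)) / numel(F);        % gray-level probabilities
    p  = p(p > 0);
    en = -sum(p .* log2(p));                    % entropy in bits
    [gx, gy] = gradient(F);
    ag = mean2(sqrt((gx.^2 + gy.^2) / 2));      % average gradient
end
```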
The above embodiments serve only to help those skilled in the art understand the present invention better. Any equivalent substitution, modification or addition made by a person familiar with this technical field, within the scope disclosed by this invention and in accordance with the technical solution of the invention, belongs to the protection scope of the present invention.

Claims (7)

1. A PET/MRI separate-scanner brain image information fusion method based on an improved neural network, characterized in that it is implemented as follows:
Step 1: perform the IHS transform and a color transform on the brain PET image to be fused, obtaining the corresponding intensity, hue and saturation components and the RGB three-channel components;
Step 2: perform rigid registration, based on brain contour pixels, between the intensity-component PET image and the MRI image, aligning the brain anatomical positions. The concrete steps are:
1) compute the gradient image with the Sobel algorithm, then binarize the gradient image and thin the edges to obtain the brain contour pixels;
2) make the numbers of contour pixels equal by equidistant selection and interpolation, obtain the spatial transformation matrix of the two images by least squares, and use the spatial transformation matrix to realize the rigid registration of the two images and the alignment of the brain structures;
Step 3: fuse the image information of the registered PET RGB channels and the MRI grayscale image. The concrete steps are as follows:
1) apply the NSCT transform to the registered PET RGB channels and the MRI grayscale image, obtaining low- and high-frequency sub-band information at different scales and in different directions;
2) design the fusion rules: the low-frequency information is fused with a weighted-average method based on the local neighborhood, and the high-frequency information with a pulse-coupled neural network method stimulated by spatial frequency;
3) apply the inverse NSCT to the fused information to obtain the fused three-channel information;
4) synthesize the color image from the RGB channels, obtaining the fused MRI/PET brain image.
2. The method as claimed in claim 1, characterized in that in step 1 the brain PET image first undergoes the IHS transform and the RGB transform to obtain the Intensity component of the image, and the rigid registration is then carried out with the Intensity component.
3. The method as claimed in claim 1, characterized in that the specific method of step 1) in step 2 computes the gradient image with templates in different directions, binarizes and thins the gradient image to obtain the brain contour pixels, the thinning being realized with the bwmorph function.
4. The method as claimed in claim 1, characterized in that the specific method of step 2) in step 2 makes the numbers of PET and MRI brain contour pixels equal by equidistant selection and interpolation, uses a spatial transformation composed of horizontal translation, vertical translation and rotation, obtains the spatial transformation matrix by least squares, and applies the spatial transformation to the two images to realize the alignment of the brain structures.
5. The method as claimed in claim 1, characterized in that the specific method of step 1) in step 3 applies the NSCT transform to the PET RGB three-channel information and the MRI grayscale image, obtaining sub-band coefficients C_{j,k}^A(i) and C_{j,k}^B(i) at different scales and in different directions, where j denotes the decomposition scale, k the directional sub-band of each scale, M_j the maximum number of directional sub-bands at scale j, N_j the total number of coefficients of the corresponding sub-band, and A, B denote the MRI and PET source images; j = 1, 2, …, J; k = 1, 2, …, M_j; i = 1, 2, …, N_j.
6. The method as claimed in claim 1, characterized in that the specific method of step 2) in step 3 is as follows:
the low-frequency information is fused with the weighted-average method based on the local neighborhood; the local neighborhood energy is defined as LE_X(x, y) = Σ_{m,n} w(m, n)[C_X(x+m, y+n)]², where w(m, n) is a 3×3 weighting window, and the weight coefficients at (x, y) are computed from the local neighborhood energies:

$$w_A=\frac{LE_A(x,y)}{LE_A(x,y)+LE_B(x,y)},\qquad w_B=\frac{LE_B(x,y)}{LE_A(x,y)+LE_B(x,y)};$$

the fusion coefficient of the low-frequency sub-band is

$$C_F(x,y)=w_A\times C_A(x,y)+w_B\times C_B(x,y),$$

where C_F(x, y) is the low-frequency sub-band of the fused image and C_A(x, y), C_B(x, y) are the low-frequency sub-bands of the MRI and PET images;
the high-frequency sub-band fusion rule uses the pulse-coupled neural network method stimulated by spatial frequency; initially all neurons are extinguished: U_ij(0) = 0, Y_ij(0) = 0, θ_ij(0) = 0, L_ij(0) = 0; the parameters are set to α_L = 1.0, V_L = 0.2, V_θ = 30 and iteration number N = 150, and the spatial frequency of the high-frequency sub-band is used as the external stimulus input of the PCNN; V_L is the amplification coefficient of the linking input, V_θ the amplification factor of the dynamic threshold, α_L the attenuation coefficient of the linking input, and α_θ the attenuation coefficient of the dynamic threshold;
the spatial frequency (SF) is defined as SF = √(RF² + CF²), where RF and CF denote the row frequency and the column frequency respectively; the iteration then proceeds according to

$$\begin{cases}F_{ij}(n)=SF_{ij}\\ L_{ij}(n)=e^{-\alpha_L}\,L_{ij}(n-1)+V_L\sum_{k,l}w_{ijkl}\,Y_{kl}(n-1)\\ U_{ij}(n)=F_{ij}(n)\,\big(1+\beta L_{ij}(n)\big)\\ \theta_{ij}(n)=e^{-\alpha_\theta}\,\theta_{ij}(n-1)+V_\theta\,Y_{ij}(n-1)\\ Y_{ij}(n)=\begin{cases}1,&U_{ij}(n)>\theta_{ij}(n)\\ 0,&U_{ij}(n)\le\theta_{ij}(n)\end{cases}\end{cases}$$

$$T_{ij}(n)=T_{ij}(n-1)+Y_{ij}(n)$$

and the total number of pulse firings of each neuron is counted, where L_ij is the linking input of the neuron, U_ij the internal activity, Y_ij the neuron output, w_ijkl the synaptic weight matrix, β the linking strength, and T_ij the total number of pulse firings;
the high-frequency sub-band fusion rule

$$C_F^{j,k}(i)=\begin{cases}C_A^{j,k}(i),&T_{ij}^A(N)\ge T_{ij}^B(N)\\ C_B^{j,k}(i),&\text{otherwise}\end{cases}$$

completes the fusion of the high-frequency sub-band coefficients.
7. The method as claimed in claim 1, characterized in that the method of step 3) in step 3 obtains the fused channel information through the inverse NSCT and synthesizes the color image from the RGB channels.
CN201711248590.XA 2017-12-01 2017-12-01 PET/MRI separate-scanner brain image information fusion method based on an improved neural network Active CN107977926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711248590.XA CN107977926B (en) PET/MRI separate-scanner brain image information fusion method based on an improved neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711248590.XA CN107977926B (en) PET/MRI separate-scanner brain image information fusion method based on an improved neural network

Publications (2)

Publication Number Publication Date
CN107977926A true CN107977926A (en) 2018-05-01
CN107977926B CN107977926B (en) 2021-05-18

Family

ID=62009037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711248590.XA Active CN107977926B (en) PET/MRI separate-scanner brain image information fusion method based on an improved neural network

Country Status (1)

Country Link
CN (1) CN107977926B (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447837A (en) * 2016-02-04 2016-03-30 重庆邮电大学 Multi-mode brain image fusion method based on adaptive cloud model

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
JEAN-FRANCOIS MANGIN et al.: "Nonsupervised 3D Registration of PET and MRI Data using Chamfer Matching", IEEE Conference on Nuclear Science Symposium and Medical Imaging *
夏加星 et al.: "Medical image fusion using adaptive PCNN with neighborhood excitation", Application Research of Computers *
张锦翔 et al.: "Fusion method for human brain MRI and PET images", Chinese Journal of Anatomy *
曲延华 et al.: "Research on medical image registration methods", Science & Technology Information *
段锋 et al.: "Application of Chamfer Matching to CT and MRI image matching and fusion", Journal of Air Force Engineering University (Natural Science Edition) *
王文文 et al.: "PET/CT medical image fusion algorithm based on compressed sensing and NSCT-PCNN", Journal of Chongqing University of Technology (Natural Science) *
纪峰 et al.: "Adaptive image fusion based on NSCT and PCNN", Journal of Ningxia University (Natural Science Edition) *
陈俊强 et al.: "An improved algorithm for NSCT and adaptive PCNN based medical image fusion", Journal of Changchun University of Science and Technology (Natural Science Edition) *
马腾飞: "Rigid medical image registration method based on principal component analysis and neural network", Science Technology and Engineering *
龙华飞: "Research on PET and MRI medical image fusion", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108846327A (en) * 2018-05-29 2018-11-20 中国人民解放军总医院 A kind of intelligent distinguishing system and method for mole and melanoma
CN109035356B (en) * 2018-07-05 2020-07-10 四川大学 System and method based on PET (positron emission tomography) graphic imaging
CN109035356A (en) * 2018-07-05 2018-12-18 四川大学 A kind of system and method based on PET pattern imaging
CN109598745A (en) * 2018-12-25 2019-04-09 上海联影智能医疗科技有限公司 Method for registering images, device and computer equipment
CN109584658A (en) * 2019-02-01 2019-04-05 姜培生 Online teaching method, electronic equipment and system
CN109934887A (en) * 2019-03-11 2019-06-25 吉林大学 A kind of Method of Medical Image Fusion based on improved Pulse Coupled Neural Network
CN109934887B (en) * 2019-03-11 2023-05-30 吉林大学 Medical image fusion method based on improved pulse coupling neural network
CN110288641A (en) * 2019-07-03 2019-09-27 武汉瑞福宁科技有限公司 PET/CT and the different machine method for registering of MRI brain image, device, computer equipment and storage medium
CN110652307A (en) * 2019-09-11 2020-01-07 中国科学院自动化研究所 Functional nuclear magnetic image-based striatum function detection method for schizophrenia patient
CN110796630B (en) * 2019-10-29 2022-08-30 上海商汤智能科技有限公司 Image processing method and device, electronic device and storage medium
CN110796630A (en) * 2019-10-29 2020-02-14 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN112884820A (en) * 2019-11-29 2021-06-01 杭州三坛医疗科技有限公司 Method, device and equipment for training initial image registration and neural network
CN113240616A (en) * 2021-05-27 2021-08-10 云南大学 Brain medical image fusion method and system
CN113222975A (en) * 2021-05-31 2021-08-06 湖北工业大学 High-precision retinal vessel segmentation method based on improved U-net
CN114403817A (en) * 2022-01-25 2022-04-29 首都医科大学附属北京安贞医院 Method and device for measuring radial variation of coronary artery

Also Published As

Publication number Publication date
CN107977926B (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN107977926A (en) A kind of different machine brain phantom information fusion methods of PET/MRI for improving neutral net
Liu et al. Multi-modality medical image fusion based on image decomposition framework and nonsubsampled shearlet transform
Du et al. Union Laplacian pyramid with multiple features for medical image fusion
Yang et al. Multimodal sensor medical image fusion based on type-2 fuzzy logic in NSCT domain
CN103985105B Multimodal medical image fusion method in the contourlet domain based on statistical modeling
Shabanzade et al. Combination of wavelet and contourlet transforms for PET and MRI image fusion
CN110660063A Accurate three-dimensional tumor localization system based on multi-image fusion
CN106504221B Medical image fusion method based on the quaternion wavelet transform context mechanism
CN109934887A Medical image fusion method based on an improved pulse-coupled neural network
Nair et al. MAMIF: multimodal adaptive medical image fusion based on B-spline registration and non-subsampled shearlet transform
Salau et al. A review of various image fusion types and transform
CN115100172A (en) Fusion method of multi-modal medical images
Haribabu et al. Recent advancements in multimodal medical image fusion techniques for better diagnosis: an overview
Dolly et al. A survey on different multimodal medical image fusion techniques and methods
CN114821259A (en) Zero-learning medical image fusion method based on twin convolutional neural network
CN115222637A (en) Multi-modal medical image fusion method based on global optimization model
Basu et al. A systematic literature review on multimodal medical image fusion
Kusakunniran et al. Automated tongue segmentation using deep encoder-decoder model
Nobariyan et al. A new MRI and PET image fusion algorithm based on pulse coupled neural network
CN115731444A (en) Medical image fusion method based on artificial intelligence and superpixel segmentation
Rao et al. Deep learning-based medical image fusion using integrated joint slope analysis with probabilistic parametric steered image filter
Ghandour et al. Comprehensive performance analysis of different medical image fusion techniques for accurate healthcare diagnosis applications
Shaohai et al. Block-matching based multimodal medical image fusion via PCNN with SML
Das et al. Multimodal image sensor fusion in a cascaded framework using optimized dual channel pulse coupled neural network
Rani et al. Efficient fused convolution neural network (EFCNN) for feature level fusion of medical images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant