CN110363797B - PET and CT image registration method based on excessive deformation inhibition - Google Patents

PET and CT image registration method based on excessive deformation inhibition

Info

Publication number
CN110363797B
Authority
CN
China
Prior art keywords
pet
image
registration
image block
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910634301.2A
Other languages
Chinese (zh)
Other versions
CN110363797A (en)
Inventor
姜慧研
康鸿健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201910634301.2A
Publication of CN110363797A
Application granted
Publication of CN110363797B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10104 Positron emission tomography [PET]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to the technical field of medical image registration and provides a PET and CT image registration method based on excessive deformation inhibition. First, two-dimensional PET/CT sequence images are acquired to obtain a PET/CT sequence image set, which is preprocessed into a PET/CT image block training set. A PET/CT registration network is then constructed based on a 3D U-Net convolutional neural network, and a cost function is built by combining an image similarity constraint term with an excessive deformation suppression term. The neural network weight parameters are initialized, the hyperparameters are set, and the PET/CT image block training set is fed into the PET/CT registration network for iterative training. Finally, the PET/CT images to be registered are input into the trained PET/CT registration network to generate registered PET image blocks. The invention realizes PET/CT elastic registration, improves registration efficiency and accuracy, and reduces the computational cost of suppressing excessive deformation.

Description

PET and CT image registration method based on excessive deformation inhibition
Technical Field
The invention relates to the technical field of medical image registration, in particular to a PET and CT image registration method based on excessive deformation inhibition.
Background
Medical image registration plays an important role in many medical image processing tasks. Image registration is often formulated as an optimization problem to seek a spatial transformation that establishes a pixel/voxel correspondence between a pair of fixed and moving images by maximizing a surrogate metric of spatial correspondence between the images (e.g., image intensity correlation between the registered images).
Positron Emission Tomography (PET) uses a cyclotron to produce radioisotopes such as 18F and 13N, which are injected intravenously and participate in human metabolism. Tissues or lesions with a high metabolic rate appear as bright, high-metabolism signals on PET, while tissues or lesions with a low metabolic rate appear as dark, low-metabolism signals. Computed Tomography (CT) scans a part of the human body slice by slice with an X-ray beam of a given slice thickness; as the X-rays pass through the body, part of the beam is absorbed by the tissue and part passes through the examined organs and is received by detectors, producing a signal from which an accurately localized image is reconstructed.
PET/CT fuses functional images and anatomical images acquired on the same machine and is an important development in imaging medicine. Multi-modal image registration exploits the characteristics of different imaging modalities to provide complementary information, increases the amount of image information, helps to understand the nature of lesions and their relation to the surrounding anatomy more comprehensively, and provides an effective basis for localization in clinical diagnosis and treatment. Fusing PET and CT images of different modalities yields a fused image containing both structural and functional information, which is of great significance for medical image analysis and diagnosis. PET/CT registration is nevertheless a challenging task, because the pixel-intensity similarity between PET and CT images is low, which tends to produce excessive deformation after registration.
Most existing PET and CT image registration methods are based on iterative optimization: the registration problem is converted into an optimization problem that minimizes a cost function. Commonly used cost functions include Mean Square Error (MSE), Mutual Information (MI), Normalized Mutual Information (NMI), Normalized Cross-Correlation (NCC) and Gradient Correlation (GC). These similarity metrics compare images directly at the pixel level and do not reflect higher-level structures in the image. Global optimization methods such as simulated annealing and genetic algorithms exist, but they require comprehensive sampling of the parameter space, which leads to excessive computational cost. Existing PET and CT image registration methods therefore suppress excessive deformation at high computational cost and register images with limited efficiency and accuracy.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a PET and CT image registration method based on excessive deformation inhibition, which realizes PET/CT elastic registration, improves registration efficiency and accuracy, and reduces the computational cost of suppressing excessive deformation.
The technical scheme of the invention is as follows:
a PET and CT image registration method based on excessive deformation inhibition is characterized by comprising the following steps:
step 1: acquiring two-dimensional PET sequence images and two-dimensional CT sequence images of m patients to obtain a PET/CT sequence image set;
Step 2: preprocessing the PET/CT sequence image set to obtain a PET/CT image block training set; the preprocessing comprises calculating SUV and Hu values, limiting the threshold range, adjusting the image resolution, generating image blocks and performing normalization;
Step 3: constructing a PET/CT registration network based on a 3D U-Net convolutional neural network;
Step 4: combining the image similarity constraint term and the excessive deformation suppression term to construct a cost function of the PET/CT registration network; the similarity constraint term is the normalized cross-correlation NCC, and the excessive deformation suppression term is the sum of penalty terms based on the differences between displacement vector field elements and elements drawn from a Gaussian distribution;
Step 5: initializing the neural network weight parameters, setting the batch size N, the regularization term weight λ, the maximum iteration count COUNT, the network learning rate and the optimizer, and adopting a learning rate decay strategy;
Step 6: taking the PET/CT image block training set as the input of the PET/CT registration network and outputting a displacement vector field; inputting the displacement vector field together with the PET image block into a spatial transformer to obtain a registered PET image block; computing the similarity constraint term from the CT image block and the registered PET image block and the excessive deformation suppression term from the displacement vector field; updating the neural network weight parameters through the cost function by back propagation; and iteratively training the PET/CT registration network in this way until the maximum iteration count COUNT is reached, obtaining the trained PET/CT registration network;
Step 7: performing the preprocessing of step 2 on the PET/CT image pair to be registered, inputting the resulting PET/CT image block pairs into the trained PET/CT registration network, generating the registered PET image blocks, and visualizing the result.
The step 2 comprises the following steps:
Step 2.1: calculating the SUV value of the two-dimensional PET sequence images as SUV = Pixel_PET × LBM × 1000 / injected dose, and the Hu value of the two-dimensional CT sequence images as Hu = Pixel_CT × slope + intercept; wherein Pixel_PET is the pixel value of the PET sequence image, LBM is the lean body mass, and injected dose is the tracer injection dose; Pixel_CT is the pixel value of the CT sequence image, slope is the slope, and intercept is the intercept;
Step 2.2: performing image contrast enhancement on the two-dimensional PET sequence images and the two-dimensional CT sequence images, adjusting the Hu window width and window level to [a1, b1] and limiting the SUV values to [a2, b2]; wherein a1, b1, a2 and b2 are all constants;
Step 2.3: adjusting the resolution of the two-dimensional CT sequence images from 512 × 512 to the size of the two-dimensional PET sequence images, H × W = 128 × 128;
Step 2.4: generating three-dimensional volume data [H, W, D_PET,i] and [H, W, D_CT,i] from the two-dimensional PET and CT sequence images of the i-th patient, transforming the three-dimensional volume data into five-dimensional volume data [N, H, W, D_i, C], cutting the five-dimensional volume data along the Z direction with d pixels as the sampling interval to generate multiple pairs of H × W × D image blocks, normalizing the image blocks to obtain an image block set, and randomly selecting pairs of PET and CT image blocks from the image block set to form the PET/CT image block training set; wherein i ∈ {1, 2, …, m}, D_PET,i is the number of slices of the PET sequence images of the i-th patient, D_CT,i is the number of slices of the CT sequence images of the i-th patient, and D_PET,i = D_CT,i = D_i; N is the batch size and C is the number of channels of the input neural network data, C = 2.
In step 2, [a1, b1] = [-90, 300], [a2, b2] = [0, 5], d = 32, D = 64.
In step 2.4, the image blocks are normalized as x* = (x - μ) / σ, transforming the image block data to a distribution with mean 0 and standard deviation 1; wherein x and x* are the pixel values before and after normalization, and μ and σ are the mean and standard deviation of all pixels in the image block.
In step 3, the PET/CT registration network constructed based on the 3D U-Net convolutional neural network comprises an encoding path and a decoding path, each with 4 resolution levels; the encoding path has n1 layers, each comprising a convolution layer with a 3 × 3 × 3 kernel and a stride of 2, followed by a BN layer and a ReLU layer; the decoding path has n2 layers, each comprising a deconvolution layer with a 3 × 3 × 3 kernel and a stride of 2, followed by a BN layer and a ReLU layer; layers of the same resolution in the encoding path are passed to the decoding path through shortcut connections, providing the decoding path with the original high-resolution features; the last layer of the PET/CT registration network is a 3 × 3 × 3 convolution layer, and the number of final output channels is 3.
In step 4, combining the image similarity constraint term and the excessive deformation suppression term, the cost function of the PET/CT registration network is

Loss(F, M, D_v) = - L_NCC(F, M∘D_v) + λ · L_reg(D_v)

wherein F and M are the CT image block and the PET image block respectively, D_v is the displacement vector field matrix, M∘D_v denotes the PET image block warped by D_v, λ is the weight of the regularization term, and the Gaussian distribution used in the regularization term has mean μ and standard deviation θ;

the similarity constraint term is the normalized cross-correlation

L_NCC(S, T) = Σ_(s,t) [S(s,t) - E(S)][T(s,t) - E(T)] / sqrt( Σ_(s,t) [S(s,t) - E(S)]² · Σ_(s,t) [T(s,t) - E(T)]² )

wherein S is the subgraph, T is the template image, (s, t) is the coordinate index, S(s, t) and T(s, t) are the pixel values of the subgraph and the template image, and E(S) and E(T) are their average gray values;

the excessive deformation suppression term is

L_reg(D_v) = Σ_i f(i, j, θ)

wherein i is an element of the displacement vector field matrix D_v, j is a random number following the Gaussian distribution g(x) = exp(-(x - μ)² / (2θ²)) / (θ·sqrt(2π)), and f(i, j, θ) is a penalty term that amplifies large deviations, with f(i, j, θ) = |i - j|^k when |i - j| > θ.
the invention has the beneficial effects that:
the method is characterized in that a PET/CT registration network is constructed based on a 3DU-Net convolution neural network, a displacement vector field is predicted through an unsupervised end-to-end 3D elastic registration neural network based on deep learning, voxel-by-voxel displacement prediction is carried out on an image to be registered, normalized cross correlation is used as a similarity constraint item, and the inhibition of an excessive deformation inhibition item on image deformation is combined to construct a cost function of the PET/CT registration network, so that the problem of excessive registration deformation caused by low similarity of the PET/CT image can be solved, PET/CT elastic registration can be realized, the registration efficiency and accuracy are improved, and the calculation cost of excessive deformation inhibition is reduced.
Drawings
FIG. 1 is a flow chart of a PET and CT image registration method based on excessive deformation suppression according to the present invention;
FIG. 2 is a schematic structural diagram of a PET/CT registration network in the PET and CT image registration method based on excessive deformation suppression according to the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of the PET and CT image registration method based on excessive deformation suppression according to the present invention. The method comprises the following steps:
Step 1: acquiring two-dimensional PET sequence images and two-dimensional CT sequence images of m patients to obtain a PET/CT sequence image set.
Step 2: preprocessing the PET/CT sequence image set to obtain a PET/CT image block training set; the preprocessing comprises calculating the SUV and Hu values, limiting the threshold range, adjusting the image resolution, generating image blocks and performing normalization.
The step 2 comprises the following steps:
Step 2.1: calculating the SUV value of the two-dimensional PET sequence images as SUV = Pixel_PET × LBM × 1000 / injected dose, and the Hu value of the two-dimensional CT sequence images as Hu = Pixel_CT × slope + intercept; wherein Pixel_PET is the pixel value of the PET sequence image, LBM is the lean body mass, and injected dose is the tracer injection dose; Pixel_CT is the pixel value of the CT sequence image, slope is the slope, and intercept is the intercept;
Step 2.2: performing image contrast enhancement on the two-dimensional PET sequence images and the two-dimensional CT sequence images, adjusting the Hu window width and window level to [a1, b1] and limiting the SUV values to [a2, b2]; wherein a1, b1, a2 and b2 are all constants;
Step 2.3: adjusting the resolution of the two-dimensional CT sequence images from 512 × 512 to the size of the two-dimensional PET sequence images, H × W = 128 × 128;
Step 2.4: generating three-dimensional volume data [H, W, D_PET,i] and [H, W, D_CT,i] from the two-dimensional PET and CT sequence images of the i-th patient, transforming the three-dimensional volume data into five-dimensional volume data [N, H, W, D_i, C], cutting the five-dimensional volume data along the Z direction with d pixels as the sampling interval to generate multiple pairs of H × W × D image blocks, normalizing the image blocks to obtain an image block set, and randomly selecting pairs of PET and CT image blocks from the image block set to form the PET/CT image block training set; wherein i ∈ {1, 2, …, m}, D_PET,i is the number of slices of the PET sequence images of the i-th patient, D_CT,i is the number of slices of the CT sequence images of the i-th patient, and D_PET,i = D_CT,i = D_i; N is the batch size and C is the number of channels of the input neural network data, C = 2.
In this embodiment, m = 176; three-dimensional volume data are generated from the processed PET SUV images and the processed CT Hu images respectively and stored in a database; in step 2, [a1, b1] = [-90, 300], [a2, b2] = [0, 5], d = 32, D = 64.
Of the 176 patients, 900 pairs of SUV/Hu image blocks generated from the volume data of 141 patients were randomly selected as the PET/CT image block training set, and 259 pairs generated from the volume data of the remaining 35 patients were used as the validation set.
In step 2.4, the image blocks are normalized as x* = (x - μ) / σ, transforming the image block data to a distribution with mean 0 and standard deviation 1; wherein x and x* are the pixel values before and after normalization, and μ and σ are the mean and standard deviation of all pixels in the image block.
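For illustration only, the preprocessing of step 2 can be sketched in Python as follows. This is a minimal sketch, not the patented implementation: the function name preprocess_pair and its arguments (pet_pixels, ct_pixels, lbm, injected_dose, slope, intercept) are hypothetical, the input volumes are assumed to be already loaded as (D, H, W) arrays together with their DICOM scaling values, and SciPy is assumed to be available for the resampling.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_pair(pet_pixels, ct_pixels, lbm, injected_dose, slope, intercept,
                    hu_range=(-90, 300), suv_range=(0, 5), depth=64, stride=32):
    """Sketch of step 2: SUV/Hu conversion, windowing, resampling, patching, normalization."""
    # Step 2.1: physical-value conversion
    suv = pet_pixels * lbm * 1000.0 / injected_dose   # SUV = Pixel_PET x LBM x 1000 / injected dose
    hu = ct_pixels * slope + intercept                # Hu = Pixel_CT x slope + intercept

    # Step 2.2: limit the SUV values and the Hu window
    suv = np.clip(suv, *suv_range)
    hu = np.clip(hu, *hu_range)

    # Step 2.3: resample the 512 x 512 CT slices to the 128 x 128 PET in-plane size
    h, w = suv.shape[1], suv.shape[2]                 # volumes are (D, H, W)
    hu = zoom(hu, (1.0, h / hu.shape[1], w / hu.shape[2]), order=1)

    # Step 2.4: cut depth-D blocks with stride d and z-score normalize each block
    blocks = []
    for z in range(0, suv.shape[0] - depth + 1, stride):
        pet_blk = suv[z:z + depth]
        ct_blk = hu[z:z + depth]
        pet_blk = (pet_blk - pet_blk.mean()) / (pet_blk.std() + 1e-8)   # x* = (x - mu) / sigma
        ct_blk = (ct_blk - ct_blk.mean()) / (ct_blk.std() + 1e-8)
        blocks.append((pet_blk.astype(np.float32), ct_blk.astype(np.float32)))
    return blocks
```

The returned list of block pairs corresponds to the image block set from which the training pairs are randomly drawn.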
Step 3: constructing a PET/CT registration network based on the 3D U-Net convolutional neural network, as shown in Fig. 2.
The PET/CT registration network of the present invention comprises: (1) a 3D U-Net that regresses the displacement vector field; (2) a spatial transformer that performs the spatial transformation. In this embodiment, the PET/CT registration network constructed in step 3 based on the 3D U-Net convolutional neural network comprises an encoding path and a decoding path, each with 4 resolution levels; the encoding path has n1 layers, each comprising a convolution layer with a 3 × 3 × 3 kernel and a stride of 2, followed by a BN layer and a ReLU layer; the decoding path has n2 layers, each comprising a deconvolution layer with a 3 × 3 × 3 kernel and a stride of 2, followed by a BN layer and a ReLU layer; layers of the same resolution in the encoding path are passed to the decoding path through shortcut connections, providing the decoding path with the original high-resolution features; the last layer of the PET/CT registration network is a 3 × 3 × 3 convolution layer, and the number of final output channels is 3.
Here the BN layer is a batch normalization layer, the ReLU layer is a rectified linear unit layer, and the shortcut is a skip connection. The last layer is a 3 × 3 × 3 convolution layer that reduces the number of output channels to 3, representing the x, y and z directions.
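A compact Keras sketch of such a registration network is shown below. Only the elements stated above are taken from the description (3 × 3 × 3 kernels, stride-2 convolutions and deconvolutions, BN and ReLU after each layer, shortcut connections, and a final 3-channel convolution); the filter counts, the stride-1 first level and the exact number of layers are assumptions, since n1 and n2 are not fixed in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, strides):
    x = layers.Conv3D(filters, 3, strides=strides, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def deconv_block(x, skip, filters):
    x = layers.Conv3DTranspose(filters, 3, strides=2, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    return layers.Concatenate()([x, skip])    # shortcut from the encoding path

def build_registration_net(patch_shape=(128, 128, 64), base_filters=16):
    # Input: a PET/CT patch pair stacked on the channel axis (C = 2)
    inp = layers.Input(shape=(*patch_shape, 2))

    # Encoding path: 4 resolution levels, 3x3x3 convolutions
    e1 = conv_block(inp, base_filters, strides=1)
    e2 = conv_block(e1, base_filters * 2, strides=2)
    e3 = conv_block(e2, base_filters * 4, strides=2)
    e4 = conv_block(e3, base_filters * 8, strides=2)

    # Decoding path: 3x3x3 deconvolutions with stride 2 plus skip connections
    d3 = deconv_block(e4, e3, base_filters * 4)
    d2 = deconv_block(d3, e2, base_filters * 2)
    d1 = deconv_block(d2, e1, base_filters)

    # Final 3x3x3 convolution: 3 output channels = displacement in x, y, z
    dvf = layers.Conv3D(3, 3, padding="same", name="displacement_field")(d1)
    return tf.keras.Model(inp, dvf, name="pet_ct_registration_net")
```

Calling build_registration_net((128, 128, 64)) yields a model that maps an (N, 128, 128, 64, 2) PET/CT input to an (N, 128, 128, 64, 3) displacement vector field.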
Step 4: combining the image similarity constraint term and the excessive deformation suppression term to construct the cost function of the PET/CT registration network; the similarity constraint term is the normalized cross-correlation NCC, and the excessive deformation suppression term is the sum of penalty terms based on the differences between displacement vector field elements and elements drawn from a Gaussian distribution.
An excessive deformation suppression measure is defined based on the degree of deformation of the 3D deformation field, and the excessive deformation suppression term is introduced into the cost function to optimize the registration network.
In step 4, combining the image similarity constraint term and the excessive deformation suppression term, the cost function of the PET/CT registration network is

Loss(F, M, D_v) = - L_NCC(F, M∘D_v) + λ · L_reg(D_v)

wherein F and M are the CT image block (the fixed image block) and the PET image block (the floating image block) respectively, D_v is the displacement vector field matrix, M∘D_v denotes the PET image block warped by D_v, λ is the weight of the regularization term, and the Gaussian distribution used in the regularization term has mean μ and standard deviation θ;

the similarity constraint term is the normalized cross-correlation

L_NCC(S, T) = Σ_(s,t) [S(s,t) - E(S)][T(s,t) - E(T)] / sqrt( Σ_(s,t) [S(s,t) - E(S)]² · Σ_(s,t) [T(s,t) - E(T)]² )

wherein S is the subgraph, T is the template image, (s, t) is the coordinate index, S(s, t) and T(s, t) are the pixel values of the subgraph and the template image, and E(S) and E(T) are their average gray values;

the excessive deformation suppression term is

L_reg(D_v) = Σ_i f(i, j, θ)

wherein i is an element of the displacement vector field matrix D_v, j is a random number following the Gaussian distribution g(x) = exp(-(x - μ)² / (2θ²)) / (θ·sqrt(2π)), and f(i, j, θ) is a penalty term that amplifies large deviations, with f(i, j, θ) = |i - j|^k when |i - j| > θ.
In this embodiment, k is set to 2 empirically; when |i - j| > θ, the penalty term becomes |i - j|^k, i.e. the penalty is amplified by the k-th power.
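A hedged TensorFlow sketch of this cost function is given below. The negated NCC term, the per-element Gaussian sample j and the sub-threshold branch of the penalty (plain |i - j| when |i - j| ≤ θ) are assumptions consistent with, but not fully specified by, the description above; the function names are illustrative.

```python
import tensorflow as tf

def ncc(fixed, warped, eps=1e-8):
    """Normalized cross-correlation between the fixed (CT) and warped moving (PET) patch."""
    f = fixed - tf.reduce_mean(fixed)
    w = warped - tf.reduce_mean(warped)
    num = tf.reduce_sum(f * w)
    den = tf.sqrt(tf.reduce_sum(f * f) * tf.reduce_sum(w * w)) + eps
    return num / den

def excessive_deformation_penalty(dvf, mu=0.0, theta=1.0, k=2.0):
    """Sum of penalties f(i, j, theta) over all displacement components i, where j is a
    random sample from N(mu, theta); deviations beyond theta are raised to the power k."""
    j = tf.random.normal(tf.shape(dvf), mean=mu, stddev=theta)
    diff = tf.abs(dvf - j)
    penalty = tf.where(diff > theta, tf.pow(diff, k), diff)   # sub-threshold branch is assumed
    return tf.reduce_sum(penalty)

def registration_loss(ct_patch, warped_pet, dvf, lam=0.3, theta=1.0):
    # Minimizing -NCC maximizes similarity; the second term suppresses excessive deformation.
    return -ncc(ct_patch, warped_pet) + lam * excessive_deformation_penalty(dvf, theta=theta)
```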
Step 5: the neural network weight parameters are initialized with global variable initialization (global_variables_initializer); the batch size is N = 16, the regularization term weight is λ = 0.3, the maximum iteration count is COUNT = 1000, the network learning rate is 0.001, the optimizer is Adam, and a learning rate decay strategy is adopted.
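In a TensorFlow 2 setting, the step-5 configuration could be written as the snippet below; the embodiment itself uses the TF1-style global_variables_initializer, and the concrete exponential decay schedule shown here is an assumption, since the text only requires some learning rate decay strategy.

```python
import tensorflow as tf

BATCH_SIZE = 16      # N
LAMBDA = 0.3         # weight of the excessive deformation suppression (regularization) term
MAX_ITERS = 1000     # COUNT

# learning rate 0.001 with a decay strategy; Adam optimizer
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001, decay_steps=100, decay_rate=0.9)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```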
Step 6: the PET/CT image block training set is taken as the input of the PET/CT registration network, which outputs a displacement vector field; the displacement vector field and the PET image block are fed into the spatial transformer to obtain the registered PET image block; the similarity constraint term is computed from the CT image block and the registered PET image block, and the excessive deformation suppression term is computed from the displacement vector field; the neural network weight parameters are updated through the cost function by back propagation, and the PET/CT registration network is trained iteratively in this way until the maximum iteration count COUNT is reached, yielding the trained PET/CT registration network.
A pair of PET/CT image blocks of size 128 × 128 × 64 is taken as the input of the 3D U-Net, which outputs a displacement vector field of the same resolution (128 × 128 × 64 × 3, corresponding to the displacements in the x, y and z directions); the displacement vector field and the PET image block are then fed into the spatial transformer, which outputs the registered PET image block.
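The following sketch makes step 6 concrete by pairing a minimal trilinear spatial transformer with a single training iteration; it reuses build_registration_net, registration_loss and the optimizer from the earlier sketches. The mapping of the three displacement channels to the H, W and D axes and the border handling are illustrative choices rather than details taken from the patent.

```python
import tensorflow as tf

def trilinear_warp(vol, dvf):
    """Warp vol (B, H, W, D, 1) by the displacement field dvf (B, H, W, D, 3)
    using trilinear interpolation, so gradients flow back into the DVF."""
    shp = tf.shape(vol)
    hf = tf.cast(shp[1], tf.float32)
    wf = tf.cast(shp[2], tf.float32)
    df = tf.cast(shp[3], tf.float32)

    gh, gw, gd = tf.meshgrid(tf.range(hf), tf.range(wf), tf.range(df), indexing="ij")
    grid = tf.stack([gh, gw, gd], axis=-1)[None]          # identity sampling grid (1, H, W, D, 3)
    pos = grid + dvf                                      # voxel coordinates to sample from

    x, y, z = pos[..., 0], pos[..., 1], pos[..., 2]
    x0 = tf.clip_by_value(tf.floor(x), 0.0, hf - 2.0)
    y0 = tf.clip_by_value(tf.floor(y), 0.0, wf - 2.0)
    z0 = tf.clip_by_value(tf.floor(z), 0.0, df - 2.0)
    xd, yd, zd = x - x0, y - y0, z - z0

    def gather(xi, yi, zi):
        idx = tf.stack([tf.cast(xi, tf.int32), tf.cast(yi, tf.int32), tf.cast(zi, tf.int32)], axis=-1)
        return tf.gather_nd(vol, idx, batch_dims=1)[..., 0]

    out = tf.zeros_like(x)
    for xi, wx in ((x0, 1.0 - xd), (x0 + 1.0, xd)):
        for yi, wy in ((y0, 1.0 - yd), (y0 + 1.0, yd)):
            for zi, wz in ((z0, 1.0 - zd), (z0 + 1.0, zd)):
                out += wx * wy * wz * gather(xi, yi, zi)   # weighted sum over the 8 corners
    return out[..., None]

@tf.function
def train_step(model, optimizer, pet_patch, ct_patch, lam=0.3):
    """One iteration of step 6: predict the DVF, warp the PET patch, compute the loss, backpropagate."""
    with tf.GradientTape() as tape:
        pair = tf.concat([pet_patch, ct_patch], axis=-1)   # (N, H, W, D, 2) network input
        dvf = model(pair, training=True)                   # (N, H, W, D, 3) displacement field
        warped_pet = trilinear_warp(pet_patch, dvf)        # spatial transformer output
        loss = registration_loss(ct_patch, warped_pet, dvf, lam=lam)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```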
Step 7: the preprocessing of step 2 is performed on the PET/CT image pair to be registered, the resulting PET/CT image block pairs are input into the trained PET/CT registration network, the registered PET image blocks are generated, and the result is visualized.
In this embodiment, the PET and CT image registration method based on excessive deformation suppression runs under Windows 10 on an Intel processor and performs medical image registration with Python and the TensorFlow framework. The deep-learning-based image registration algorithm adopted by the invention converts image registration into a regression problem over the displacement vector field, i.e. predicting the spatial correspondence between pixels/voxels from a pair of images. Image registration is realized by simultaneously optimizing, through the 3D U-Net convolutional neural network, the similarity constraint term between the fixed and floating image pair and the excessive deformation suppression term on the displacement vector field. Quantitative and qualitative results show that the registration method achieves good results for 3D PET/CT image registration; given a new pair of PET/CT volume data, the trained model obtains the registration result with a single forward pass within 10 seconds.
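As a usage sketch of the inference described above (step 7), assuming the earlier sketches and a trained model: trained_model, pet_block and ct_block are placeholders for the trained registration network and one preprocessed 128 × 128 × 64 image block pair.

```python
import tensorflow as tf

# pet_block and ct_block: normalized (128, 128, 64) blocks from the preprocessing sketch
pet = tf.constant(pet_block[None, ..., None], dtype=tf.float32)    # (1, 128, 128, 64, 1)
ct = tf.constant(ct_block[None, ..., None], dtype=tf.float32)

dvf = trained_model(tf.concat([pet, ct], axis=-1), training=False) # one forward pass
registered_pet = trilinear_warp(pet, dvf)                          # registered PET image block
registered_pet = registered_pet.numpy()[0, ..., 0]                 # (128, 128, 64) array for visualization
```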
It should be understood that the above-described embodiments are only some embodiments of the present invention, not all of them. The above examples are provided only to explain the present invention and do not limit its scope of protection. All other embodiments obtained by those skilled in the art from the above-described embodiments without creative effort, and all modifications, equivalents and improvements made within the spirit and principle of the present application, fall within the scope of protection claimed by the present invention.

Claims (5)

1. A PET and CT image registration method based on excessive deformation inhibition is characterized by comprising the following steps:
step 1: acquiring two-dimensional PET sequence images and two-dimensional CT sequence images of m patients to obtain a PET/CT sequence image set;
Step 2: preprocessing the PET/CT sequence image set to obtain a PET/CT image block training set; the preprocessing comprises calculating SUV and Hu values, limiting the threshold range, adjusting the image resolution, generating image blocks and performing normalization;
Step 3: constructing a PET/CT registration network based on a 3D U-Net convolutional neural network;
Step 4: combining the image similarity constraint term and the excessive deformation suppression term to construct a cost function of the PET/CT registration network; the similarity constraint term is the normalized cross-correlation NCC, and the excessive deformation suppression term is the sum of penalty terms based on the differences between displacement vector field elements and elements drawn from a Gaussian distribution;
the cost function for constructing the PET/CT registration network by combining the image similarity constraint term and the excessive deformation inhibition term is as follows:
Loss(F, M, D_v) = - L_NCC(F, M∘D_v) + λ · L_reg(D_v)

wherein F and M are the CT image block and the PET image block respectively, D_v is the displacement vector field matrix, M∘D_v denotes the PET image block warped by D_v, λ is the weight of the regularization term, and the Gaussian distribution used in the regularization term has mean μ and standard deviation θ;

the similarity constraint term is the normalized cross-correlation

L_NCC(S, T) = Σ_(s,t) [S(s,t) - E(S)][T(s,t) - E(T)] / sqrt( Σ_(s,t) [S(s,t) - E(S)]² · Σ_(s,t) [T(s,t) - E(T)]² )

wherein S is the subgraph, T is the template image, (s, t) is the coordinate index, S(s, t) and T(s, t) are the pixel values of the subgraph and the template image, and E(S) and E(T) are their average gray values;

the excessive deformation suppression term is

L_reg(D_v) = Σ_i f(i, j, θ)

wherein i is an element of the displacement vector field matrix D_v, j is a random number following the Gaussian distribution g(x) = exp(-(x - μ)² / (2θ²)) / (θ·sqrt(2π)), and f(i, j, θ) is a penalty term that amplifies large deviations, with f(i, j, θ) = |i - j|^k when |i - j| > θ;
Step 5: initializing the neural network weight parameters, setting the batch size N, the regularization term weight λ, the maximum iteration count COUNT, the network learning rate and the optimizer, and adopting a learning rate decay strategy;
Step 6: taking the PET/CT image block training set as the input of the PET/CT registration network and outputting a displacement vector field; inputting the displacement vector field together with the PET image block into a spatial transformer to obtain a registered PET image block; computing the similarity constraint term from the CT image block and the registered PET image block and the excessive deformation suppression term from the displacement vector field; updating the neural network weight parameters through the cost function by back propagation; and iteratively training the PET/CT registration network in this way until the maximum iteration count COUNT is reached, obtaining the trained PET/CT registration network;
Step 7: performing the preprocessing of step 2 on the PET/CT image pair to be registered, inputting the resulting PET/CT image block pairs into the trained PET/CT registration network, generating the registered PET image blocks, and visualizing the result.
2. The method for PET and CT image registration based on excessive deformation suppression according to claim 1, wherein the step 2 comprises the steps of:
Step 2.1: calculating the SUV value of the two-dimensional PET sequence images as SUV = Pixel_PET × LBM × 1000 / injected dose, and the Hu value of the two-dimensional CT sequence images as Hu = Pixel_CT × slope + intercept; wherein Pixel_PET is the pixel value of the PET sequence image, LBM is the lean body mass, and injected dose is the tracer injection dose; Pixel_CT is the pixel value of the CT sequence image, slope is the slope, and intercept is the intercept;
Step 2.2: performing image contrast enhancement on the two-dimensional PET sequence images and the two-dimensional CT sequence images, adjusting the Hu window width and window level to [a1, b1] and limiting the SUV values to [a2, b2]; wherein a1, b1, a2 and b2 are all constants;
Step 2.3: adjusting the resolution of the two-dimensional CT sequence images from 512 × 512 to the size of the two-dimensional PET sequence images, H × W = 128 × 128;
Step 2.4: generating three-dimensional volume data [H, W, D_PET,i] and [H, W, D_CT,i] from the two-dimensional PET and CT sequence images of the i-th patient, transforming the three-dimensional volume data into five-dimensional volume data [N, H, W, D_i, C], cutting the five-dimensional volume data along the Z direction with d pixels as the sampling interval to generate multiple pairs of H × W × D image blocks, normalizing the image blocks to obtain an image block set, and randomly selecting pairs of PET and CT image blocks from the image block set to form the PET/CT image block training set; wherein i ∈ {1, 2, …, m}, D_PET,i is the number of slices of the PET sequence images of the i-th patient, D_CT,i is the number of slices of the CT sequence images of the i-th patient, and D_PET,i = D_CT,i = D_i; N is the batch size and C is the number of channels of the input neural network data, C = 2.
3. The PET and CT image registration method based on excessive deformation suppression according to claim 2, characterized in that in step 2.2, [a1, b1] = [-90, 300], [a2, b2] = [0, 5], d = 32, D = 64.
4. The PET and CT image registration method based on excessive deformation suppression as claimed in claim 2, wherein in step 2.4 the image blocks are normalized as x* = (x - μ) / σ, transforming the image block data to a distribution with mean 0 and standard deviation 1; wherein x and x* are the pixel values before and after normalization, and μ and σ are the mean and standard deviation of all pixels in the image block.
5. The PET and CT image registration method based on excessive deformation suppression according to any one of claims 2 to 4, wherein in step 3 the PET/CT registration network constructed based on the 3D U-Net convolutional neural network comprises an encoding path and a decoding path, each with 4 resolution levels; the encoding path has n1 layers, each comprising a convolution layer with a 3 × 3 × 3 kernel and a stride of 2, followed by a BN layer and a ReLU layer; the decoding path has n2 layers, each comprising a deconvolution layer with a 3 × 3 × 3 kernel and a stride of 2, followed by a BN layer and a ReLU layer; layers of the same resolution in the encoding path are passed to the decoding path through shortcut connections, providing the decoding path with the original high-resolution features; the last layer of the PET/CT registration network is a 3 × 3 × 3 convolution layer, and the number of final output channels is 3.
CN201910634301.2A 2019-07-15 2019-07-15 PET and CT image registration method based on excessive deformation inhibition Active CN110363797B (en)

Priority Applications (1)

Application Number: CN201910634301.2A (CN110363797B)
Priority Date: 2019-07-15
Filing Date: 2019-07-15
Title: PET and CT image registration method based on excessive deformation inhibition

Applications Claiming Priority (1)

Application Number: CN201910634301.2A (CN110363797B)
Priority Date: 2019-07-15
Filing Date: 2019-07-15
Title: PET and CT image registration method based on excessive deformation inhibition

Publications (2)

Publication Number Publication Date
CN110363797A CN110363797A (en) 2019-10-22
CN110363797B (en) 2023-02-14

Family

ID=68219107

Family Applications (1)

Application Number: CN201910634301.2A (CN110363797B, Active)
Title: PET and CT image registration method based on excessive deformation inhibition

Country Status (1)

Country Link
CN (1) CN110363797B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260705B (en) * 2020-01-13 2022-03-15 武汉大学 Prostate MR image multi-task registration method based on deep convolutional neural network
CN111524170B (en) * 2020-04-13 2023-05-26 中南大学 Pulmonary CT image registration method based on unsupervised deep learning
CN112598718B (en) * 2020-12-31 2022-07-12 北京深睿博联科技有限责任公司 Unsupervised multi-view multi-mode intelligent glasses image registration method and device
CN113706451B (en) * 2021-07-07 2024-07-12 杭州脉流科技有限公司 Method, apparatus, system and computer readable storage medium for intracranial aneurysm identification detection
CN114511602B (en) * 2022-02-15 2023-04-07 河南工业大学 Medical image registration method based on graph convolution Transformer
CN114820432B (en) * 2022-03-08 2023-04-11 安徽慧软科技有限公司 Radiotherapy effect evaluation method based on PET and CT elastic registration technology
CN115393527A (en) * 2022-09-14 2022-11-25 北京富益辰医疗科技有限公司 Anatomical navigation construction method and device based on multimode image and interactive equipment
CN116740218B (en) * 2023-08-11 2023-10-27 南京安科医疗科技有限公司 Heart CT imaging image quality optimization method, device and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507234A (en) * 2017-08-29 2017-12-22 北京大学 Cone beam computed tomography image and x-ray image method for registering
CN108171738A (en) * 2018-01-25 2018-06-15 北京雅森科技发展有限公司 Multimodal medical image registration method based on brain function template
CN109074659A (en) * 2016-05-04 2018-12-21 皇家飞利浦有限公司 Medical image resources registration
CN109272443A (en) * 2018-09-30 2019-01-25 东北大学 A kind of PET based on full convolutional neural networks and CT method for registering images
CN109685811A (en) * 2018-12-24 2019-04-26 北京大学第三医院 PET/CT hypermetabolism lymph node dividing method based on dual path U-net convolutional neural networks
CN109872332A (en) * 2019-01-31 2019-06-11 广州瑞多思医疗科技有限公司 A kind of 3 d medical images method for registering based on U-NET neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0912845D0 (en) * 2009-07-24 2009-08-26 Siemens Medical Solutions Initialisation of registration using an anatomical atlas

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109074659A (en) * 2016-05-04 2018-12-21 皇家飞利浦有限公司 Medical image resources registration
CN107507234A (en) * 2017-08-29 2017-12-22 北京大学 Cone beam computed tomography image and x-ray image method for registering
CN108171738A (en) * 2018-01-25 2018-06-15 北京雅森科技发展有限公司 Multimodal medical image registration method based on brain function template
CN109272443A (en) * 2018-09-30 2019-01-25 东北大学 A kind of PET based on full convolutional neural networks and CT method for registering images
CN109685811A (en) * 2018-12-24 2019-04-26 北京大学第三医院 PET/CT hypermetabolism lymph node dividing method based on dual path U-net convolutional neural networks
CN109872332A (en) * 2019-01-31 2019-06-11 广州瑞多思医疗科技有限公司 A kind of 3 d medical images method for registering based on U-NET neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Unsupervised learning for fast probabilistic diffeomorphic registration; Hessam Sokooti et al.; Medical Image Computing and Computer Assisted Intervention (MICCAI 2017); 2017-09-04; 232-239 *
Research on new algorithms for rigid registration of three-dimensional medical images; Chen Ming; China Excellent Doctoral and Master's Dissertations Full-text Database (Doctoral), Medicine & Health Sciences; 2003-12-15 (No. 04); E080-5 *

Also Published As

Publication number Publication date
CN110363797A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN110363797B (en) PET and CT image registration method based on excessive deformation inhibition
CN109272443B (en) PET and CT image registration method based on full convolution neural network
US11756160B2 (en) ML-based methods for pseudo-CT and HR MR image estimation
JP7039153B2 (en) Image enhancement using a hostile generation network
RU2709437C1 (en) Image processing method, an image processing device and a data medium
Lei et al. Learning‐based CBCT correction using alternating random forest based on auto‐context model
JP2021035502A (en) System and methods for image segmentation using convolutional neural network
CN113256753B (en) PET image region-of-interest enhancement reconstruction method based on multitask learning constraint
JP2022536107A (en) sCT Imaging Using CycleGAN with Deformable Layers
CN115605915A (en) Image reconstruction system and method
Song et al. Denoising of MR and CT images using cascaded multi-supervision convolutional neural networks with progressive training
CN109191564A (en) Exciting tomography fluorescence imaging three-dimensional rebuilding method based on deep learning
US11776128B2 (en) Automatic detection of lesions in medical images using 2D and 3D deep learning networks
CN114332287B (en) Method, device, equipment and medium for reconstructing PET (positron emission tomography) image based on transformer feature sharing
Khagi et al. 3D CNN based Alzheimer's diseases classification using segmented Grey matter extracted from whole-brain MRI
CN110270015B (en) sCT generation method based on multi-sequence MRI
Wang et al. IGNFusion: an unsupervised information gate network for multimodal medical image fusion
US20230177746A1 (en) Machine learning image reconstruction
Imran et al. Personalized CT organ dose estimation from scout images
CN113205567A (en) Method for synthesizing CT image by MRI image based on deep learning
Xue et al. PET Synthesis via Self-supervised Adaptive Residual Estimation Generative Adversarial Network
Mishra et al. Hybrid multiagent based adaptive genetic algorithm for limited view tomography using oppositional learning
Lei et al. Generative adversarial networks for medical image synthesis
Martinot et al. High-particle simulation of monte-carlo dose distribution with 3D convlstms
Saadi et al. Blind restoration of radiological images using hybrid swarm optimized model implemented on FPGA.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant