CN114648562A - Medical image registration method based on deep learning network

Medical image registration method based on deep learning network

Info

Publication number
CN114648562A
Authority
CN
China
Prior art keywords
image
voxel
displacement field
deep learning
network
Prior art date
Legal status
Pending
Application number
CN202210272128.8A
Other languages
Chinese (zh)
Inventor
周宏�
牟建波
杨维斌
谢婷婷
吴永忠
林博
Current Assignee
Chongqing University Cancer Hospital
Original Assignee
Chongqing University Cancer Hospital
Priority date
Filing date
Publication date
Application filed by Chongqing University Cancer Hospital
Priority to CN202210272128.8A
Publication of CN114648562A
Legal status: Pending

Classifications

    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06N 3/045 Combinations of networks
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a medical image registration method based on a deep learning network, which comprises the following steps: S1, acquiring a plurality of image pairs; S2, inputting the image pairs in the training set into a convolutional neural network with an encoder-decoder structure, predicting the displacement field from the floating image to the fixed image, and smoothing the mean of the displacement field with regularization on its spatial gradient to obtain a smoothed displacement field; S3, deforming the smoothed displacement field with a spatial transformation network to obtain a sampling grid, and then resampling the floating image with the sampling grid to obtain a registered image; S4, calculating a similarity loss function between the registered image and the fixed image, minimizing it to obtain the optimal registered image, and selecting the parameters with the minimum loss as the parameters θ; S5, inputting the image pairs in the test set and registering them with the deep-learning image registration network to finally obtain the registered images. The invention can register images quickly and obtain a good registration effect without requiring additional label data.

Description

Medical image registration method based on deep learning network
Technical Field
The invention relates to the field of image processing, in particular to a medical image registration method based on a deep learning network.
Background
Medical image registration aligns medical images acquired from a patient at different times, so that the growth of a lesion can be compared across scans, which in turn supports analysis of the treatment effect and improves diagnostic and therapeutic efficiency.
Before or during an operation, a doctor can perform analysis based on information from the imaging equipment, which helps to accurately locate a lesion, assist radiotherapy planning, and so on. It can be seen that medical image registration is a key auxiliary medical technology for image-guided radiotherapy, endoscopy, targeted biopsy and similar procedures. Research on medical image registration is therefore of great importance.
Disclosure of Invention
The invention aims to solve at least the above technical problems in the prior art, and in particular provides a medical image registration method based on a deep learning network.
In order to achieve the above object, the present invention provides a medical image registration method based on a deep learning network, comprising the following steps:
S1, acquiring a plurality of image pairs and randomly dividing them into a training set and a test set at a ratio of 7:3; each image pair comprises a floating image and a fixed image;
S2, inputting the image pairs in the training set into a convolutional neural network with an encoder-decoder structure, predicting the displacement field from the floating image to the fixed image, averaging the displacement field to obtain its mean, and smoothing the mean with regularization on the spatial gradient to obtain a smoothed displacement field;
S3, deforming the smoothed displacement field by using a spatial transformation network to obtain a sampling grid, and then resampling the floating image by using the sampling grid to obtain a registered image;
S4, calculating a similarity loss function between the registered image and the fixed image and minimizing it in the improved U-net convolutional neural network to obtain the optimal registered image, storing the corresponding parameters, namely the network weights, which vary during training, comparing the corresponding losses, and selecting the parameters with the minimum loss as the parameters θ;
S5, inputting the image pairs in the test set and registering them with the trained deep-learning image registration network to finally obtain the registered images.
Further, the S1 includes the following steps:
S1-1, collecting samples of the same modality, selecting an image from one time point as the floating image, and selecting an image from another time point as the fixed image;
In practice, several pictures are taken before and after a time interval; one image from before the interval is selected as the floating image and one image from after it as the fixed image, or, conversely, one image from after the interval as the floating image and one from before it as the fixed image.
Multiple image pairs can be obtained by selecting different images before and after different time intervals.
S1-2, normalizing the selected image to obtain a sub-image with a predetermined size, and forming a training image pair or a test image pair.
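As a non-limiting sketch of S1-2, the following NumPy helper normalizes two same-modality scans and crops/zero-pads them to a predetermined sub-volume size; the min-max normalization, the default size and the crop/pad strategy are assumptions, since the patent only specifies "a sub-image with a predetermined size":

```python
import numpy as np

def make_pair(img_early, img_late, size=(160, 192, 160)):
    """Hypothetical S1-2 helper: normalize two same-modality scans and
    crop/zero-pad each to a predetermined sub-volume size, returning a
    (floating, fixed) image pair."""
    def prep(img):
        img = img.astype(np.float32)
        img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # min-max to [0, 1]
        out = np.zeros(size, dtype=np.float32)                    # zero-padded canvas
        s = tuple(min(a, b) for a, b in zip(img.shape, size))     # crop extents
        out[:s[0], :s[1], :s[2]] = img[:s[0], :s[1], :s[2]]
        return out
    return prep(img_early), prep(img_late)  # floating image, fixed image
```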
Further, the convolutional neural network is a modified U-net convolutional neural network:
the method comprises an encoding stage and a decoding stage, wherein the encoding stage is divided into four layers, each layer extracts features by rolling 1 convolution kernel with the size of 3 multiplied by 3 by step length 1, each layer downsamples a feature map by sliding the convolution kernel with the size of 3 multiplied by 3 by step length 2, the size of the feature map after each layer downsampling is half, and the number of channels is doubled;
the decoding stage is also divided into four layers, each layer extracts features by rolling 2 convolution kernels with the size of 3 × 3 × 3 at step size 1, each layer up samples the feature map by deconvolving 2 × 2 convolution kernels with step size 2, and connects the feature maps with the same size in the encoding stage and the decoding stage together by jump connection.
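A minimal PyTorch sketch of this four-layer encoder-decoder follows; the starting channel width of 16, the 2-channel input (fixed and floating volumes concatenated) and the 3-channel displacement-field head are assumptions where the patent is silent, and a ReLU follows each convolution as stated for Fig. 2 below:

```python
import torch
import torch.nn as nn

class RegUNet3D(nn.Module):
    """Sketch of the modified 3D U-net: four encoder layers (one 3x3x3
    conv, stride 1, then a stride-2 3x3x3 conv that halves the size and
    doubles the channels) and four decoder layers (a stride-2 2x2x2
    transposed conv, a skip connection, then two 3x3x3 convs)."""
    def __init__(self, in_ch=2, base=16):
        super().__init__()
        chs = [base * 2 ** i for i in range(4)]        # 16, 32, 64, 128
        self.enc_feat, self.enc_down = nn.ModuleList(), nn.ModuleList()
        c = in_ch
        for out_c in chs:
            self.enc_feat.append(nn.Sequential(        # feature extraction
                nn.Conv3d(c, out_c, 3, stride=1, padding=1), nn.ReLU(inplace=True)))
            self.enc_down.append(nn.Sequential(        # downsample, double channels
                nn.Conv3d(out_c, out_c * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True)))
            c = out_c * 2
        self.dec_up, self.dec_feat = nn.ModuleList(), nn.ModuleList()
        for out_c in reversed(chs):
            self.dec_up.append(nn.ConvTranspose3d(c, out_c, 2, stride=2))  # upsample
            self.dec_feat.append(nn.Sequential(        # two convs after the skip
                nn.Conv3d(out_c * 2, out_c, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(out_c, out_c, 3, padding=1), nn.ReLU(inplace=True)))
            c = out_c
        self.head = nn.Conv3d(c, 3, 3, padding=1)      # 3-channel displacement field

    def forward(self, x):
        skips = []
        for feat, down in zip(self.enc_feat, self.enc_down):
            x = feat(x)
            skips.append(x)                            # kept for skip connections
            x = down(x)
        for up, feat, skip in zip(self.dec_up, self.dec_feat, reversed(skips)):
            x = feat(torch.cat([up(x), skip], dim=1))  # skip connection by size
        return self.head(x)
```

Input volumes whose sides are divisible by 16 keep the encoder and decoder feature maps size-aligned for the skip connections.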
Further, the calculation formula of the displacement field is as follows:
u = h_θ(f, m)    (1)
wherein u represents the displacement field;
h_θ(·) is the mapping function;
θ is the parameter of the mapping function;
f represents the fixed image;
m represents the floating image.
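In code terms, Eq. (1) is a single forward pass; a minimal sketch assuming the RegUNet3D above plays the role of h_θ and that the fixed image occupies the first input channel:

```python
# f, m: (B, 1, D, H, W) fixed / floating volumes; model = RegUNet3D()
u = model(torch.cat([f, m], dim=1))  # u: (B, 3, D, H, W) displacement field
```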
Further, deforming the smoothed displacement field with the spatial transformation network to obtain the sampling grid includes:
For each voxel v, the spatial transformation network computes the displacement of the floating image relative to the fixed image and a linear interpolation over the neighboring voxels of that voxel (at least eight, i.e. the trilinear-interpolation neighborhood), yielding a new voxel position v′:
v′ = v + u(v)    (2)
where u(v) represents the displacement of a voxel in the floating image m to the position of the corresponding voxel in the fixed image f;
then, obtaining a sampling grid according to the following formula:
m∘u(v) = Σ_{q∈Z(v′)} m(q) · Π_{d∈{x,y,z}} (1 − |v′_d − q_d|)    (3)
wherein m∘u(v) represents the sampling grid formed by voxel linear interpolation;
Z(v′) is the neighborhood set of the voxel v′; the neighborhood is the set of voxel positions around the point;
q is an element of the set Z(v′), representing a neighboring voxel position;
d is the corresponding dimension;
x, y and z respectively represent the positions of the voxel in the three dimensions;
m(q) represents the value of the floating image m at the neighboring voxel position q;
v′_d and q_d represent the components in the corresponding dimension;
v′_d represents the position of the new voxel v′ in dimension d;
q_d represents the position of the voxel q in dimension d;
|·| represents the absolute value.
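A hedged PyTorch sketch of Eqs. (2) and (3): F.grid_sample with mode="bilinear" performs precisely the trilinear interpolation of Eq. (3) over the eight neighbors of v′ = v + u(v), but expects the sampling grid normalized to [-1, 1], so the voxel-space coordinates are rescaled first; the (x, y, z) channel ordering of u is an assumption:

```python
import torch
import torch.nn.functional as F

def spatial_transform(moving, u):
    """Warp the floating image `moving` (B, 1, D, H, W) with the
    displacement field `u` (B, 3, D, H, W), given in voxel units."""
    B, _, D, H, W = moving.shape
    # identity grid of voxel coordinates v, stacked in (x, y, z) order
    zs, ys, xs = torch.meshgrid(
        torch.arange(D, dtype=moving.dtype, device=moving.device),
        torch.arange(H, dtype=moving.dtype, device=moving.device),
        torch.arange(W, dtype=moving.dtype, device=moving.device),
        indexing="ij")
    grid = torch.stack([xs, ys, zs], dim=-1)              # (D, H, W, 3)
    vp = grid.unsqueeze(0) + u.permute(0, 2, 3, 4, 1)     # Eq. (2): v' = v + u(v)
    size = torch.tensor([W, H, D], dtype=moving.dtype, device=moving.device)
    vp = 2.0 * vp / (size - 1) - 1.0                      # normalize to [-1, 1]
    # Eq. (3): trilinear interpolation over the neighbors of each v'
    return F.grid_sample(moving, vp, mode="bilinear", align_corners=True)
```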
Further, the similarity loss function includes:
MSE(f, m∘u) = (1/|Ω|) · Σ_{v∈Ω} [f(v) − (m∘u)(v)]²    (4)
wherein MSE(f, m∘u) represents the mean square error between f and m∘u;
f is the fixed image;
Ω represents the set of voxels of the whole image;
|Ω| represents the number of elements in the set;
f(v) represents the value of the fixed image at the anatomical location corresponding to voxel v;
(m∘u)(v) represents the value of the resampled image at the corresponding anatomical location;
m∘u represents the image after resampling with the sampling grid;
v represents a voxel.
Minimizing the loss function MSE brings the registered image closer to the fixed image, so that a registered image with higher accuracy is obtained.
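Rendered directly in code, Eq. (4) is the voxelwise mean of squared differences, assuming the fixed and resampled volumes are tensors of identical shape:

```python
import torch

def mse_loss(fixed, warped):
    """Eq. (4): (1/|Ω|) Σ_v [f(v) − (m∘u)(v)]² over all voxels v in Ω."""
    return torch.mean((fixed - warped) ** 2)
```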
Further, the smoothing of the displacement field using regularization on spatial gradients includes:
The spatial gradient is approximated by the difference between neighboring voxels, and the calculation formula is as follows:
L_smooth(u) = Σ_{v∈Ω} ‖∇u(v)‖²    (5)
wherein L_smooth(u) denotes the regularization of the displacement field u;
v represents a voxel;
Ω represents the set of voxels of the whole image;
‖·‖ represents a norm;
∇u(v) = (∂u(v)/∂x, ∂u(v)/∂y, ∂u(v)/∂z);
∂u(v)/∂x represents the correction, i.e. smoothing, of the x component of the transformation at any voxel position, approximated by the difference u(v_x + 1, v_y, v_z) − u(v_x, v_y, v_z);
∂u(v)/∂y represents the correction of the y component of the transformation at any voxel position, approximated analogously along y;
∂u(v)/∂z represents the correction of the z component of the transformation at any voxel position, approximated analogously along z;
x, y, z represent the position of the voxel v in the three dimensions.
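A sketch of Eq. (5) under the stated neighbor-difference approximation, assuming u is a (B, 3, D, H, W) tensor; the mean over voxels is used here as a size-independent stand-in for the sum:

```python
import torch

def smoothness_loss(u):
    """Eq. (5): penalize ‖∇u(v)‖², with each partial derivative
    approximated by the difference between neighboring voxels."""
    dz = u[:, :, 1:, :, :] - u[:, :, :-1, :, :]   # ∂u/∂z ≈ neighbor difference
    dy = u[:, :, :, 1:, :] - u[:, :, :, :-1, :]   # ∂u/∂y
    dx = u[:, :, :, :, 1:] - u[:, :, :, :, :-1]   # ∂u/∂x
    return (dx ** 2).mean() + (dy ** 2).mean() + (dz ** 2).mean()
```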
In conclusion, owing to the adoption of the above technical scheme, the invention can register images quickly and obtain a good registration effect without requiring additional label data.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow diagram of the present invention.
Fig. 2 is a diagram of the improved U-net convolutional neural network architecture of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
The invention provides a medical image registration method based on a deep learning network, which comprises the following steps as shown in figure 1:
S1, acquiring a plurality of image pairs and randomly dividing them into a training set and a test set;
S2, inputting the image pairs in the training set into a convolutional neural network with an encoder-decoder structure, predicting the displacement field from the floating image to the fixed image, averaging the displacement field to obtain its mean, and smoothing the mean with regularization on the spatial gradient to obtain a smoothed displacement field;
S3, deforming the smoothed displacement field with a spatial transformation network to obtain a sampling grid, and then resampling the floating image with the sampling grid to obtain the registered image;
S4, calculating a similarity loss function between the registered image and the fixed image, minimizing it to obtain the optimal registered image, storing the corresponding parameters, comparing the corresponding losses, and selecting the parameters with the minimum loss as the parameters θ;
S5, inputting the image pairs in the test set and registering them with the trained deep-learning image registration network to finally obtain the registered images.
The medical image registration of the invention can also be applied to other medical images, such as magnetic resonance images and ultrasound images.
The improved U-net convolutional neural network structure is shown in Fig. 2 and comprises an encoding stage and a decoding stage: the encoding stage extracts effective features from the input image pair, and the decoding stage upsamples the image.
The encoding stage is divided into four layers: each layer extracts features by convolving one 3 × 3 × 3 kernel with stride 1, and each layer downsamples the feature map by sliding a kernel of the same size with stride 2, so the feature map after each downsampling is half the size and the number of channels is doubled;
the decoding stage is likewise divided into four layers: each layer extracts features by convolving two 3 × 3 × 3 kernels with stride 1, each layer upsamples the feature map by deconvolving with a 2 × 2 × 2 kernel and stride 2, skip connections join the feature maps of equal size in the encoding and decoding stages, and a ReLU activation function follows each convolutional layer.
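Putting the pieces together, a minimal training-loop sketch for steps S2 to S4 built from the component sketches above (RegUNet3D, spatial_transform, mse_loss, smoothness_loss); the batch iterable train_pairs, the Adam optimizer, the learning rate and the regularization weight lam are all assumptions not fixed by the patent:

```python
import torch

model = RegUNet3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)   # assumed optimizer
lam = 0.01                                            # assumed smoothness weight
best = float("inf")
for moving, fixed in train_pairs:                     # assumed (B,1,D,H,W) batches
    u = model(torch.cat([fixed, moving], dim=1))      # S2: predict displacement field
    warped = spatial_transform(moving, u)             # S3: sampling grid + resampling
    loss = mse_loss(fixed, warped) + lam * smoothness_loss(u)  # S4: similarity + smoothing
    opt.zero_grad()
    loss.backward()
    opt.step()
    if loss.item() < best:                            # keep the weights θ giving
        best = loss.item()                            # the minimum loss
        torch.save(model.state_dict(), "theta.pt")
```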
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (7)

1. A medical image registration method based on a deep learning network is characterized by comprising the following steps:
s1, acquiring a plurality of image pairs, and randomly dividing the image pairs into a training set and a testing set; the pair of images comprising a floating image and a fixed image;
S2, inputting the image pairs in the training set into a convolutional neural network with an encoder-decoder structure, predicting the displacement field from the floating image to the fixed image, averaging the displacement field to obtain its mean, and then smoothing the mean with regularization on the spatial gradient to obtain a smoothed displacement field;
S3, deforming the smoothed displacement field with a spatial transformation network to obtain a sampling grid, and then resampling the floating image with the sampling grid to obtain the registered image;
S4, calculating a similarity loss function between the registered image and the fixed image, minimizing the similarity loss function to obtain the optimal registered image, storing the corresponding parameters, comparing the corresponding losses, and selecting the parameters with the minimum loss as the parameters θ;
S5, inputting the image pairs in the test set, and registering them with the deep-learning image registration network to finally obtain the registered images.
2. The deep learning network-based medical image registration method according to claim 1, wherein the S1 comprises the steps of:
S1-1, collecting samples of the same modality, selecting an image from one time point as the floating image, and an image from another time point as the fixed image;
S1-2, normalizing the selected images to obtain sub-images of a predetermined size, forming training image pairs or test image pairs.
3. The deep learning network-based medical image registration method according to claim 1, wherein the convolutional neural network is a modified U-net convolutional neural network:
the method comprises an encoding stage and a decoding stage, wherein the encoding stage is divided into four layers, each layer extracts features by rolling 1 convolution kernel with the size of 3 multiplied by 3 by step length 1, each layer downsamples a feature map by sliding the convolution kernel with the size of 3 multiplied by 3 by step length 2, the size of the feature map after each layer downsampling is half, and the number of channels is doubled;
the decoding stage is also divided into four layers, each layer is scrolled by step 1 to extract features through 2 convolution kernels with the size of 3 × 3 × 3, each layer is deconvolved by step 2 with the convolution kernel of 2 × 2 × 2 to sample the feature map, and the feature maps with the same size in the encoding stage and the decoding stage are connected together by jump connection.
4. The deep learning network-based medical image registration method according to claim 3, wherein the calculation formula of the displacement field is as follows:
u = h_θ(f, m)
wherein u represents the displacement field;
h_θ(·) is the mapping function;
θ is the parameter of the mapping function;
f represents the fixed image;
m represents the floating image.
5. The deep learning network-based medical image registration method according to claim 1, wherein deforming the smoothed displacement field with the spatial transformation network to obtain the sampling grid comprises:
for each voxel v, calculating, with the spatial transformation network, the displacement of the floating image relative to the fixed image and a linear interpolation over the neighboring voxels of that voxel (at least eight neighbors), to obtain a new voxel position v′:
v′ = v + u(v)
where u(v) represents the displacement of a voxel in the floating image m to the position of the corresponding voxel in the fixed image f;
then, obtaining a sampling grid according to the following formula:
m∘u(v) = Σ_{q∈Z(v′)} m(q) · Π_{d∈{x,y,z}} (1 − |v′_d − q_d|)
wherein m∘u(v) represents the sampling grid formed after voxel linear interpolation;
Z(v′) is the neighborhood set of the voxel v′;
q is an element of the set Z(v′), representing a neighboring voxel position;
d is the corresponding dimension;
x, y and z respectively represent the positions of the voxel in the three dimensions;
m(q) represents the value of the floating image m at the neighboring voxel position q;
v′_d and q_d represent the components in the corresponding dimension;
v′_d represents the position of the new voxel v′ in dimension d;
q_d represents the position of the voxel q in dimension d;
|·| represents the absolute value.
6. The deep learning network-based medical image registration method according to claim 1, wherein the similarity loss function comprises:
MSE(f, m∘u) = (1/|Ω|) · Σ_{v∈Ω} [f(v) − (m∘u)(v)]²
wherein MSE(f, m∘u) represents the mean square error between f and m∘u;
f is the fixed image;
Ω represents the set of voxels of the whole image;
|Ω| represents the number of elements in the set;
f(v) represents the value of the fixed image at the anatomical location corresponding to voxel v;
(m∘u)(v) represents the value of the resampled image at the corresponding anatomical location;
m∘u represents the image after resampling with the sampling grid;
v denotes a voxel.
7. The deep learning network-based medical image registration method according to claim 1, wherein the smoothing of the displacement field using regularization on spatial gradients comprises:
the spatial gradient is approximately equal to the difference between neighboring voxels, and the calculation formula is as follows:
L_smooth(u) = Σ_{v∈Ω} ‖∇u(v)‖²
wherein L_smooth(u) denotes regularizing the displacement field u;
v represents a voxel;
Ω represents the set of voxels of the whole image;
‖·‖ represents a norm;
∇u(v) = (∂u(v)/∂x, ∂u(v)/∂y, ∂u(v)/∂z);
∂u(v)/∂x represents the correction, i.e. smoothing, of the x dimension of the transformation at any voxel position;
∂u(v)/∂y represents the correction of the y dimension of the transformation at any voxel position;
∂u(v)/∂z represents the correction of the z dimension of the transformation at any voxel position;
x, y, z represent the position of the voxel v in the three dimensions.
CN202210272128.8A 2022-03-18 2022-03-18 Medical image registration method based on deep learning network Pending CN114648562A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210272128.8A CN114648562A (en) 2022-03-18 2022-03-18 Medical image registration method based on deep learning network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210272128.8A CN114648562A (en) 2022-03-18 2022-03-18 Medical image registration method based on deep learning network

Publications (1)

Publication Number Publication Date
CN114648562A (en) 2022-06-21

Family

ID=81995554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210272128.8A Pending CN114648562A (en) 2022-03-18 2022-03-18 Medical image registration method based on deep learning network

Country Status (1)

Country Link
CN (1) CN114648562A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116664635A (en) * 2023-07-31 2023-08-29 柏意慧心(杭州)网络科技有限公司 Method, computing device and medium for constructing multi-dimensional dynamic model of target object
CN116664635B (en) * 2023-07-31 2023-10-24 柏意慧心(杭州)网络科技有限公司 Method, computing device and medium for constructing multi-dimensional dynamic model of target object
CN116958217A (en) * 2023-08-02 2023-10-27 德智鸿(上海)机器人有限责任公司 MRI and CT multi-mode 3D automatic registration method and device
CN116958217B (en) * 2023-08-02 2024-03-29 德智鸿(上海)机器人有限责任公司 MRI and CT multi-mode 3D automatic registration method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination