CN112132878A - End-to-end brain nuclear magnetic resonance image registration method based on convolutional neural network

End-to-end brain nuclear magnetic resonance image registration method based on convolutional neural network

Info

Publication number
CN112132878A
CN112132878A (application number CN202011207170.9A)
Authority
CN
China
Prior art keywords
image
displacement vector
neural network
vector field
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011207170.9A
Other languages
Chinese (zh)
Other versions
CN112132878B (en)
Inventor
唐堃
王丽会
李智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou University
Original Assignee
Guizhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou University filed Critical Guizhou University
Priority to CN202011207170.9A priority Critical patent/CN112132878B/en
Publication of CN112132878A publication Critical patent/CN112132878A/en
Application granted granted Critical
Publication of CN112132878B publication Critical patent/CN112132878B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/37 - Determination of transform parameters for the alignment of images, i.e. image registration using transform domain methods
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10088 - Magnetic resonance imaging [MRI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10104 - Positron emission tomography [PET]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30096 - Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Processing (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an end-to-end brain nuclear magnetic resonance image registration method based on a convolutional neural network, which solves the technical problem that existing deep learning methods need an additional tool to pre-align the data. The method comprises the following steps. Step one: remove the skull from the image to be registered and the target image and normalize their gray values to [0, 1]. Step two: the affine transformation convolutional neural network model takes the image to be registered and the target image as input and predicts the affine transformation parameters. Step three: geometrically transform the image to be registered according to the affine transformation parameters to obtain a pre-aligned image, and compute the corresponding displacement vector field. Step four: input the pre-aligned data and the target image into the nonlinear transformation convolutional neural network model and predict the displacement vector field required by the nonlinear transformation. Step five: fuse the two displacement vector fields. Step six: geometrically transform the image to be registered with the fused displacement vector field to obtain the resulting registered image.

Description

End-to-end brain nuclear magnetic resonance image registration method based on convolutional neural network
Technical Field
The invention relates to an end-to-end brain nuclear magnetic resonance image registration method based on a convolutional neural network, and belongs to the field of medical image processing.
Background
With the continuous innovation of computer technology and imaging devices, medical imaging is steadily developing towards higher resolution, higher precision and higher dimensionality. Medical images of different modalities have their own advantages and disadvantages and reflect the physiological information of different tissues; for example, magnetic resonance imaging (MRI) is well suited to depicting soft tissue, while positron emission tomography (PET) reflects tissue metabolism and is therefore suitable for tumor detection. Fusing images of different modalities provides doctors with richer and more comprehensive information and thereby improves diagnostic accuracy. Registration of medical images is an important prerequisite for their correct fusion.
Medical image registration uses an optimization strategy to find, in a space of geometric transformations, the optimal transformation that maximizes the similarity between two or more medical images, so that corresponding anatomical structures in the geometrically transformed images are located at the same position in a common coordinate system. In general, medical image registration can be expressed as the following optimization problem:
$$\hat{T} = \arg\max_{T} \, \mathrm{Similarity}\big(I_f,\; T(I_m)\big)$$

where $T$ and $\hat{T}$ denote the geometric transformation parameters (e.g. affine transformation parameters) and their optimal value, $\mathrm{Similarity}(\cdot,\cdot)$ is a similarity measure function, $I_f$ and $I_m$ respectively denote the target image (reference image) and the image to be registered (floating image), and $T(I_m)$ denotes the registration result obtained by geometrically transforming the image to be registered with the geometric transformation parameters.
Medical image registration methods can be divided into conventional methods and deep learning methods. The main idea of the conventional methods is to use the similarity between the gray values of the two images directly, search for the point of maximum similarity with an optimization method, and thereby determine the optimal geometric transformation parameters between the reference image and the image to be registered. Registration technology based on conventional methods is mature and achieves high accuracy. However, it has a key drawback: for every image to be registered, the optimal transformation must be searched for in the deformation space with a specific optimization algorithm until the similarity measure converges. This optimization process is time-consuming, easily falls into local extrema, and cannot meet the real-time and accuracy requirements of medical image registration. In addition, conventional methods have no learning ability, so the optimization must be repeated for every pair of images; how an algorithm can learn what is common across images therefore becomes critical. Registration methods based on deep learning exploit the powerful learning ability of convolutional neural networks to extract high-order abstract features from massive amounts of image data, and a trained deep learning model can complete the registration of an image pair in a very short time. At present, deep-learning-based registration research focuses mainly on nonlinear registration, and end-to-end approaches, i.e. convolutional neural network models that cover both affine and nonlinear transformation, are still rarely reported.
Disclosure of Invention
Aiming at the shortcomings of conventional registration methods and existing deep-learning-based registration methods, the invention provides a technical scheme for end-to-end brain nuclear magnetic resonance image registration based on a convolutional neural network. Without preprocessing the data with existing tools, the method constructs convolutional neural network models that complete affine and nonlinear registration directly from the raw image data, i.e. end-to-end image registration, and achieves higher registration accuracy.
The invention relates to an end-to-end brain nuclear magnetic resonance image registration method based on a convolutional neural network, which adopts the following technical scheme. Step one: select the image to be registered and the target image, strip the skull and normalize both images, and then stack the two images into a block with two channels. Step two: the affine registration convolutional neural network extracts features from the block and outputs the geometric transformation parameters required by the affine transformation, i.e. the affine transformation parameters; the image to be registered is geometrically transformed according to the affine transformation parameters to obtain a pre-aligned image, and a unit grid is geometrically deformed with the affine transformation parameters to compute the corresponding displacement vector field 1. Step three: the nonlinear registration convolutional neural network extracts features from the pre-aligned image and the target image and predicts the displacement vector field 2 required by the nonlinear (non-rigid) transformation. Step four: the displacement vector field 1 is geometrically transformed by the displacement vector field 2, the result is added to the displacement vector field 2 to obtain the fused final displacement vector field, and the image to be registered is geometrically transformed according to the final displacement vector field to obtain the final registered image.
In the first step, the original image size is (A, B, C), where A and B are the height and width of each slice and C is the number of slices. Skull stripping and normalization are beneficial for model training; a minimal normalization sketch is given below.
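As an illustration of the normalization in this step, here is a minimal sketch; it assumes min-max normalization of a NumPy volume (the patent only states that gray values are constrained to [0, 1]) and leaves skull stripping to an external tool:

```python
import numpy as np

def normalize_volume(volume: np.ndarray) -> np.ndarray:
    """Min-max normalize a skull-stripped MR volume of shape (A, B, C) to [0, 1].

    The exact normalization scheme is an assumption; the patent only requires
    the gray values to lie in [0, 1] after preprocessing.
    """
    v = volume.astype(np.float32)
    v_min, v_max = float(v.min()), float(v.max())
    if v_max == v_min:
        return np.zeros_like(v)
    return (v - v_min) / (v_max - v_min)
```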
In the second step, the two images are first stacked into a block of size (1, A, B, C, 2). The convolutional neural network constructed for the affine transformation successively downsamples the stacked block with several strided convolutions, which enables the network to extract useful features at different resolutions. Because the affine transformation parameters fall into two groups with different value ranges, two fully connected layers are finally used to predict them separately: 1. the rotation, scaling and shear parameters; 2. the translation parameters. The obtained parameters are used to pre-align the image to be registered and to compute the corresponding displacement vector field. A sketch of such a network is given below.
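The following PyTorch sketch illustrates such an affine network. The channel counts, the number of strided stages, the adaptive pooling, the near-identity initialization and the channel-first layout (PyTorch convention, whereas the patent stacks to (1, A, B, C, 2)) are assumptions for illustration; the patent only specifies repeated strided-convolution downsampling followed by two fully connected heads, one for rotation/scaling/shear and one for translation.

```python
import torch
import torch.nn as nn

class AffineNet(nn.Module):
    """Sketch of the affine registration CNN: the stacked (moving, target) volume
    is repeatedly downsampled by strided 3D convolutions, then two fully connected
    heads predict (1) the 3x3 rotation/scaling/shear matrix and (2) the 3 translation
    parameters, i.e. 12 affine parameters in total."""

    def __init__(self, channels=(16, 32, 64, 128)):
        super().__init__()
        layers, in_ch = [], 2  # 2 input channels: moving and target volumes stacked
        for out_ch in channels:
            layers += [nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
                       nn.LeakyReLU(0.2)]
            in_ch = out_ch
        self.encoder = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool3d(1)        # collapse the spatial dimensions
        self.fc_matrix = nn.Linear(in_ch, 9)       # rotation / scaling / shear
        self.fc_translation = nn.Linear(in_ch, 3)  # translation

    def forward(self, x):
        # x: (N, 2, A, B, C), channel-first stacking of moving and target volumes
        feat = self.pool(self.encoder(x)).flatten(1)
        matrix = self.fc_matrix(feat).view(-1, 3, 3)
        matrix = matrix + torch.eye(3, device=x.device)  # start close to the identity
        translation = self.fc_translation(feat)
        return matrix, translation
```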
In the third step, the nonlinear transformation model consists of two parts: an encoder and a decoder. The encoder learns the correlation between the two images at different resolutions and extracts useful features; the decoder upsamples the features extracted by the encoder back to the size of the original image to obtain the displacement vector field. A sketch of such an encoder-decoder network is given below.
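The following PyTorch sketch illustrates such an encoder-decoder. The channel counts, the absence of skip connections and the final resizing to the input resolution are assumptions; the patent only specifies an encoder that downsamples the stacked volumes and a decoder that upsamples the features back to the original image size to produce a 3-channel displacement vector field.

```python
import torch.nn as nn
import torch.nn.functional as F

class DeformNet(nn.Module):
    """Sketch of the nonlinear registration CNN: the encoder learns the correlation
    between the pre-aligned and target volumes at several resolutions, and the
    decoder upsamples the features back to the input size and predicts a
    displacement vector field with 3 channels (one per spatial axis)."""

    def __init__(self):
        super().__init__()
        def down(i, o):
            return nn.Sequential(nn.Conv3d(i, o, 3, stride=2, padding=1),
                                 nn.LeakyReLU(0.2))
        def up(i, o):
            return nn.Sequential(nn.Upsample(scale_factor=2, mode='trilinear',
                                             align_corners=False),
                                 nn.Conv3d(i, o, 3, padding=1), nn.LeakyReLU(0.2))
        self.encoder = nn.Sequential(down(2, 16), down(16, 32), down(32, 64))
        self.decoder = nn.Sequential(up(64, 32), up(32, 16), up(16, 16))
        self.flow = nn.Conv3d(16, 3, kernel_size=3, padding=1)  # 3-channel field

    def forward(self, x):
        # x: (N, 2, A, B, C), pre-aligned and target volumes stacked channel-wise
        feat = self.decoder(self.encoder(x))
        # Resize to the exact input resolution (sizes such as 182 or 218 are not
        # divisible by 8 and drift during down/upsampling) -- an assumed detail.
        feat = F.interpolate(feat, size=x.shape[2:], mode='trilinear',
                             align_corners=False)
        return self.flow(feat)
```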
In the fourth step, since the whole registration model contains two modules, the knowledge learned by the two modules must be fused effectively: the two displacement vector fields are fused by geometrically deforming displacement vector field 1 with displacement vector field 2 and adding the result to displacement vector field 2, and the image to be registered is then geometrically transformed according to the fused displacement vector field to obtain the final registered image. This fusion rule can be written compactly as shown below.
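For reference, using the warp notation introduced in step (7) of Example 1 below, where $\mathrm{warp}(I, u)(x) = I(x + u(x))$ denotes warping an image (or a field) $I$ with a displacement vector field $u$, the fusion of the affine displacement field $u_1$ and the nonlinear displacement field $u_2$ amounts to

$$\mathrm{warp}(\mathrm{warp}(I, u_1), u_2)(x) = I\big(x + u_2(x) + u_1(x + u_2(x))\big) = \mathrm{warp}(I, u)(x), \qquad u = u_2 + \mathrm{warp}(u_1, u_2).$$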
The invention abandons the step, used in existing deep learning registration techniques, of pre-aligning the data with an additional tool; instead, it constructs a convolutional neural network model that realizes this function and integrates it with the nonlinear transformation model, so that the two models can fuse their learned knowledge more effectively and achieve high-precision end-to-end image registration.
Compared with the prior art, the invention has the following advantages:
1. The invention uses a convolutional neural network model to learn the pre-alignment of the data, replacing existing pre-alignment software, and combines it with the nonlinear registration convolutional neural network model to realize end-to-end linear and nonlinear image registration.
Drawings
FIG. 1 is a schematic diagram of a model training process of the present invention;
FIG. 2 is a diagram of an affine transformation convolutional neural network model in the present invention;
FIG. 3 is a diagram of a model of a non-linear transform convolutional neural network in the present invention.
Detailed Description
Aiming at the high computational complexity of conventional image registration and the need of existing deep learning methods to pre-align the data with an additional tool, the invention investigates these problems and proposes an end-to-end brain nuclear magnetic resonance image registration method based on a convolutional neural network.
Example 1: As shown in FIG. 1, FIG. 2 and FIG. 3, the invention discloses an end-to-end brain nuclear magnetic resonance image registration method based on a convolutional neural network; training of the model comprises the following steps:
(1) First, skull stripping is performed on all images in the training data set to remove the influence of the skull on model training, and all images are then normalized so that their gray values are constrained to [0, 1];
(2) The network loss function is set as a similarity loss over the gray values of corresponding pixels (the formula is provided as an image in the original publication), where F and W denote the target image and the registration result image, and $P_{f_i}$ and $P_{w_i}$ denote the gray values of the pixels at corresponding position $i$ in the target image and the registration result image, respectively;
(3) A T1-weighted brain nuclear magnetic resonance 3D image is randomly selected as the image to be registered; its size is [182, 218, 182], i.e. the image consists of 182 slices of size [182, 218] each. The image to be registered and the target image are stacked;
(4) The stacked images are input into the affine transformation convolutional neural network model. The model learns the correlation between the two images and predicts 4 classes of affine transformation parameters (translation, scaling, rotation and shear, 12 values in total);
(5) The image to be registered is geometrically transformed according to the transformation parameters to obtain the pre-aligned image, and the corresponding displacement vector field is computed as follows: first, a unit grid tensor of the same size as the image is generated; it is multiplied by the 3 x 3 matrix formed by the affine transformation parameters other than the translation parameters, and the vector formed by the translation parameters is added to the product to obtain the corresponding displacement vector field (a code sketch of this computation is given after step (9));
(6) The pre-aligned image and the target image are stacked and input into the nonlinear transformation convolutional neural network model to obtain the displacement vector field corresponding to the nonlinear transformation;
(7) The displacement vector field corresponding to the affine transformation is fused with the displacement vector field corresponding to the nonlinear transformation, as follows:
Let $\mathrm{warp}(I, u)(x) = I(x + u(x))$ denote the geometric transformation of an image $I$ by a displacement vector field $u$. Given displacement vector field 1 ($u_1$) and displacement vector field 2 ($u_2$), geometrically transforming the image $I$ by the two displacement vector fields in sequence can be expressed as $\mathrm{warp}(\mathrm{warp}(I, u_1), u_2)(x)$, and then

$$\mathrm{warp}(\mathrm{warp}(I, u_1), u_2)(x) = I\big(x + u_2(x) + u_1(x + u_2(x))\big) = \mathrm{warp}(I, u)(x),$$

i.e. the fusion of the two displacement vector fields is expressed as $u = u_2 + \mathrm{warp}(u_1, u_2)$. Therefore, displacement vector field 1 is first geometrically deformed by displacement vector field 2, and the result is then added to displacement vector field 2 to obtain the final displacement vector field (see the code sketch after step (9));
(8) The image to be registered is geometrically transformed according to the fused displacement vector field;
(9) Steps (3) to (8) are repeated until the loss function converges.
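The following NumPy/SciPy sketch illustrates steps (5), (7) and (8): building the unit grid, turning the predicted affine parameters into displacement vector field 1, warping with a displacement field, and fusing the two fields as u = u2 + warp(u1, u2). The function names, the use of scipy.ndimage.map_coordinates with trilinear (order=1) interpolation, and the subtraction of the identity grid (which makes the affine field consistent with the definition warp(I, u)(x) = I(x + u(x)) used in step (7)) are illustrative assumptions rather than the patent's exact implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unit_grid(shape):
    """Unit grid tensor of shape (3, A, B, C): the voxel coordinates themselves."""
    return np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing='ij'),
                    axis=0).astype(np.float32)

def affine_displacement_field(matrix, translation, shape):
    """Step (5): multiply the unit grid by the 3x3 matrix (rotation, scaling, shear),
    add the translation vector, and subtract the identity grid so the result is a
    displacement vector field of shape (3, A, B, C)."""
    grid = unit_grid(shape)
    moved = np.einsum('ij,jabc->iabc', matrix, grid)      # apply the 3x3 matrix
    moved += translation.reshape(3, 1, 1, 1)              # add the translation
    return moved - grid                                   # displacement = new - old

def warp(volume, u, order=1):
    """warp(I, u)(x) = I(x + u(x)), sampled with trilinear interpolation."""
    coords = unit_grid(volume.shape) + u                  # sampling positions
    return map_coordinates(volume, coords, order=order, mode='nearest')

def fuse_fields(u1, u2):
    """Step (7): fused field u = u2 + warp(u1, u2); field 1 is first deformed by
    field 2, and the result is added to field 2."""
    warped_u1 = np.stack([warp(u1[i], u2) for i in range(3)], axis=0)
    return u2 + warped_u1

# Step (8) on toy data: apply the fused field to the image to be registered.
shape = (32, 32, 32)
moving = np.random.rand(*shape).astype(np.float32)
matrix = np.eye(3, dtype=np.float32)                  # stand-in for the predicted matrix
translation = np.array([1.0, 0.0, -1.0], np.float32)  # stand-in for the predicted shift
u1 = affine_displacement_field(matrix, translation, shape)
u2 = np.zeros((3,) + shape, dtype=np.float32)         # stand-in for the nonlinear field
registered = warp(moving, fuse_fields(u1, u2))
```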
The effects of the present invention can be further illustrated by comparative experiments:
example 2:
The end-to-end brain nuclear magnetic resonance image registration method based on a convolutional neural network is compared with a conventional method and a deep learning method; the specific steps are as follows:
1) Content and results of the comparative experiment:
ANTs, which gives strong registration results among conventional methods, and VoxelMorph, which gives strong registration results among deep learning methods, are selected for comparison. On a server with the same hardware and software configuration, the proposed model and the VoxelMorph model are first trained until convergence; the three methods are then used to register the brain nuclear magnetic resonance images of 8 individuals; all results are then segmented by anatomical structure with a professional segmentation tool; finally, the results are compared quantitatively with the Dice coefficient, whose formula is:
$$\mathrm{Dice}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|}$$

where $A$ and $B$ denote the voxel sets of the same anatomical structure in the two segmentation results being compared.
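A minimal NumPy sketch of this Dice computation for a pair of binary masks of the same anatomical structure (variable names are illustrative):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```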
The quantitative results are as follows:
[The quantitative comparison of Dice scores for ANTs, VoxelMorph and the proposed method is provided as a table image in the original publication.]
In summary, the invention provides an end-to-end brain nuclear magnetic resonance image registration method based on a convolutional neural network. A convolutional neural network model replaces the existing pre-alignment tool and can learn more affine transformation parameters than that tool; at the same time, a nonlinear convolutional neural network model is constructed to handle the nonlinear registration, and the affine and nonlinear convolutional neural network models are combined to realize end-to-end brain nuclear magnetic resonance image registration. The invention can serve as a precondition for many medical image processing techniques.
Details not described in the present invention are known to those skilled in the art. Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such modifications and substitutions are covered by the claims of the present invention.

Claims (5)

1. An end-to-end brain nuclear magnetic resonance image registration method based on a convolutional neural network is characterized by comprising the following steps:
1) selecting the image to be registered and the target image that need to be registered, and preprocessing the image to be registered and the target image;
2) stacking the two images into a 2-channel block, inputting the block into an affine transformation convolutional neural network model to predict affine transformation parameters, performing geometric transformation on the image to be registered according to the affine transformation parameters to obtain a pre-aligned image, and performing geometric transformation on a unit grid according to the affine transformation parameters to obtain the corresponding displacement vector field 1;
3) stacking the pre-aligned image and the target image in the manner of step 2) and inputting them into a nonlinear transformation convolutional neural network model to predict displacement vector field 2;
4) performing geometric transformation on displacement vector field 1 with displacement vector field 2, adding the result to displacement vector field 2 to obtain the fused final displacement vector field, and performing geometric transformation on the image to be registered according to the final displacement vector field to obtain the registered image.
2. The convolutional neural network-based end-to-end brain nuclear magnetic resonance image registration method of claim 1, wherein: the size of the nuclear magnetic resonance images selected in step 1) is [A, B, C], where A and B are the width and height and C is the number of slices, and the preprocessing comprises skull stripping and normalization.
3. The convolutional neural network-based end-to-end brain nuclear magnetic resonance image registration method of claim 1, wherein: the size of the image block stacked in step 2) is [1, A, B, C, 2]; the affine transformation convolutional neural network model continuously downsamples the block to extract features at different resolutions and outputs the affine transformation parameters; the transformation parameters are used to geometrically transform a unit grid to obtain displacement vector field 1, and the image to be registered is then geometrically transformed to obtain the pre-aligned image.
4. The convolutional neural network-based end-to-end brain nuclear magnetic resonance image registration method of claim 1, wherein: in step 3), the pre-aligned image and the target image are first stacked into a whole; the nonlinear transformation convolutional neural network model continuously downsamples them to learn their commonality at different resolutions, and the learned features are upsampled to obtain a displacement vector field whose size is consistent with the original image and whose number of channels is 3.
5. The convolutional neural network-based end-to-end brain nuclear magnetic resonance image registration method of claim 1, wherein: in step 4), the knowledge learned by the affine transformation convolutional neural network model and the nonlinear transformation convolutional neural network model is first fused, that is, displacement vector field 1 is geometrically transformed by displacement vector field 2 and the result is added to displacement vector field 2 to fuse the two displacement vector fields, and the image to be registered is then geometrically transformed according to the fused displacement vector field to obtain the final registered image.
CN202011207170.9A 2020-11-03 2020-11-03 End-to-end brain nuclear magnetic resonance image registration method based on convolutional neural network Active CN112132878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011207170.9A CN112132878B (en) 2020-11-03 2020-11-03 End-to-end brain nuclear magnetic resonance image registration method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011207170.9A CN112132878B (en) 2020-11-03 2020-11-03 End-to-end brain nuclear magnetic resonance image registration method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN112132878A true CN112132878A (en) 2020-12-25
CN112132878B CN112132878B (en) 2024-04-05

Family

ID=73852177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011207170.9A Active CN112132878B (en) 2020-11-03 2020-11-03 End-to-end brain nuclear magnetic resonance image registration method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN112132878B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012207A (en) * 2021-03-23 2021-06-22 北京安德医智科技有限公司 Image registration method and device
CN113487657A (en) * 2021-07-29 2021-10-08 广州柏视医疗科技有限公司 Deep learning-based mode conversion method
CN113516693A (en) * 2021-05-21 2021-10-19 郑健青 Rapid and universal image registration method
CN114332447A (en) * 2022-03-14 2022-04-12 浙江大华技术股份有限公司 License plate correction method, license plate correction device and computer readable storage medium
CN118261822A (en) * 2024-05-30 2024-06-28 贵州大学 Self-supervision magnetic resonance diffusion weighted image denoising method based on voxel replacement

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090143668A1 (en) * 2007-12-04 2009-06-04 Harms Steven E Enhancement of mri image contrast by combining pre- and post-contrast raw and phase spoiled image data
CA2757533A1 (en) * 2009-04-03 2010-10-07 The United States Of America, As Represented By The Secretary, Department Of Health And Human Services Magnetic microstructures for magnetic resonance imaging
US20180143281A1 (en) * 2016-11-22 2018-05-24 Hyperfine Research, Inc. Systems and methods for automated detection in magnetic resonance images
CN109727270A (en) * 2018-12-10 2019-05-07 杭州帝视科技有限公司 The movement mechanism and analysis of texture method and system of Cardiac Magnetic Resonance Images
WO2019121693A1 (en) * 2017-12-18 2019-06-27 Koninklijke Philips N.V. Motion compensated magnetic resonance imaging
US20190205766A1 (en) * 2018-01-03 2019-07-04 Siemens Healthcare Gmbh Medical Imaging Diffeomorphic Registration based on Machine Learning
US20200151309A1 (en) * 2018-11-08 2020-05-14 Idemia Identity & Security France Method of classification of an input image representative of a biometric trait by means of a convolutional neural network
CN111260705A (en) * 2020-01-13 2020-06-09 武汉大学 Prostate MR image multi-task registration method based on deep convolutional neural network
US20200294282A1 (en) * 2019-03-14 2020-09-17 Hyperfine Research, Inc. Deep learning techniques for alignment of magnetic resonance images

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090143668A1 (en) * 2007-12-04 2009-06-04 Harms Steven E Enhancement of mri image contrast by combining pre- and post-contrast raw and phase spoiled image data
CA2757533A1 (en) * 2009-04-03 2010-10-07 The United States Of America, As Represented By The Secretary, Department Of Health And Human Services Magnetic microstructures for magnetic resonance imaging
US20180143281A1 (en) * 2016-11-22 2018-05-24 Hyperfine Research, Inc. Systems and methods for automated detection in magnetic resonance images
WO2019121693A1 (en) * 2017-12-18 2019-06-27 Koninklijke Philips N.V. Motion compensated magnetic resonance imaging
US20190205766A1 (en) * 2018-01-03 2019-07-04 Siemens Healthcare Gmbh Medical Imaging Diffeomorphic Registration based on Machine Learning
US20200151309A1 (en) * 2018-11-08 2020-05-14 Idemia Identity & Security France Method of classification of an input image representative of a biometric trait by means of a convolutional neural network
CN109727270A (en) * 2018-12-10 2019-05-07 杭州帝视科技有限公司 The movement mechanism and analysis of texture method and system of Cardiac Magnetic Resonance Images
US20200294282A1 (en) * 2019-03-14 2020-09-17 Hyperfine Research, Inc. Deep learning techniques for alignment of magnetic resonance images
CN111260705A (en) * 2020-01-13 2020-06-09 武汉大学 Prostate MR image multi-task registration method based on deep convolutional neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
陈向前; 郭小青; 周钢; 樊瑜波; 王豫: "Research on 2D/3D medical image registration based on deep learning", 中国生物医学工程学报, no. 04 *
陈颖; 李绩鹏; 陈恒实: "Remote sensing image registration using a spatial transformer network improved with gray-level secondary correction", 中国科技论文, no. 08 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012207A (en) * 2021-03-23 2021-06-22 北京安德医智科技有限公司 Image registration method and device
CN113516693A (en) * 2021-05-21 2021-10-19 郑健青 Rapid and universal image registration method
CN113487657A (en) * 2021-07-29 2021-10-08 广州柏视医疗科技有限公司 Deep learning-based mode conversion method
CN113487657B (en) * 2021-07-29 2022-02-01 广州柏视医疗科技有限公司 Deep learning-based mode conversion method
CN114332447A (en) * 2022-03-14 2022-04-12 浙江大华技术股份有限公司 License plate correction method, license plate correction device and computer readable storage medium
CN114332447B (en) * 2022-03-14 2022-08-09 浙江大华技术股份有限公司 License plate correction method, license plate correction device and computer readable storage medium
CN118261822A (en) * 2024-05-30 2024-06-28 贵州大学 Self-supervision magnetic resonance diffusion weighted image denoising method based on voxel replacement
CN118261822B (en) * 2024-05-30 2024-08-02 贵州大学 Self-supervision magnetic resonance diffusion weighted image denoising method based on voxel replacement

Also Published As

Publication number Publication date
CN112132878B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN112132878B (en) End-to-end brain nuclear magnetic resonance image registration method based on convolutional neural network
CN111738363B (en) Alzheimer disease classification method based on improved 3D CNN network
Wang et al. Multiscale transunet++: dense hybrid u-net with transformer for medical image segmentation
CN116664588A (en) Mask modeling-based 3D medical image segmentation model building method and application thereof
Qamar et al. Multi stream 3D hyper-densely connected network for multi modality isointense infant brain MRI segmentation
CN114596318A (en) Breast cancer magnetic resonance imaging focus segmentation method based on Transformer
CN117974693B (en) Image segmentation method, device, computer equipment and storage medium
Sreeja et al. Image fusion through deep convolutional neural network
CN117274599A (en) Brain magnetic resonance segmentation method and system based on combined double-task self-encoder
CN115100165A (en) Colorectal cancer T staging method and system based on tumor region CT image
CN112990359B (en) Image data processing method, device, computer and storage medium
Yuan et al. FM-Unet: Biomedical image segmentation based on feedback mechanism Unet
Zhang et al. Coarse-to-fine depth super-resolution with adaptive RGB-D feature attention
CN115861396A (en) Medical image registration method based on deep learning
CN114022521A (en) Non-rigid multi-mode medical image registration method and system
Ma et al. IDC-Net: Multi-stage Registration Network Using Intensity Adjustment, Dual-Stream and Cost Volume
Shen et al. DSKCA-UNet: Dynamic selective kernel channel attention for medical image segmentation
CN117994273B (en) Polyp segmentation algorithm based on reparameterization and convolution attention
CN114596408B (en) Micro-parallel three-dimensional reconstruction method based on continuous two-dimensional metal distribution image
Li et al. LiU-Net: Ischemic Stroke Lesion Segmentation Based on Improved KiU-Net.
CN118314175A (en) Unsupervised deformable three-dimensional medical image registration method and device
Baldeon-Calisto et al. DistilIQA: Distilling vision transformers for no-reference perceptual CT image quality assessment
CN112420175A (en) STN-based autism brain magnetic resonance image visualization method
Shi et al. MAST-UNet: More adaptive semantic texture for segmenting pulmonary nodules

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant