CN111260700A - Full-automatic registration and segmentation method for multi-parameter magnetic resonance image

Full-automatic registration and segmentation method for multi-parameter magnetic resonance image

Info

Publication number
CN111260700A
Authority
CN
China
Prior art keywords
segmentation
registration
image
loss
phi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010023904.1A
Other languages
Chinese (zh)
Other versions
CN111260700B (en)
Inventor
夏威
李郁欣
尹波
胡斌
杨丽琴
高欣
耿道颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN202010023904.1A priority Critical patent/CN111260700B/en
Publication of CN111260700A publication Critical patent/CN111260700A/en
Application granted granted Critical
Publication of CN111260700B publication Critical patent/CN111260700B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/30 Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention relates to a full-automatic registration and segmentation method for multi-parameter magnetic resonance images, comprising joint training of a registration model and a segmentation model: (1) taking one sequence of the multi-parameter magnetic resonance image as the reference image and the other sequences as floating images, establishing a registration model that registers the floating images to the reference image, and establishing a registration loss function based on an image gray-level similarity measure; (2) constructing a segmentation model that performs target segmentation on the reference image and the floating images, and establishing a segmentation loss function over the reference image and the floating images; (3) constructing a contour similarity loss function that measures the similarity between the segmentation contours produced by the segmentation model for the reference image and the floating images, and a joint loss function that fuses gray-level and contour information; (4) training the registration model and the segmentation model alternately until a convergence condition is met. Compared with the prior art, registration and segmentation promote each other, and both registration and segmentation accuracy are effectively improved.

Description

Full-automatic registration and segmentation method for multi-parameter magnetic resonance image
Technical Field
The invention relates to an image processing method, in particular to a full-automatic registration and segmentation method for a multi-parameter magnetic resonance image.
Background
Registration of the individual sequence images in multi-parameter magnetic resonance imaging (MP-MRI) and segmentation of the target lesion are important steps in brain tumor image analysis and computation.
Multi-parameter magnetic resonance imaging (MP-MRI) comprises several image sequences such as T1W-MRI, T2W-MRI and DWI-MRI: the T1W-MRI and T2W-MRI sequences provide morphological images of the brain at different contrasts, while DWI-MRI provides molecular images of the brain, and combining these sequences allows a comprehensive analysis of brain tumors. Because the patient's head may move during the MP-MRI scan and the resolution of each magnetic resonance sequence differs, image registration is the first step in brain tumor image analysis and computation. Traditional methods construct an objective function, select a deformation model, and perform image registration by means of an optimization algorithm. Avants B et al. proposed an image registration method (SyN) based on symmetric diffeomorphisms and cross-correlation, which achieved the best registration accuracy in the 2009 international brain registration competition. However, traditional registration methods require iterative optimization for every image to be registered, the registration parameters are difficult to tune, and the computation time can reach 77 minutes. With the rapid development of artificial intelligence, researchers have applied deep learning to image registration. Cao X et al. proposed a brain image registration algorithm based on a convolutional neural network (CNN): the deformation field is first estimated by a CNN-based regression model, and the final registration result is then generated by a regression model based on a fully convolutional network (FCN), achieving registration accuracy comparable to traditional methods while greatly reducing computation time.
Brain tumors can be divided into necrotic, edematous, enhancing and non-enhancing regions, and the image characteristics of each region are closely related to tumor grade, so brain tumor segmentation is another important step in brain tumor image analysis and computation. Among brain tumor segmentation methods based on traditional image processing, Gooya A et al. proposed a joint segmentation and registration strategy based on the expectation-maximization algorithm, exploiting the complementary information of segmentation and registration to perform brain tumor image segmentation and registration simultaneously and obtaining results superior to segmentation or registration alone; however, because the algorithm has many parameters, segmentation and registration can take 3-6 hours. Deep-learning-based tumor segmentation models can greatly shorten the segmentation time. Li Zeju et al. of Fudan University trained a CNN to segment brain tumor subregions using data from 151 low-grade brain tumor patients with manually labeled tumor subregions as the gold standard; Pereira S et al. trained a CNN-based neural network model and ranked first in the brain tumor segmentation challenge (BRATS). However, researchers treat deep-learning-based segmentation and registration as independent tasks and do not make full use of their complementary information.
Traditional image registration and segmentation methods are time-consuming and cannot meet clinical requirements, while existing deep learning methods treat registration and segmentation as two independent problems and neglect their synergy. Existing methods therefore cannot exploit the complementary information of segmentation and registration to further improve registration and segmentation accuracy.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a full-automatic registration and segmentation method for a multi-parameter magnetic resonance image.
The purpose of the invention can be realized by the following technical scheme:
a full-automatic registration and segmentation method for multi-parameter magnetic resonance images comprises the joint training of a registration model and a segmentation model, and specifically comprises the following steps:
(1) taking one sequence of the multi-parameter magnetic resonance image as the reference image and the other sequences as floating images, establishing a registration model that registers the floating images to the reference image and generates a deformation field φ for image registration, and establishing a registration loss function Loss_r based on an image gray-level similarity measure;
(2) constructing a segmentation model that performs target segmentation on the reference image and the floating images, and establishing a segmentation loss function Loss_s over the reference image and the floating images;
(3) constructing a contour similarity loss function Loss_c that measures the similarity between the segmentation contours produced by the segmentation model for the reference image and the floating images, and a joint loss function Loss_rc that fuses gray-level and contour information;
(4) using the joint loss function Loss_rc, training the registration model and the segmentation model alternately until a convergence condition is met.
The registration model comprises a convolutional neural network.
The segmentation model comprises a three-dimensional fully convolutional neural network.
The registration loss function Loss_r in step (1) is specifically:
Loss_r = -NMI(F, φ(M)),
where F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, and NMI(F, φ(M)) is the local normalized mutual information of F and φ(M).
The segmentation loss function Loss_s in step (2) is specifically:
Loss_s = D(Seg(F), S) + D(Seg(φ(M)), S),
where F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, S is the segmentation label template, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), D(Seg(F), S) is the Dice score of Seg(F) and S, and D(Seg(φ(M)), S) is the Dice score of Seg(φ(M)) and S.
The contour similarity loss function Loss_c in step (3) is specifically:
Loss_c = D(Seg(F), Seg(φ(M))),
where F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), and D(Seg(F), Seg(φ(M))) is the Dice score of Seg(F) and Seg(φ(M)).
The joint loss function Loss_rc in step (3) is specifically:
Loss_rc = Loss_r + β·Loss_c,
where Loss_r is the registration loss function, Loss_c is the contour similarity loss function, and β is a weighting coefficient.
Step (4) is specifically as follows: first train the registration model and update the deformation field φ; apply φ to the floating image to correct its offset and deformation; update the segmentation loss function and train the segmentation model; then update the joint loss function; and repeat this alternating training of the segmentation model and the registration model until the number of iterations reaches a set value or the segmentation and registration accuracy reaches a set target.
Compared with the prior art, the invention has the following advantages:
the registration and the segmentation are combined, the registration and the segmentation can be mutually promoted, and the registration can better align the multi-sequence images, so that more accurate multi-sequence image fusion information is provided for the segmentation to improve the segmentation precision; the segmentation can provide contour shape information of a segmentation target for registration, and the registration accuracy can be further improved by combining the contour shape information and the gray scale information of the image.
Drawings
FIG. 1 is a flow chart of the full-automatic registration and segmentation method for multi-parameter magnetic resonance images according to the present invention;
FIG. 2 is a schematic diagram of the joint training process of registration and segmentation according to the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and a specific embodiment. Note that the following description of the embodiment is merely an illustrative example; the present invention is not limited to this application or use, nor to the following embodiment.
Examples
As shown in FIG. 1, the full-automatic registration and segmentation method for multi-parameter magnetic resonance images comprises joint training of a registration model and a segmentation model, and specifically comprises the following steps:
step (ii) of1: and constructing a registration model which takes the reference image as a reference and carries out registration on the floating image by taking one sequence in the multi-parameter magnetic resonance image as the reference image and other sequences as the floating image, and generating a deformation field phi for image registration, wherein the registration model comprises a convolutional neural network. And then establishing a registration Loss function Loss based on image gray level similarity measurer
Lossr=-NMI(F,φ(M)),
Wherein, F is a reference image, M is a floating image, phi (M) is a floating image transformed by a deformation field phi, and NMI (F, phi (M)) is local normalized mutual information of M and phi (M).
The Adam optimization algorithm is used to optimize the loss function and train the model, and the deformation field φ for image registration is obtained during training. M can then be transformed with φ to correct its displacement and distortion relative to F, aligning the pixels of M with those of F.
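As an illustration of this step, the following minimal PyTorch sketch (the patent provides no code, so every function name, signature and hyper-parameter here is this editor's assumption) shows one way to apply a dense deformation field φ to a floating volume via grid sampling and to approximate a differentiable mutual-information term with a soft (Parzen-window) joint histogram. The patent specifies local normalized mutual information; for brevity the sketch computes a global NMI, which is a simplification.

```python
import torch
import torch.nn.functional as F_nn


def warp(moving, flow):
    """Apply a dense displacement field (in voxels) to a 3-D volume.

    moving: (B, 1, D, H, W) floating image M
    flow:   (B, 3, D, H, W) displacement field phi, ordered (dz, dy, dx) in voxels
    """
    _, _, D, H, W = moving.shape
    # Identity sampling grid in voxel coordinates.
    zz, yy, xx = torch.meshgrid(
        torch.arange(D, dtype=moving.dtype, device=moving.device),
        torch.arange(H, dtype=moving.dtype, device=moving.device),
        torch.arange(W, dtype=moving.dtype, device=moving.device),
        indexing="ij",
    )
    new_z = zz + flow[:, 0]
    new_y = yy + flow[:, 1]
    new_x = xx + flow[:, 2]
    # Normalize to [-1, 1] and order the last axis as (x, y, z) for grid_sample.
    grid = torch.stack(
        (2.0 * new_x / (W - 1) - 1.0,
         2.0 * new_y / (H - 1) - 1.0,
         2.0 * new_z / (D - 1) - 1.0),
        dim=-1,
    )  # (B, D, H, W, 3)
    return F_nn.grid_sample(moving, grid, align_corners=True)


def nmi_loss(fixed, warped, bins=32, sigma=1.0 / 64):
    """Loss_r = -NMI(F, phi(M)), approximated with a soft joint histogram.

    Intensities are assumed to be pre-scaled to [0, 1]; in practice a random
    subset of voxels can be used to keep the (N, bins) weight matrices small.
    """
    x = fixed.reshape(-1)
    y = warped.reshape(-1)
    centers = torch.linspace(0.0, 1.0, bins, device=x.device, dtype=x.dtype)
    # Parzen-window assignment of every voxel to every intensity bin.
    wx = torch.exp(-0.5 * ((x[:, None] - centers) / sigma) ** 2)
    wy = torch.exp(-0.5 * ((y[:, None] - centers) / sigma) ** 2)
    wx = wx / (wx.sum(dim=1, keepdim=True) + 1e-8)
    wy = wy / (wy.sum(dim=1, keepdim=True) + 1e-8)
    joint = (wx.t() @ wy) / x.numel()          # soft joint histogram, sums to ~1
    eps = 1e-8
    px, py = joint.sum(dim=1), joint.sum(dim=0)
    h_x = -(px * torch.log(px + eps)).sum()
    h_y = -(py * torch.log(py + eps)).sum()
    h_xy = -(joint * torch.log(joint + eps)).sum()
    return -(h_x + h_y) / (h_xy + eps)         # negative normalized mutual information
```

Minimizing this loss with Adam drives the warped floating image toward the intensity statistics of the reference image, which is the behavior the registration loss of step 1 asks for.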
Step 2: construct a segmentation model that performs target segmentation on the reference image and the floating image; the segmentation model adopts a three-dimensional fully convolutional neural network. Then establish a segmentation loss function Loss_s over the reference image and the floating image:
Loss_s = D(Seg(F), S) + D(Seg(φ(M)), S),
where F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, S is the segmentation label template, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), D(Seg(F), S) is the Dice score of Seg(F) and S, and D(Seg(φ(M)), S) is the Dice score of Seg(φ(M)) and S. The Adam optimization algorithm is used to optimize the loss function and train the model, and the segmentation results Seg(F) and Seg(φ(M)) of the reference image F and the floating image M are obtained during training.
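A possible realization of this segmentation loss is sketched below, again assuming PyTorch and probability-valued segmentation outputs. The Dice terms are written as 1 - Dice so that minimizing the loss increases the Dice score; this sign convention is an editorial assumption, since the patent only names D as the Dice score.

```python
def soft_dice(pred, target, eps=1e-6):
    # Soft Dice score between a predicted probability map and a binary mask.
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)


def segmentation_loss(seg_f, seg_m_warped, label):
    """Loss_s over the reference image F and the warped floating image phi(M).

    seg_f:        Seg(F), predicted probabilities on the reference image
    seg_m_warped: Seg(phi(M)), predicted probabilities on the warped floating image
    label:        S, the segmentation label template (binary mask)
    """
    return (1.0 - soft_dice(seg_f, label)) + (1.0 - soft_dice(seg_m_warped, label))
```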
Step 3: different sequences image the same target, so the target segmentation results across sequences should be consistent. The Dice score is therefore used to measure the contour similarity of the segmentation results Seg(F) and Seg(φ(M)): a contour similarity loss function Loss_c, which measures the similarity between the segmentation contours produced by the segmentation model for the reference image and the floating image, and a joint loss function Loss_rc, which fuses gray-level and contour information, are constructed, thereby providing additional contour similarity information for registration by means of the segmentation results.
The contour similarity loss function Loss_c is specifically:
Loss_c = D(Seg(F), Seg(φ(M))),
where F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), and D(Seg(F), Seg(φ(M))) is the Dice score of Seg(F) and Seg(φ(M)).
The joint loss function Loss_rc is specifically:
Loss_rc = Loss_r + β·Loss_c,
where Loss_r is the registration loss function, Loss_c is the contour similarity loss function, and β is a weighting coefficient.
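Reusing soft_dice and nmi_loss from the earlier sketches, Loss_c and Loss_rc reduce to a few lines; the default β below is purely illustrative, since the patent does not specify its value.

```python
def contour_similarity_loss(seg_f, seg_m_warped):
    # Loss_c: Dice-based disagreement between Seg(F) and Seg(phi(M)).
    return 1.0 - soft_dice(seg_f, seg_m_warped)


def joint_loss(fixed, warped, seg_f, seg_m_warped, beta=0.5):
    # Loss_rc = Loss_r + beta * Loss_c (beta = 0.5 is an arbitrary example value).
    return nmi_loss(fixed, warped) + beta * contour_similarity_loss(seg_f, seg_m_warped)
```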
Step 4: train the registration model and the segmentation model alternately with the joint loss function Loss_rc until a convergence condition is met. Specifically, first train the registration model and update the deformation field φ; apply φ to the floating image to correct its offset and deformation; update the segmentation loss function and train the segmentation model; then update the joint loss function; and repeat this alternating training of the segmentation model and the registration model until the number of iterations reaches a set value or the segmentation and registration accuracy reaches a set target. FIG. 2 is a schematic diagram of the joint training of registration and segmentation in this embodiment, in which solid arrows represent the registration process and dashed arrows represent the segmentation process. After training, the registration model and the segmentation model are cascaded, so that the registration model registers each sequence image in the MP-MRI images and the segmentation model segments the contour of the target lesion (such as a brain tumor).
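The alternating scheme of step 4 could be organized as in the sketch below, which reuses the helpers defined in the previous sketches; the networks reg_net and seg_net, the data loader, and all hyper-parameters (learning rate, numbers of rounds and inner steps) are assumptions made for illustration rather than values taken from the patent.

```python
from itertools import cycle

import torch


def alternating_training(reg_net, seg_net, loader, beta=0.5, rounds=20, steps=100):
    """Alternately train the registration model and the segmentation model.

    reg_net: maps (F, M) -> deformation field phi (e.g. a 3-D CNN)
    seg_net: 3-D fully convolutional segmentation network returning probabilities
    loader:  iterable yielding (F, M, S) batches of reference image, floating
             image and segmentation label template
    """
    opt_reg = torch.optim.Adam(reg_net.parameters(), lr=1e-4)
    opt_seg = torch.optim.Adam(seg_net.parameters(), lr=1e-4)
    data = cycle(loader)
    for _ in range(rounds):
        # Phase 1: update the registration model with the joint loss Loss_rc.
        for _ in range(steps):
            F_img, M_img, S = next(data)
            phi = reg_net(F_img, M_img)
            M_w = warp(M_img, phi)
            with torch.no_grad():             # Seg(F) is treated as a fixed target here
                seg_f = seg_net(F_img)
            # The contour term back-propagates through Seg(phi(M)) into the
            # deformation field; only reg_net's parameters are updated below.
            loss_rc = nmi_loss(F_img, M_w) + beta * contour_similarity_loss(seg_f, seg_net(M_w))
            opt_reg.zero_grad()
            loss_rc.backward()
            opt_reg.step()
        # Phase 2: update the segmentation model with Loss_s on F and phi(M).
        for _ in range(steps):
            F_img, M_img, S = next(data)
            with torch.no_grad():             # the deformation field is frozen here
                M_w = warp(M_img, reg_net(F_img, M_img))
            loss_s = segmentation_loss(seg_net(F_img), seg_net(M_w), S)
            opt_seg.zero_grad()
            loss_s.backward()
            opt_seg.step()
```

In practice the outer loop would also track the segmentation and registration metrics so that training can stop early once the accuracy target mentioned in the patent is reached.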
The above embodiment is merely an example and does not limit the scope of the present invention. It may be implemented in various other ways, and various omissions, substitutions, and changes may be made without departing from the technical spirit of the present invention.

Claims (8)

1. A full-automatic registration and segmentation method for multi-parameter magnetic resonance images is characterized by comprising the joint training of a registration model and a segmentation model, and specifically comprising the following steps:
(1) taking one sequence of the multi-parameter magnetic resonance image as the reference image and the other sequences as floating images, establishing a registration model that registers the floating images to the reference image and generates a deformation field φ for image registration, and establishing a registration loss function Loss_r based on an image gray-level similarity measure;
(2) constructing a segmentation model that performs target segmentation on the reference image and the floating images, and establishing a segmentation loss function Loss_s over the reference image and the floating images;
(3) constructing a contour similarity loss function Loss_c that measures the similarity between the segmentation contours produced by the segmentation model for the reference image and the floating images, and a joint loss function Loss_rc that fuses gray-level and contour information;
(4) using the joint loss function Loss_rc, training the registration model and the segmentation model alternately until a convergence condition is met.
2. A method for full-automatic registration and segmentation of multi-parameter magnetic resonance images as claimed in claim 1, wherein the registration model comprises a convolutional neural network.
3. The method of claim 1, wherein the segmentation model comprises a three-dimensional fully convolutional neural network.
4. The full-automatic registration and segmentation method for multi-parameter magnetic resonance images according to claim 1, wherein the registration loss function Loss_r in step (1) is specifically:
Loss_r = -NMI(F, φ(M)),
where F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, and NMI(F, φ(M)) is the local normalized mutual information of F and φ(M).
5. The full-automatic registration and segmentation method for multi-parameter magnetic resonance images according to claim 1, wherein the segmentation loss function Loss_s in step (2) is specifically:
Loss_s = D(Seg(F), S) + D(Seg(φ(M)), S),
where F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, S is the segmentation label template, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), D(Seg(F), S) is the Dice score of Seg(F) and S, and D(Seg(φ(M)), S) is the Dice score of Seg(φ(M)) and S.
6. The full-automatic registration and segmentation method for multi-parameter magnetic resonance images according to claim 1, wherein the contour similarity loss function Loss_c in step (3) is specifically:
Loss_c = D(Seg(F), Seg(φ(M))),
where F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), and D(Seg(F), Seg(φ(M))) is the Dice score of Seg(F) and Seg(φ(M)).
7. The full-automatic registration and segmentation method for multi-parameter magnetic resonance images according to claim 1, wherein the joint loss function Loss_rc in step (3) is specifically:
Loss_rc = Loss_r + β·Loss_c,
where Loss_r is the registration loss function, Loss_c is the contour similarity loss function, and β is a weighting coefficient.
8. The full-automatic registration and segmentation method for multi-parameter magnetic resonance images according to claim 1, wherein step (4) is specifically as follows: first train the registration model and update the deformation field φ; apply φ to the floating image to correct its offset and deformation; update the segmentation loss function and train the segmentation model; then update the joint loss function; and repeat this alternating training of the segmentation model and the registration model until the number of iterations reaches a set value or the segmentation and registration accuracy reaches a set target.
CN202010023904.1A 2020-01-09 2020-01-09 Full-automatic registration and segmentation method for multi-parameter magnetic resonance image Active CN111260700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010023904.1A CN111260700B (en) 2020-01-09 2020-01-09 Full-automatic registration and segmentation method for multi-parameter magnetic resonance image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010023904.1A CN111260700B (en) 2020-01-09 2020-01-09 Full-automatic registration and segmentation method for multi-parameter magnetic resonance image

Publications (2)

Publication Number Publication Date
CN111260700A (en) 2020-06-09
CN111260700B CN111260700B (en) 2023-05-30

Family

ID=70950357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010023904.1A Active CN111260700B (en) 2020-01-09 2020-01-09 Full-automatic registration and segmentation method for multi-parameter magnetic resonance image

Country Status (1)

Country Link
CN (1) CN111260700B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007054864A1 (en) * 2005-11-14 2007-05-18 Koninklijke Philips Electronics N.V. A method, a system and a computer program for volumetric registration
CN107103618A (en) * 2017-02-20 2017-08-29 南方医科大学 Lung 4D CT leggy method for registering images based on regression forecasting
CN110503699A (en) * 2019-07-01 2019-11-26 天津大学 A kind of CT projection path reduce in the case of CT image rebuilding method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘建磊; 隋青美; 朱文兴: "Magnetic resonance image segmentation combining probability density functions and active contour models" (in Chinese), Optics and Precision Engineering *
程焱; 周焰; 林洪涛; 潘恒辉: "Automatic registration and mosaicking of remote sensing images based on SIFT features" (in Chinese), Remote Sensing Technology and Application *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021238732A1 (en) * 2020-05-23 2021-12-02 Ping An Technology (Shenzhen) Co., Ltd. Device and method for alignment of multi-modal clinical images using joint synthesis, segmentation, and registration
WO2022011984A1 (en) * 2020-07-16 2022-01-20 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device, storage medium, and program product
CN112598669A (en) * 2021-03-04 2021-04-02 之江实验室 Lung lobe segmentation method based on digital human technology
CN112598669B (en) * 2021-03-04 2021-06-01 之江实验室 Lung lobe segmentation method based on digital human technology
CN112767299A (en) * 2021-04-07 2021-05-07 成都真实维度科技有限公司 Multi-mode three-dimensional image registration and fusion method
CN113627564A (en) * 2021-08-23 2021-11-09 李永鑫 Deep learning-based CT medical image processing model training method and diagnosis and treatment system

Also Published As

Publication number Publication date
CN111260700B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN111260700A (en) Full-automatic registration and segmentation method for multi-parameter magnetic resonance image
Hering et al. Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning
CN108776969B (en) Breast ultrasound image tumor segmentation method based on full convolution network
WO2021088747A1 (en) Deep-learning-based method for predicting morphological change of liver tumor after ablation
Shen et al. Measuring temporal morphological changes robustly in brain MR images via 4-dimensional template warping
CN111091589A (en) Ultrasonic and nuclear magnetic image registration method and device based on multi-scale supervised learning
US10937158B1 (en) Medical image segmentation based on mixed context CNN model
CN107680107B (en) Automatic segmentation method of diffusion tensor magnetic resonance image based on multiple maps
CN101666865B (en) Method for registering diffusion tensor nuclear magnetic resonance image in local quick traveling mode
CN113674330B (en) Pseudo CT image generation system based on generation countermeasure network
WO2022247218A1 (en) Image registration method based on automatic delineation
CN111325750B (en) Medical image segmentation method based on multi-scale fusion U-shaped chain neural network
CN115457020B (en) 2D medical image registration method fusing residual image information
CN112215844A (en) MRI (magnetic resonance imaging) multi-mode image segmentation method and system based on ACU-Net
CN111430025B (en) Disease diagnosis model training method based on medical image data augmentation
CN114266939B (en) Brain extraction method based on ResTLU-Net model
CN113159223A (en) Carotid artery ultrasonic image identification method based on self-supervision learning
CN100411587C (en) Elastic registration method of stereo MRI brain image based on machine learning
CN111080676A (en) Method for tracking endoscope image sequence feature points through online classification
CN112802073B (en) Fusion registration method based on image data and point cloud data
Che et al. Dgr-net: Deep groupwise registration of multispectral images
CN114529551A (en) Knowledge distillation method for CT image segmentation
CN111127488B (en) Method for automatically constructing patient anatomical structure model based on statistical shape model
CN112200810A (en) Multi-modal automated ventricular segmentation system and method of use thereof
Peng et al. Keypoint matching networks for longitudinal fundus image affine registration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant