CN111260700B - Full-automatic registration and segmentation method for multi-parameter magnetic resonance image - Google Patents
- Publication number
- CN111260700B (application CN202010023904.1A)
- Authority
- CN
- China
- Prior art keywords
- segmentation
- image
- registration
- phi
- loss
- Prior art date
- Legal status (the status listed is an assumption and is not a legal conclusion)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/30—Assessment of water resources
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a fully automatic registration and segmentation method for multi-parameter magnetic resonance images, comprising joint training of a registration model and a segmentation model: (1) taking one sequence of the multi-parameter magnetic resonance image as the reference image and the remaining sequences as floating images, constructing a registration model that registers the floating images to the reference image, and constructing a registration loss function based on an image gray-level similarity measure; (2) constructing a segmentation model that performs target segmentation on the reference image and the floating images, and establishing a segmentation loss function over the reference image and the floating images; (3) constructing a contour similarity loss function that measures the similarity between the segmentation model's results on the reference image and on the floating images, and a joint loss function combining gray-level and contour information; (4) training the registration model and the segmentation model alternately until a convergence condition is met. Compared with the prior art, the method allows registration and segmentation to promote each other and effectively improves both registration and segmentation accuracy.
Description
Technical Field
The invention relates to an image processing method, and in particular to a fully automatic registration and segmentation method for multi-parameter magnetic resonance images.
Background
Registering the sequence images in multi-parameter magnetic resonance imaging (MP-MRI) and segmenting the target lesion are important steps in brain tumor image analysis and computation.
Multi-parameter magnetic resonance imaging (MP-MRI) comprises multiple image sequences, such as T1W-MRI, T2W-MRI, and DWI-MRI: T1W-MRI and T2W-MRI provide morphological images of the brain under different contrasts, while DWI-MRI provides molecular images of the brain; combining the sequences allows comprehensive analysis of brain tumors. Since the patient's head may move during an MP-MRI scan and the resolutions of the magnetic resonance sequences differ, image registration is the first step in brain tumor image analysis and computation. Traditional methods construct an objective function, select a deformation model, and perform image registration by means of an optimization algorithm. Avants B et al. proposed an image registration method (SyN) based on symmetric diffeomorphisms and cross-correlation, which achieved the best registration accuracy in the 2009 international brain registration competition. However, traditional registration methods must iteratively optimize each image pair to be registered, so the registration parameters are difficult to tune and the computation time can be as long as 77 minutes. With the rapid development of artificial intelligence, researchers have applied deep learning to image registration. Cao X et al. proposed a brain image registration algorithm based on a convolutional neural network (CNN): a deformation field is first estimated by a CNN-based regression model, and the final registration result is then generated by a regression model based on a fully convolutional network (FCN), achieving registration accuracy comparable to traditional methods while greatly shortening the computation time.
Brain tumors can be divided into necrotic, edematous, enhancing, and non-enhancing regions, and the image features of each region are closely related to tumor classification; brain tumor segmentation is therefore another important step in brain tumor image analysis and computation. Among brain tumor segmentation methods based on traditional image processing, Gooya A et al. proposed a joint segmentation-and-registration strategy based on the expectation-maximization (EM) algorithm, performing brain tumor segmentation and registration simultaneously and exploiting their complementary information to obtain results superior to either task alone; however, because of the algorithm's complex parameters, segmentation and registration can take 3-6 hours. Deep-learning-based tumor segmentation models can greatly shorten the segmentation time. Li Zeju et al. at Fudan University trained a CNN to segment brain tumor subregions using data from 151 low-grade brain tumor patients, with manually annotated tumor subregions as the gold standard; Pereira S et al. trained a CNN-based neural network model that ranked first in the Brain Tumor Segmentation challenge (BraTS). However, these works treat deep-learning-based segmentation and registration as independent tasks and do not fully exploit their complementary information.
Traditional image registration and segmentation methods are time-consuming and cannot meet clinical requirements, while existing deep learning methods treat registration and segmentation as two independent problems and ignore their synergy. Existing methods therefore cannot exploit the complementary information between segmentation and registration to further improve accuracy.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and to provide a fully automatic registration and segmentation method for multi-parameter magnetic resonance images.
The aim of the invention can be achieved by the following technical scheme:
a multi-parameter magnetic resonance image full-automatic registration and segmentation method comprises joint training of a registration model and a segmentation model, and specifically comprises the following steps:
(1) Taking one sequence of the multi-parameter magnetic resonance images as a reference image and the remaining sequences as floating images, constructing a registration model that registers the floating images to the reference image, generating a deformation field φ for image registration, and establishing a registration loss function Loss_r based on an image gray-level similarity measure;
(2) Constructing a segmentation model for performing target segmentation on the reference image and the floating image, and establishing a segmentation loss function Loss_s over the reference image and the floating image;
(3) Constructing a contour similarity loss function Loss_c that measures the similarity between the segmentation model's results for the reference image and the floating image, and a joint loss function Loss_rc that fuses gray-level and contour information;
(4) Training the registration model and the segmentation model alternately, with minimization of the joint loss function Loss_rc as the objective, until a convergence condition is met.
The registration model includes a convolutional neural network.
The segmentation model comprises a three-dimensional full convolution neural network.
The registration loss function Loss_r in step (1) is:
Loss_r = -NMI(F, φ(M)),
wherein F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, and NMI(F, φ(M)) is the local normalized mutual information of F and φ(M).
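As an illustration only, the similarity measure behind Loss_r can be sketched with a global histogram-based normalized mutual information; the patent's *local* NMI and its differentiable network form are not specified, so this is a simplified, non-trainable stand-in:

```python
import numpy as np

def normalized_mutual_information(f, m, bins=32):
    """Histogram-based NMI of two images (Studholme's form, in [1, 2])."""
    joint, _, _ = np.histogram2d(f.ravel(), m.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    # Marginal and joint entropies, with 0*log(0) taken as 0.
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy

def registration_loss(fixed, warped_moving):
    # Loss_r = -NMI(F, phi(M)): maximizing similarity minimizes the loss.
    return -normalized_mutual_information(fixed, warped_moving)
```

An identical image pair gives NMI = 2, while independent images give NMI near 1, so better alignment yields a lower loss; a trainable registration network would instead need a differentiable approximation (e.g., soft histogram binning).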
The segmentation loss function Loss_s in step (2) is:
Loss_s = D(Seg(F), S) + D(Seg(φ(M)), S),
wherein F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, S is the segmentation annotation template, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), D(Seg(F), S) is the Dice score of Seg(F) and S, and D(Seg(φ(M)), S) is the Dice score of Seg(φ(M)) and S.
The contour similarity loss function Loss_c in step (3) is:
Loss_c = D(Seg(F), Seg(φ(M))),
wherein F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), and D(Seg(F), Seg(φ(M))) is the Dice score of Seg(F) and Seg(φ(M)).
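For illustration, both Loss_s and Loss_c reduce to Dice overlaps between masks. The sketch below assumes D denotes the Dice dissimilarity 1 - Dice (an assumption: the patent calls D a Dice score without fixing the sign convention, and a loss to be minimized must decrease as overlap improves):

```python
import numpy as np

def dice_score(a, b, eps=1e-6):
    """Dice overlap of two binary masks: 1 = identical, 0 = disjoint."""
    a, b = a.astype(bool), b.astype(bool)
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

def segmentation_loss(seg_f, seg_warped_m, template_s):
    # Loss_s = D(Seg(F), S) + D(Seg(phi(M)), S), with D = 1 - Dice (assumed).
    return (1.0 - dice_score(seg_f, template_s)) + \
           (1.0 - dice_score(seg_warped_m, template_s))

def contour_loss(seg_f, seg_warped_m):
    # Loss_c = D(Seg(F), Seg(phi(M))): disagreement between the contours
    # segmented from the reference and from the warped floating image.
    return 1.0 - dice_score(seg_f, seg_warped_m)
```

In a deep segmentation network the masks would be soft probability maps, and the same formula applies with sums taken over the probabilities.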
The joint loss function Loss_rc in step (3) is:
Loss_rc = Loss_r + β·Loss_c,
wherein Loss_r is the registration loss function, Loss_c is the contour similarity loss function, and β is a weighting coefficient.
Step (4) is specifically as follows: first train the registration model and update the deformation field φ; transform the floating image with φ, correcting its offset and deformation; update the segmentation loss function and train the segmentation model; then update the joint loss function; and repeat, alternately training the segmentation and registration models until the number of iterations reaches a set value or the segmentation and registration accuracy reaches a set index.
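The alternating scheme can be skeletonized as below; `reg_step` and `seg_step` are hypothetical callables standing in for one training pass of each model (optimizers, data loading, and the accuracy index are left to the implementer and are replaced here by a target loss):

```python
def alternating_training(reg_step, seg_step, beta=0.5, max_iters=100, target=None):
    """Step (4) as a loop: alternately update registration and segmentation,
    tracking the joint loss Loss_rc = Loss_r + beta * Loss_c, until the
    iteration cap is reached or the joint loss hits a target value."""
    history = []
    for _ in range(max_iters):
        loss_r = reg_step()            # update phi, then warp M with it
        loss_c = seg_step()            # retrain segmentation on F and phi(M)
        loss_rc = loss_r + beta * loss_c
        history.append(loss_rc)
        if target is not None and loss_rc <= target:
            break                      # stand-in for the accuracy index
    return history
```

With toy steps whose registration loss decreases by 1 per pass, the loop stops as soon as the joint loss falls below the target.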
Compared with the prior art, the invention has the following advantages:
the invention combines the registration and the segmentation, the registration and the segmentation can be mutually promoted, and the registration can better align the multi-sequence images, thereby providing more accurate multi-sequence image fusion information for the segmentation so as to improve the segmentation precision; the segmentation can also provide contour shape information of a segmentation target for registration, and the registration accuracy can be further improved by combining the contour shape information with the gray level information of the image.
Drawings
FIG. 1 is a flow diagram of the fully automatic registration and segmentation method for multi-parameter magnetic resonance images of the present invention;
FIG. 2 is a schematic diagram of the registration-segmentation joint training process of the present invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific examples. Note that the following description of the embodiments is merely an example; the present invention is not limited to the applications and uses thereof, nor to the following embodiments.
Examples
As shown in FIG. 1, the fully automatic registration and segmentation method for multi-parameter magnetic resonance images includes joint training of a registration model and a segmentation model, specifically the following steps:
step 1: one sequence in the multi-parameter magnetic resonance image is taken as a reference image, the other sequences are taken as floating images, a registration model taking the reference image as a standard and registering the floating images is constructed, a deformation field phi for image registration is generated, and the registration model comprises a convolutional neural network. And further establishes a registration Loss function Loss based on the image gray level similarity measure r :
Loss r =-NMI(F,φ(M)),
Wherein F is a reference image, M is a floating image, phi (M) is a floating image transformed by a deformation field phi, NMI (F, phi (M)) is local normalized mutual information of M and phi (M).
And carrying out loss function optimization and model training by adopting an Adam optimization algorithm, and obtaining a deformation field phi for image registration in the training process. M can be transformed by phi, and the displacement and deformation of M relative to F are corrected to align M with the pixels in F.
Step 2: Construct a segmentation model that performs target segmentation on the reference image and the floating image; the segmentation model adopts a three-dimensional fully convolutional neural network. A segmentation loss function Loss_s over the reference image and the floating image is then established:
Loss_s = D(Seg(F), S) + D(Seg(φ(M)), S),
wherein F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, S is the segmentation annotation template, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), D(Seg(F), S) is the Dice score of Seg(F) and S, and D(Seg(φ(M)), S) is the Dice score of Seg(φ(M)) and S. The loss function is optimized and the model trained with the Adam algorithm, and the segmentation results Seg(F) and Seg(φ(M)) of the reference image F and the floating image M are obtained during training.
Step 3: The different sequences image the same target, so their target segmentation results should be consistent. The contour similarity of the segmentation results Seg(F) and Seg(φ(M)) is therefore measured by the Dice score, yielding a contour similarity loss function Loss_c that measures the similarity between the segmentation model's results for the reference image and the floating image, and a joint loss function Loss_rc that fuses gray-level and contour information, so that the segmentation results provide additional contour similarity information for registration.
The contour similarity loss function Loss_c is:
Loss_c = D(Seg(F), Seg(φ(M))),
wherein F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), and D(Seg(F), Seg(φ(M))) is the Dice score of Seg(F) and Seg(φ(M)).
The joint loss function Loss_rc is:
Loss_rc = Loss_r + β·Loss_c,
wherein Loss_r is the registration loss function, Loss_c is the contour similarity loss function, and β is a weighting coefficient.
Step 4: Train the registration model and the segmentation model alternately, with minimization of the joint loss function Loss_rc as the objective, until a convergence condition is met. Specifically, first train the registration model and update the deformation field φ; transform the floating image with φ to correct its offset and deformation; update the segmentation loss function and train the segmentation model; then update the joint loss function; and repeat, alternately training the segmentation and registration models, until the number of iterations reaches a set value or the segmentation and registration accuracy reaches a set index. FIG. 2 is a schematic diagram of the registration-segmentation joint training process of this embodiment, in which solid arrows indicate the registration process and dashed arrows the segmentation process. After training, the registration model and the segmentation model are cascaded, so that the registration model registers the sequence images of an MP-MRI scan and the segmentation model segments the contour of the target lesion (e.g., a brain tumor).
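The cascaded use of the two trained models at inference time can be sketched as follows; all the callables (`predict_phi`, `warp`, `segment`) are hypothetical stand-ins for the trained registration network, the warping operator, and the trained segmentation network:

```python
def register_and_segment(predict_phi, warp, segment, fixed, moving_sequences):
    """Inference cascade: the registration model aligns every floating
    MP-MRI sequence to the reference image, then the segmentation model
    delineates the target lesion on the reference and on each
    registered sequence."""
    warped = [warp(m, predict_phi(fixed, m)) for m in moving_sequences]
    return segment(fixed), [segment(w) for w in warped]
```

A toy check with 1-D "images", a scalar offset as the deformation field, and thresholding as segmentation shows the floating sequences being brought into the reference frame before segmentation.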
The above embodiments are merely examples, and do not limit the scope of the present invention. These embodiments may be implemented in various other ways, and various omissions, substitutions, and changes may be made without departing from the scope of the technical idea of the present invention.
Claims (4)
1. A fully automatic registration and segmentation method for multi-parameter magnetic resonance images, characterized by comprising joint training of a registration model and a segmentation model, specifically the following steps:
(1) Taking one sequence of the multi-parameter magnetic resonance images as a reference image and the sequences other than the reference image as floating images, constructing a registration model that registers the floating images to the reference image, generating a deformation field φ for image registration, and establishing a registration loss function Loss_r based on an image gray-level similarity measure;
(2) Constructing a segmentation model for performing target segmentation on the reference image and the floating image, and establishing a segmentation loss function Loss_s for the reference image and the floating image;
(3) Constructing a contour similarity loss function Loss_c that measures the similarity between the segmentation model's results for the reference image and the floating image, and a joint loss function Loss_rc that fuses gray-level and contour information;
(4) Training the registration model and the segmentation model alternately, with minimization of the joint loss function Loss_rc as the objective, until a convergence condition is met;
the registration loss function Loss_r in step (1) is:
Loss_r = -NMI(F, φ(M)),
wherein F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, and NMI(F, φ(M)) is the local normalized mutual information of F and φ(M);
the segmentation loss function Loss_s in step (2) is:
Loss_s = D(Seg(F), S) + D(Seg(φ(M)), S),
wherein F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, S is the segmentation annotation template, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), D(Seg(F), S) is the Dice score of Seg(F) and S, and D(Seg(φ(M)), S) is the Dice score of Seg(φ(M)) and S;
the contour similarity loss function Loss_c in step (3) is:
Loss_c = D(Seg(F), Seg(φ(M))),
wherein F is the reference image, M is the floating image, φ(M) is the floating image transformed by the deformation field φ, Seg(F) is the segmentation result of F, Seg(φ(M)) is the segmentation result of φ(M), and D(Seg(F), Seg(φ(M))) is the Dice score of Seg(F) and Seg(φ(M));
the joint loss function Loss_rc in step (3) is:
Loss_rc = Loss_r + β·Loss_c,
wherein Loss_r is the registration loss function, Loss_c is the contour similarity loss function, and β is a weighting coefficient.
2. The fully automatic registration and segmentation method for multi-parameter magnetic resonance images according to claim 1, wherein the registration model comprises a convolutional neural network.
3. The fully automatic registration and segmentation method for multi-parameter magnetic resonance images according to claim 1, wherein the segmentation model comprises a three-dimensional fully convolutional neural network.
4. The fully automatic registration and segmentation method for multi-parameter magnetic resonance images according to claim 1, wherein step (4) is specifically: first training the registration model and updating the deformation field φ; transforming the floating image with φ to correct its offset and deformation; updating the segmentation loss function and training the segmentation model; then updating the joint loss function; and repeating, alternately training the segmentation and registration models until the number of iterations reaches a set value or the segmentation and registration accuracy reaches a set index.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010023904.1A CN111260700B (en) | 2020-01-09 | 2020-01-09 | Full-automatic registration and segmentation method for multi-parameter magnetic resonance image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111260700A (en) | 2020-06-09
CN111260700B (en) | 2023-05-30
Family
ID=70950357
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010023904.1A Active CN111260700B (en) | 2020-01-09 | 2020-01-09 | Full-automatic registration and segmentation method for multi-parameter magnetic resonance image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111260700B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11348259B2 (en) * | 2020-05-23 | 2022-05-31 | Ping An Technology (Shenzhen) Co., Ltd. | Device and method for alignment of multi-modal clinical images using joint synthesis, segmentation, and registration |
CN111798498A (en) * | 2020-07-16 | 2020-10-20 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112598669B (en) * | 2021-03-04 | 2021-06-01 | 之江实验室 | Lung lobe segmentation method based on digital human technology |
CN112767299B (en) * | 2021-04-07 | 2021-07-06 | 成都真实维度科技有限公司 | Multi-mode three-dimensional image registration and fusion method |
CN113627564A (en) * | 2021-08-23 | 2021-11-09 | 李永鑫 | Deep learning-based CT medical image processing model training method and diagnosis and treatment system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007054864A1 (en) * | 2005-11-14 | 2007-05-18 | Koninklijke Philips Electronics N.V. | A method, a system and a computer program for volumetric registration |
CN107103618A (en) * | 2017-02-20 | 2017-08-29 | 南方医科大学 | Lung 4D CT leggy method for registering images based on regression forecasting |
CN110503699A (en) * | 2019-07-01 | 2019-11-26 | 天津大学 | A kind of CT projection path reduce in the case of CT image rebuilding method |
Non-Patent Citations (2)
Title |
---|
Cheng Yan, Zhou Yan, Lin Hongtao, Pan Henghui. Automatic registration and mosaicking of remote sensing images based on SIFT features. Remote Sensing Technology and Application, vol. 23, no. 6, pp. 721-728 * |
Liu Jianlei, Sui Qingmei, Zhu Wenxing. Magnetic resonance image segmentation combining probability density functions and active contour models. Optics and Precision Engineering, vol. 22, no. 12, pp. 3435-3443 * |
Also Published As
Publication number | Publication date |
---|---|
CN111260700A (en) | 2020-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111260700B (en) | Full-automatic registration and segmentation method for multi-parameter magnetic resonance image | |
CN111091589B (en) | Ultrasonic and nuclear magnetic image registration method and device based on multi-scale supervised learning | |
CN109377520B (en) | Heart image registration system and method based on semi-supervised circulation GAN | |
CN112365464B (en) | GAN-based medical image lesion area weak supervision positioning method | |
CN110390665B (en) | Knee joint disease ultrasonic diagnosis method based on deep learning multichannel and graph embedding method | |
WO2022247218A1 (en) | Image registration method based on automatic delineation | |
CN113298830B (en) | Acute intracranial ICH region image segmentation method based on self-supervision | |
CN111145200B (en) | Blood vessel center line tracking method combining convolutional neural network and cyclic neural network | |
CN111325750A (en) | Medical image segmentation method based on multi-scale fusion U-shaped chain neural network | |
CN112164043A (en) | Method and system for splicing multiple fundus images | |
CN115457020B (en) | 2D medical image registration method fusing residual image information | |
CN111210909A (en) | Deep neural network-based rectal cancer T stage automatic diagnosis system and construction method thereof | |
CN113159223A (en) | Carotid artery ultrasonic image identification method based on self-supervision learning | |
CN113674330A (en) | Pseudo CT image generation system based on generation countermeasure network | |
CN111080676B (en) | Method for tracking endoscope image sequence feature points through online classification | |
CN111128349A (en) | GAN-based medical image focus detection marking data enhancement method and device | |
CN117218127B (en) | Ultrasonic endoscope auxiliary monitoring system and method | |
CN112802073B (en) | Fusion registration method based on image data and point cloud data | |
CN112686932B (en) | Image registration method for medical image, image processing method and medium | |
CN111476802B (en) | Medical image segmentation and tumor detection method, equipment and readable storage medium | |
CN110728660B (en) | Method and device for lesion segmentation based on ischemic stroke MRI detection mark | |
Dandan et al. | A multi-model organ segmentation method based on abdominal ultrasound image | |
CN114549396A (en) | Spine interactive and automatic segmentation and refinement method based on graph neural network | |
Favaedi et al. | Cephalometric landmarks identification using probabilistic relaxation | |
CN109697713B (en) | Intervertebral disc positioning and labeling method based on deep learning and spatial relationship reasoning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |