CN117994303A - Multi-mode registration method, device, equipment and storage medium based on prostate MR image segmentation result - Google Patents

Multi-mode registration method, device, equipment and storage medium based on prostate MR image segmentation result

Info

Publication number
CN117994303A
Authority
CN
China
Prior art keywords
image
prostate
mri
tumor
segmentation
Prior art date
Legal status
Pending
Application number
CN202311851862.0A
Other languages
Chinese (zh)
Inventor
王博
张兆东
陈波
张凯凯
王亚飞
何云迪
Current Assignee
Lb Ke Ce Shanghai Intelligent Medical Technology Co ltd
Original Assignee
Lb Ke Ce Shanghai Intelligent Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Lb Ke Ce Shanghai Intelligent Medical Technology Co ltd
Priority to CN202311851862.0A
Publication of CN117994303A
Legal status: Pending

Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention provides a multi-modal registration method, device, equipment and storage medium based on a prostate MR image segmentation result, and relates to the field of image registration. The registration method comprises the following steps. Segmentation: the MRI image and the US image are input into their respective segmentation networks to obtain an MRI prostate image, a US prostate image, and probability maps of the respective tumor and feature point labels. Affine transformation: the MRI prostate image, the US prostate image and the probability maps of the respective tumor and feature point labels are input into an affine transformation network, which outputs an affine transformation matrix and yields the affine-transformed MRI prostate image and the probability maps of its tumor and feature point labels. Elastic transformation: the US prostate image and the affine-transformed MRI prostate image, tumor and feature point label probability maps are input into an elastic transformation network, which outputs a dense deformation field and yields the elastically transformed MRI prostate image and the probability maps of its tumor and feature point labels.

Description

Multi-mode registration method, device, equipment and storage medium based on prostate MR image segmentation result
Technical Field
The present invention relates to the field of image registration, and in particular, to a method, apparatus, device, and storage medium for multi-modality registration based on a prostate MR image segmentation result.
Background
In the course of radiation therapy for prostate cancer, real-time ultrasound (US) images are often preferred for guiding needle insertion into the tumor, in order to increase the effectiveness of the therapy regimen. However, ultrasound imaging is limited by low tissue contrast, and the needle can introduce artifacts and reduce image quality. Magnetic resonance (MR) imaging shows the fine structure of soft tissue better than US imaging and is therefore advantageous for tumor identification. Delineating organ and tumor contours on high-quality MR images, and then fusing them onto the US images using image registration techniques, helps to improve the effectiveness of prostate radiation therapy.
In recent years, image registration methods based on deep learning have been proposed to automate registration. Typically, the source and target images to be registered are input to a deep learning network, which predicts the corresponding transformation matrix or deformation field. The source image can then be transformed with the matrix or deformation field to generate a registered image. During network training, the network parameters are optimized by measuring the similarity between the registered image and the target image.
However, multi-modal image registration remains a challenging task, mainly because it is difficult to measure image similarity during the optimization process. Because of differences in imaging principles, the same anatomical structure may appear in different intensity ranges in MR and US images, so conventional intensity-based measures cannot be used for multi-modal image registration. To address this problem, many multi-modal registration strategies have been studied. Some studies use a supervised method that evaluates the similarity between the source and target images by measuring the difference between the predicted and ground-truth transformation matrices, which is strongly limited by the availability and quality of the ground-truth matrices. Other studies have proposed weakly supervised strategies that bypass both ground-truth transformation matrices and intensity-based similarity measures, instead replacing pixel intensities with labels representing anatomical structures to measure image consistency. By marking anatomical structures on the source and target images and measuring the overlap between the transformed labels and the corresponding target labels, image similarity is assessed and the learning process of the model is guided.
Aiming at the problems of poor image quality, lack of weakly supervised labels and difficulty in accurate registration, the present scheme provides a novel deep-learning multi-modal registration scheme that combines an image segmentation model to obtain labels of the prostate, the tumor and some feature points, thereby achieving more accurate multi-modal registration.
Disclosure of Invention
The purpose of the invention is to provide a multi-modal registration method, device, equipment and storage medium based on the segmentation result of a prostate MR image, so as to solve the problems existing in the prior art.
In a first aspect, a multi-modality registration method based on a segmentation result of a prostate MR image is provided, including the following stages:
S1, segmentation stage: inputting the MRI image and the US image into their respective segmentation networks to obtain an MRI prostate image, a US prostate image, and probability maps of the respective tumor and feature point labels;
S2, affine transformation stage: inputting the MRI prostate image, the US prostate image and the probability maps of the respective tumor and feature point labels into an affine transformation network, outputting an affine transformation matrix, and obtaining the affine-transformed MRI prostate image and the probability maps of its tumor and feature point labels;
S3, elastic transformation stage: inputting the US prostate image and the affine-transformed MRI prostate image, tumor and feature point label probability maps into an elastic transformation network, outputting a dense deformation field, and obtaining the elastically transformed MRI prostate image and the probability maps of its tumor and feature point labels.
In a further embodiment of the first aspect, step S2 further comprises:
after affine registration, inputting the deformed MRI segmentation labels and the US segmentation labels into a deformable registration network;
during network training, using the Dice loss as part of the label similarity cost function, encouraging the deformable registration network to create overlap between the deformed MR labels and the fixed US labels.
In a further embodiment of the first aspect, the affine transformation loss function in step S2 is the Dice loss function L_{Dice}:

L_{Dice} = 1 - \frac{2\sum_{p \in V} w_p f_p}{\sum_{p \in V} w_p + \sum_{p \in V} f_p}

where w_p and f_p represent the label probability values for pixel p, and V represents the entire 3D image.
In a further embodiment of the first aspect, on the basis of the Dice loss function L_{Dice}, the L2 norm of the deformation field gradient is added as a regularization term to smooth the registration deformation field:

L_{Smooth} = \sum_{p \in V} \left\lVert \nabla \phi(p) \right\rVert_2^2

where L_{Smooth} is the regularization constraint term of the deformation field, namely the L_2 norm of the spatial gradient of the deformation field, and \phi is the registration deformation field.
In a further embodiment of the first aspect, the total loss function L_{Total} is obtained from the Dice loss function L_{Dice} and the regularization constraint term L_{Smooth} of the deformation field:

L_{Total} = \sum_{k=0}^{n} \lambda_k L_{Dice}^{(k)} + L_{Smooth}

where L_{Total} is the total loss function, L_{Dice}^{(k)} is the Dice loss function for the k-th label, k = 0, 1, ..., n, and \lambda_k is the loss function weight corresponding to the k-th label.
In a second aspect of the present invention, a multi-modal registration apparatus based on a segmentation result of an MR image of a prostate is provided, the apparatus comprising a segmentation module, an affine transformation module, and an elastic transformation module.
The segmentation module is used for inputting the MRI image and the US image into their respective segmentation networks to obtain an MRI prostate image, a US prostate image, and probability maps of the respective tumor and feature point labels;
the affine transformation module is used for inputting the MRI prostate image, the US prostate image and the probability maps of the respective tumor and feature point labels into an affine transformation network, outputting an affine transformation matrix, and obtaining the affine-transformed MRI prostate image and the probability maps of its tumor and feature point labels;
the elastic transformation module is used for inputting the US prostate image and the affine-transformed MRI prostate image, tumor and feature point label probability maps into an elastic transformation network, outputting a dense deformation field, and obtaining the elastically transformed MRI prostate image and the probability maps of its tumor and feature point labels.
In a third aspect of the invention, an electronic device is presented, the device comprising: a processor and a memory storing computer program instructions; the processor, when executing the computer program instructions, implements a multimodal registration method based on the segmentation results of the MR images of the prostate as described in the first aspect.
In a fourth aspect of the present invention, a computer readable storage medium is provided, wherein at least one executable instruction is stored in the storage medium, and when the executable instruction is executed on an electronic device, the electronic device is caused to perform the multi-modality registration method based on the segmentation result of the MR image of the prostate according to the first aspect.
Compared with the prior art, the invention combines an image segmentation model to obtain labels of the prostate, the tumor and some feature points, addressing the problems of poor image quality, lack of weakly supervised labels and difficulty in accurate registration, thereby achieving more accurate multi-modal registration.
Drawings
FIG. 1 is an overall workflow diagram of an MRI-US multi-modality registration scheme in an embodiment of the present invention.
FIG. 2 is a schematic diagram of the 3D MRI-US multi-modality registration framework in an embodiment of the present invention.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without one or more of these details. In other instances, well-known features have not been described in detail in order to avoid obscuring the invention.
The overall workflow of the multi-modal registration scheme is shown in FIG. 1. First, preoperative prostate magnetic resonance and ultrasound images are acquired and reconstructed to obtain the 3D MRI and the 3D US, and three-dimensional image registration is performed between the 3D MRI and the 3D US. When real-time two-dimensional ultrasound imaging is performed intraoperatively to guide puncture of the prostate tumor, the transformation matrix mapping the 3D US to the 2D US is obtained through a spatial tracking device on the ultrasound probe, and this transformation matrix is applied to the registered 3D MRI to obtain the corresponding registered 2D MRI image, as sketched below.
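As an illustration of this last step only, the following is a minimal sketch (not the patented implementation) of resampling the registered 3D MRI onto the tracked 2D US plane. The function name and the matrix T_pix2vox are hypothetical, as is the assumption that the probe-tracking transform and the 3D registration result have already been composed into a single pixel-to-voxel matrix.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample_mri_to_us_plane(mri_volume, T_pix2vox, plane_shape):
    """Sample the registered 3D MRI on the intraoperative 2D US plane.

    mri_volume : (D, H, W) registered 3D MRI volume.
    T_pix2vox  : hypothetical 4x4 homogeneous matrix mapping 2D US pixel
                 coordinates (u, v, 0, 1) to MRI voxel coordinates, assumed to
                 combine the probe-tracking transform with the 3D registration.
    plane_shape: (rows, cols) of the real-time US frame.
    """
    rows, cols = plane_shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    pix = np.stack([u.ravel(), v.ravel(),
                    np.zeros(u.size), np.ones(u.size)])   # homogeneous US pixels
    vox = T_pix2vox @ pix                                  # corresponding MRI voxels (x, y, z, 1)
    coords = vox[[2, 1, 0], :]                             # map_coordinates expects (z, y, x)
    mri_slice = map_coordinates(mri_volume, coords, order=1, mode="nearest")
    return mri_slice.reshape(rows, cols)
```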
As shown in FIG. 2, the three-dimensional registration framework for the 3D MRI and the 3D US consists of two parts. The first part is the segmentation of the prostate, the tumor and other feature points in the 3D MRI and the 3D US; the basic method is a fully convolutional neural network (FCN) segmentation model. The second part is the three-dimensional image registration of the 3D MRI and the 3D US, which uses the results of the first-part segmentation models as the input of the three-dimensional registration network.
The implementation flow of the registration framework is described as follows; a minimal sketch of the full three-stage pipeline is given after the three steps.
Segmentation: the 3D MRI and the 3D US are input into their respective segmentation networks to obtain probability maps of the respective prostate, tumor and feature point labels.
Affine transformation: the probability maps of the MRI and US prostate, tumor and feature point labels are input into an affine transformation network, which outputs an affine transformation matrix and yields the affine-transformed probability maps of the MRI prostate, tumor and feature point labels. The affine transformation loss function is the Dice loss function.
Elastic transformation: the US label probability maps and the affine-transformed MRI prostate, tumor and feature point label probability maps are input into an elastic transformation network, which outputs a dense deformation field and yields the elastically transformed probability maps of the MRI prostate, tumor and feature point labels. The elastic transformation loss combines the Dice loss and a deformation field regularization loss.
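Below is a minimal PyTorch-style sketch of this three-stage flow, assuming sigmoid label outputs. The module names (seg_net_mri, seg_net_us, affine_net, elastic_net), the channel-concatenation of the two label sets as network input, and the convention that the dense deformation field is expressed in the normalized grid coordinates of grid_sample are illustrative assumptions, not details fixed by this scheme.

```python
import torch
import torch.nn.functional as F

def register_3d(mri_vol, us_vol, seg_net_mri, seg_net_us, affine_net, elastic_net):
    """Sketch of the segmentation -> affine -> elastic registration pipeline."""
    # Stage 1: segmentation, yielding label probability maps of shape (N, C, D, H, W)
    mri_labels = torch.sigmoid(seg_net_mri(mri_vol))
    us_labels = torch.sigmoid(seg_net_us(us_vol))

    # Stage 2: affine initialization; the network predicts 12 affine parameters
    theta = affine_net(torch.cat([mri_labels, us_labels], dim=1)).view(-1, 3, 4)
    aff_grid = F.affine_grid(theta, mri_labels.shape, align_corners=False)
    mri_labels_aff = F.grid_sample(mri_labels, aff_grid, align_corners=False)

    # Stage 3: elastic registration; the network predicts a dense deformation field
    ddf = elastic_net(torch.cat([mri_labels_aff, us_labels], dim=1))  # (N, 3, D, H, W)
    eye = torch.eye(3, 4, device=ddf.device).unsqueeze(0).expand(ddf.shape[0], -1, -1)
    id_grid = F.affine_grid(eye, mri_labels.shape, align_corners=False)
    warp_grid = id_grid + ddf.permute(0, 2, 3, 4, 1)       # displacements in grid coords
    mri_labels_reg = F.grid_sample(mri_labels_aff, warp_grid, align_corners=False)
    return theta, ddf, mri_labels_reg
```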
FCNs have shown good performance in automatic medical image segmentation; their pixel-level prediction allows an FCN to produce an end-to-end prediction for the whole image in one forward pass. Two FCNs were trained for MRI and US segmentation, respectively. The prostate, tumor and feature points manually labeled on the MRI and US images served as the learning targets of the two FCNs. The binary cross-entropy loss and the Dice loss are combined into one hybrid loss function for deeply supervised training, as in the sketch below.
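The hybrid segmentation loss can be sketched as follows. The equal 0.5/0.5 weighting of the two terms, the sigmoid/per-channel formulation, and the epsilon smoothing are assumptions for illustration, since the exact weighting used for deep supervision is not specified here.

```python
import torch
import torch.nn.functional as F

def hybrid_seg_loss(logits, target, bce_weight=0.5, eps=1e-6):
    """Binary cross-entropy + Dice hybrid loss for the segmentation FCNs.

    logits : (N, C, D, H, W) raw network outputs, one channel per label
             (prostate, tumor, feature points).
    target : binary ground-truth masks of the same shape (float tensor).
    """
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    dims = (0, 2, 3, 4)                        # sum over batch and spatial axes
    inter = (probs * target).sum(dim=dims)
    dice = 1.0 - (2.0 * inter + eps) / (probs.sum(dim=dims) + target.sum(dim=dims) + eps)
    return bce_weight * bce + (1.0 - bce_weight) * dice.mean()
```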
The MR and US segmentation results are designated as the source and target labels, respectively, because the goal is to use the deformed MRI that matches the intraoperative US as image guidance during surgery. According to related studies, providing a rigid or affine registration result as initialization before subsequent non-rigid registration helps to achieve more accurate and stable registration performance. For this purpose, a 3D U-Net that takes the 3D labels as input is designed; it predicts 12 affine transformation parameters used to deform the original MR labels as an initialization step for non-rigid registration. The loss function of the affine transformation network is the Dice function, which measures how well the labels of the deformed MRI prostate, tumor and other feature points coincide spatially with the corresponding labels of the target US.
After affine registration, the deformed MRI segmentation labels and the US segmentation labels are input into a deformable registration network, which for the present scheme is similar in structure to the 3D U-Net. During network training, the Dice loss is used as part of the label similarity cost function, encouraging the network to create a large amount of overlap between the deformed MR labels and the fixed US labels:

L_{Dice} = 1 - \frac{2\sum_{p \in V} w_p f_p}{\sum_{p \in V} w_p + \sum_{p \in V} f_p}

where w_p and f_p represent the label probability values at pixel p (of the deformed MRI label and the fixed US label, respectively), and V represents the entire 3D image.
In addition, the L2 norm of the deformation field gradient is added to the loss function as a regularization term to smooth the registration deformation field:

L_{Smooth} = \sum_{p \in V} \left\lVert \nabla \phi(p) \right\rVert_2^2

where L_{Smooth} is the regularization constraint term of the deformation field, namely the L_2 norm of the spatial gradient of the deformation field, and \phi is the registration deformation field. The total loss function is then

L_{Total} = \sum_{k=0}^{n} \lambda_k L_{Dice}^{(k)} + L_{Smooth}

where L_{Total} is the total loss function, L_{Dice}^{(k)} is the Dice loss function for the k-th label, k = 0, 1, ..., n, and \lambda_k is the loss function weight corresponding to the k-th label. A minimal implementation sketch of these loss terms is given below.
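The following sketch implements the three loss terms as stated above; the forward finite-difference approximation of the spatial gradient and the small epsilon for numerical stability are implementation assumptions.

```python
import torch

def dice_loss(w, f, eps=1e-6):
    """L_Dice = 1 - 2*sum(w_p * f_p) / (sum(w_p) + sum(f_p)) over the 3D image V.
    w: deformed MRI label probability map; f: fixed US label probability map."""
    inter = (w * f).sum()
    return 1.0 - (2.0 * inter + eps) / (w.sum() + f.sum() + eps)

def smoothness_loss(ddf):
    """L_Smooth: squared L2 norm of the spatial gradient of the dense deformation
    field, approximated with forward finite differences. ddf: (N, 3, D, H, W)."""
    dz = ddf[:, :, 1:, :, :] - ddf[:, :, :-1, :, :]
    dy = ddf[:, :, :, 1:, :] - ddf[:, :, :, :-1, :]
    dx = ddf[:, :, :, :, 1:] - ddf[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()

def total_loss(warped_labels, fixed_labels, ddf, label_weights):
    """L_Total = sum_k lambda_k * L_Dice^(k) + L_Smooth.
    warped_labels / fixed_labels: per-label probability maps for k = 0..n;
    label_weights: the lambda_k loss weights (their values are assumptions)."""
    loss = smoothness_loss(ddf)
    for w_k, f_k, lam_k in zip(warped_labels, fixed_labels, label_weights):
        loss = loss + lam_k * dice_loss(w_k, f_k)
    return loss
```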
In summary, this scheme provides a novel deep-learning multi-modal registration scheme that combines an image segmentation model to obtain labels of the prostate, the tumor and some feature points, addressing the problems of poor image quality, lack of weakly supervised labels and difficulty in accurate registration, thereby achieving more accurate multi-modal registration.
The multi-modal registration scheme provided by the invention takes MRI and US images of the prostate as an example to describe a specific registration flow and method, but the targets and image modalities to which the invention is applicable are not limited to this specific example. The invention is not limited to registration of the prostate; it is also suitable for registration of other organs or tumors such as the liver, the heart and brain functional areas. Likewise, the multi-modal registration is not limited to the two modalities of 3D MRI and 3D US, and is also suitable for other modality combinations such as 3D MRI with 3D CT, and 3D CT with 3D US.
As described above, although the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limiting the invention itself. Various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A multi-modality registration method based on prostate MR image segmentation results, comprising the steps of:
S1, segmentation stage: inputting the MRI image and the US image into their respective segmentation networks to obtain an MRI prostate image, a US prostate image, and probability maps of the respective tumor and feature point labels;
S2, affine transformation stage: inputting the MRI prostate image, the US prostate image and the probability maps of the respective tumor and feature point labels into an affine transformation network, outputting an affine transformation matrix, and obtaining the affine-transformed MRI prostate image and the probability maps of its tumor and feature point labels;
S3, elastic transformation stage: inputting the US prostate image and the affine-transformed MRI prostate image, tumor and feature point label probability maps into an elastic transformation network, outputting a dense deformation field, and obtaining the elastically transformed MRI prostate image and the probability maps of its tumor and feature point labels.
2. The multi-modality registration method based on the segmentation result of the MR image of the prostate according to claim 1, wherein step S2 further comprises:
after affine registration, inputting the deformed MRI segmentation labels and the US segmentation labels into a deformable registration network;
during network training, using the Dice loss as part of the label similarity cost function, encouraging the deformable registration network to create overlap between the deformed MR labels and the fixed US labels.
3. The multi-modal registration method based on the segmentation result of the MR image of the prostate according to claim 2, wherein the affine transformation loss function in step S2 is the Dice loss function L_{Dice}:

L_{Dice} = 1 - \frac{2\sum_{p \in V} w_p f_p}{\sum_{p \in V} w_p + \sum_{p \in V} f_p}

where w_p and f_p represent the label probability values for pixel p, and V represents the entire 3D image.
4. The multi-modal registration method based on the prostate MR image segmentation result according to claim 3, characterized in that, on the basis of the Dice loss function L_{Dice}, the L2 norm of the deformation field gradient is added as a regularization term to smooth the registration deformation field:

L_{Smooth} = \sum_{p \in V} \left\lVert \nabla \phi(p) \right\rVert_2^2

where L_{Smooth} is the regularization constraint term of the deformation field, namely the L_2 norm of the spatial gradient of the deformation field, and \phi is the registration deformation field.
5. The multi-modal registration method based on the segmentation result of the prostate MR image according to claim 4, wherein the total loss function L_{Total} is obtained from the Dice loss function L_{Dice} and the regularization constraint term L_{Smooth} of the deformation field:

L_{Total} = \sum_{k=0}^{n} \lambda_k L_{Dice}^{(k)} + L_{Smooth}

where L_{Total} is the total loss function, L_{Dice}^{(k)} is the Dice loss function for the k-th label, k = 0, 1, ..., n, and \lambda_k is the loss function weight corresponding to the k-th label.
6. A multi-modality registration device based on a segmentation result of an MR image of a prostate, comprising:
a segmentation module, used for inputting the MRI image and the US image into their respective segmentation networks to obtain an MRI prostate image, a US prostate image, and probability maps of the respective tumor and feature point labels;
an affine transformation module, used for inputting the MRI prostate image, the US prostate image and the probability maps of the respective tumor and feature point labels into an affine transformation network, outputting an affine transformation matrix, and obtaining the affine-transformed MRI prostate image and the probability maps of its tumor and feature point labels;
an elastic transformation module, used for inputting the US prostate image and the affine-transformed MRI prostate image, tumor and feature point label probability maps into an elastic transformation network, outputting a dense deformation field, and obtaining the elastically transformed MRI prostate image and the probability maps of its tumor and feature point labels.
7. An electronic device, the device comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements a multimodal registration method based on segmentation results of a prostate MR image as defined in any one of claims 1 to 5.
8. A computer-readable storage medium, characterized in that at least one executable instruction is stored in the storage medium, which executable instructions, when run on an electronic device, cause the electronic device to perform the multi-modality registration method based on the segmentation result of the MR image of the prostate as claimed in any one of claims 1 to 5.
CN202311851862.0A 2023-12-29 2023-12-29 Multi-mode registration method, device, equipment and storage medium based on prostate MR image segmentation result Pending CN117994303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311851862.0A CN117994303A (en) 2023-12-29 2023-12-29 Multi-mode registration method, device, equipment and storage medium based on prostate MR image segmentation result

Publications (1)

Publication Number Publication Date
CN117994303A 2024-05-07

Family

ID=90900110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311851862.0A Pending CN117994303A (en) 2023-12-29 2023-12-29 Multi-mode registration method, device, equipment and storage medium based on prostate MR image segmentation result

Country Status (1)

Country Link
CN (1) CN117994303A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination