CN109785405A - Method for generating quasi-CT images using multivariate regression of multiple sets of magnetic resonance images - Google Patents

Method for generating quasi-CT images using multivariate regression of multiple sets of magnetic resonance images

Info

Publication number
CN109785405A
CN109785405A
Authority
CN
China
Prior art keywords
magnetic resonance, image, images, same, region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910087064.2A
Other languages
Chinese (zh)
Inventor
子威·林
塞缪尔·陈·吕
志斌·黄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201910087064.2A priority Critical patent/CN109785405A/en
Publication of CN109785405A publication Critical patent/CN109785405A/en
Withdrawn legal-status Critical Current

Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

A method of generating a quasi-CT image using multivariate regression of multiple sets of magnetic resonance images: multiple sets of magnetic resonance images and a corresponding set of CT images are acquired from several trainers, the magnetic resonance images of different sequence groups of each trainer being acquired with different magnetic resonance parameters and the magnetic resonance images belonging to the same sequence group across different trainers being acquired with identical magnetic resonance parameters. Each image in every set of magnetic resonance images of the same location of each trainer is aligned with the corresponding CT image. A mapping function is generated that maps the intensity values of the multiple sets of magnetic resonance images of all trainers to the CT value of the same voxel, and a quasi-CT image of the target person is then generated from the target person's multiple sets of magnetic resonance images. The present invention does not require alignment of the trainer images with the target person's images, thereby avoiding the errors introduced by the alignment steps of the atlas method. The invention can be used for simulation calculations and medical imaging in magnetic-resonance-guided radiotherapy treatment planning.

Description

Method for generating quasi-CT images using multivariate regression of multiple sets of magnetic resonance images
Technical Field
The present invention relates to the generation of CT images, and in particular to a method for generating quasi-CT images using multivariate regression of multiple sets of magnetic resonance images.
Background
Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are the two major imaging modalities for obtaining three-dimensional images. Computed tomography is the modality traditionally used to create radiation treatment plans: a CT image accurately describes the geometry of the target, and its CT values can be converted directly into electron densities for calculating the radiation dose distribution in the target. However, CT images have poor soft-tissue contrast, and CT also subjects the person being scanned to an additional radiation dose. Magnetic resonance imaging, with its excellent soft-tissue contrast, likewise has a wide range of uses; it involves no ionizing radiation and can provide functional information such as the metabolism of the target person.
Currently, magnetic resonance images are used mainly to supplement CT images in order to obtain more accurate anatomical contours and tumor targeting, which requires aligning the magnetic resonance image of the target person with the corresponding CT image. Because the magnetic resonance image and the CT image are usually acquired on different machines, they are not perfectly aligned when superimposed; this is an important source of tumor-targeting localization error. Magnetic-resonance-guided radiation therapy planning uses magnetic resonance images without the need for CT images, so tumor targeting can be more accurate. Since magnetic resonance intensity values cannot be converted directly into electron densities, a method is needed to convert the magnetic resonance image accurately into an image of corresponding electron density values, i.e. a quasi-CT image, also known as a pseudo-CT (pCT) image or a derived CT image.
The journal article by Edmund and Nyholm, "A Review of Substitute CT Generation for MRI-only Radiation Therapy," Radiat Oncol 12:28 (2017), doi:10.1186/s13014-016-0747-y, reviews the various methods employed to create a pCT image, including atlas-based methods and voxel-based methods. As described in the journal article by Dowling et al., "An Atlas-based Electron Density Mapping Method for Magnetic Resonance Imaging (MRI)-alone Treatment Planning and Adaptive MRI-based Prostate Radiation Therapy," Int J Radiat Oncol Biol Phys 83(1) (2012), doi:10.1016/j.ijrobp.2011.11.056, the atlas method uses pre-existing atlas images as references to help generate a quasi-CT image: the atlas magnetic resonance image and the atlas CT image serve as references for generating the quasi-CT image from the new target magnetic resonance image. The atlas magnetic resonance image must be aligned with the magnetic resonance image of the target person, and the same alignment transformation is applied when fusing the atlas CT images at the same location into the quasi-CT image of the target person. However, both the alignment of the atlas images to one another and the alignment from the atlas magnetic resonance image to the target magnetic resonance image introduce errors. The voxel method instead creates a quasi-CT image using a conversion from magnetic resonance image intensity values to CT values, without aligning the magnetic resonance images of the target person and the trainers.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method for generating a quasi-CT image by multivariate regression of multiple sets of magnetic resonance images.
The technical scheme adopted by the invention is as follows: a method of generating quasi-CT images using multivariate regression of multiple sets of magnetic resonance images, comprising the steps of:
1) acquiring multiple sets of magnetic resonance images (MRI) and a corresponding set of CT images from several trainers, wherein the magnetic resonance images of different sequence groups of each trainer are acquired using different magnetic resonance parameters, and the magnetic resonance images belonging to the same sequence group across the multiple sets of magnetic resonance images of different trainers are acquired using the same magnetic resonance parameters;
2) aligning each image of each set of magnetic resonance images of the same location of each trainer with a corresponding CT image;
3) generating a mapping function that maps the intensity values of the multiple sets of magnetic resonance images of all trainers to the CT value of the same voxel;
4) generating a quasi-CT image of the target person from the multiple sets of magnetic resonance images of the target person.
When the magnetic resonance images of different trainers in the same sequence group are acquired in step 1) with different scanners and/or under different imaging conditions, the magnetic resonance images are normalized so that the average intensity value of the magnetic resonance images of the same sequence group is the same across all trainers.
Step 3) comprises the following steps:
(3.1) delineating the body contour in each magnetic resonance image and each CT image;
(3.2) creating a region segmentation mask for the magnetic resonance images of the first sequence group;
(3.3) using the obtained region segmentation mask to perform region segmentation on the magnetic resonance images and CT images of the sequence groups other than the first sequence group at the same location;
(3.4) extracting the respective magnetic resonance image intensity values and CT values from the voxels of each region, excluding voxels in the exclusion regions;
(3.5) determining, for each region, a mapping function in the form of a multivariate polynomial of high degree, using a multivariate regression method based on the multiple sets of magnetic resonance image intensity values and the average CT value of the voxels that share the same combination of intensity values:

CT(S1, …, Sm) = Σ a(i1, …, im) · S1^i1 · … · Sm^im,

where the sum runs over all non-negative exponents i1, …, im with i1 + … + im ≤ N; N is the maximum degree of the polynomial; CT(S1, …, Sm) is the mapping function (the dependent variable); S1 is the intensity value of the first set of magnetic resonance images and Sm is the intensity value of the m-th set (the independent variables of the mapping function); i1 is the exponent of S1 and im is the exponent of Sm; and a(i1, …, im) are the fitting coefficients.
The body contour delineation in step (3.1) separates the body structures in the image from the surrounding air.
In step (3.2), when the magnetic resonance images of the first sequence group contain only bone or only soft tissue, the whole image is treated as a single region; when the magnetic resonance images of the first sequence group contain both bone and soft tissue, each image is divided into a bone region, a soft-tissue region and a mixed-tissue region, where the mixed-tissue region is the indeterminate part between bone and soft tissue, and the boundaries of these regions form the region segmentation mask.
Also in step (3.2), regions in which the anatomical structures of the multiple sets of magnetic resonance images and the CT images at the same location are misaligned, due to day-to-day organ motion or imperfect image alignment, as well as regions outside the body contour, are set as exclusion regions.
Step 4) comprises the following steps:
(4.1) acquiring each set of magnetic resonance images of the target person using the same magnetic resonance parameters as the corresponding sequence group of the trainers;
(4.2) aligning the multiple sets of magnetic resonance images of the same location of the target person with one another;
(4.3) when the magnetic resonance images of the target person are acquired using a different scanner and/or different imaging conditions from those of the trainers, normalizing the magnetic resonance images of the target person so that the average intensity value of the magnetic resonance images of the same sequence group at the same location is the same for the target person and all trainers;
(4.4) performing region segmentation on the magnetic resonance images of the first sequence group of the target person and creating a region segmentation mask in the manner of steps (3.2) and (3.3) of step 3), and obtaining the regions of the magnetic resonance images of the sequence groups other than the first sequence group at the same location of the target person, wherein no exclusion regions are defined for the images of the target person;
(4.5) extracting the multiple sets of magnetic resonance image intensity values of the voxels in the magnetic resonance images of the target person;
(4.6) passing the multiple sets of magnetic resonance image intensity values of each voxel of the target person through the mapping function of the corresponding region obtained in step (3.5) of step 3) to obtain the CT value of the same voxel of the target person;
(4.7) forming the quasi-CT image of the target person from the set of CT values of all voxels of the target person.
The disclosed method for generating a quasi-CT image using multivariate regression of multiple sets of magnetic resonance images uses a multivariate function to map the multiple sets of magnetic resonance images of the target person into a quasi-CT image. The accuracy of this method is similar to the best results obtainable with the atlas method, but it is more convenient and faster. The invention uses a voxel method to convert the magnetic resonance images of the target person directly into a quasi-CT image; this does not require alignment of the trainer and target-person images, thus avoiding the errors introduced by the alignment process of the atlas method. The invention can be used for simulation calculations and medical imaging in magnetic-resonance-guided radiotherapy planning.
Drawings
FIG. 1 is an exemplary flow chart of a method for determining a regression model using multiple sets of magnetic resonance images and CT images of the trainers;
FIG. 2 is an exemplary flow chart for generating a quasi-CT image using multiple sets of magnetic resonance images of the target person.
Detailed Description
The method for generating quasi-CT images by multivariate regression of multiple sets of magnetic resonance images according to the invention is described in detail below with reference to the embodiments and the accompanying drawings.
In describing the illustrative examples, the drawings and specific terminology are used for the sake of clarity only and are not intended to be limiting.
The invention discloses a method for generating a quasi-CT image by multivariate regression of multiple groups of magnetic resonance images, which comprises the following steps:
1) acquiring multiple sets of magnetic resonance images (MRI) and a corresponding set of CT images from several trainers, wherein the magnetic resonance images of different sequence groups of each trainer are acquired using different magnetic resonance parameters, and the magnetic resonance images belonging to the same sequence group across the multiple sets of magnetic resonance images of different trainers are acquired using the same magnetic resonance parameters.
When the magnetic resonance images of different trainers in the same sequence group are acquired with different scanners and/or under different imaging conditions, the magnetic resonance images are normalized so that the average intensity value of the magnetic resonance images of the same sequence group is the same across all trainers. Normalization ensures that the acquired magnetic resonance image intensity values are self-consistent across trainers. A simple way to decide whether normalization is needed is to compare histograms of the magnetic resonance image intensity values of voxels in the same region for different trainers; normalization is required if the histograms of different trainers have similar shapes but very different peak and mean values. In an exemplary normalization procedure, the mean intensity value of the magnetic resonance images of each trainer in the same sequence group is computed, and a correction factor is then determined for each trainer and multiplied by the intensity values of the voxels in that trainer's magnetic resonance images of that sequence group, so that the mean intensity value of the magnetic resonance images of that sequence group becomes the same for all trainers.
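For illustration only, the exemplary normalization described above could be sketched as follows in Python/NumPy; the function name, the body-mask input and the choice of the average of the per-trainer means as the common reference value are assumptions of this sketch, not requirements of the method.

    import numpy as np

    def normalize_sequence_group(images, body_masks, reference_mean=None):
        """Scale each trainer's MR volume of one sequence group so that the mean
        intensity inside the body contour is the same for all trainers."""
        means = [img[mask].mean() for img, mask in zip(images, body_masks)]
        # Use the average of the per-trainer means as the common target value
        # unless an explicit reference mean is supplied.
        target = reference_mean if reference_mean is not None else float(np.mean(means))
        corrected = []
        for img, mean in zip(images, means):
            factor = target / mean      # per-trainer correction factor
            corrected.append(img * factor)
        return corrected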
2) Each image of each set of magnetic resonance images of the same location of each trainer is aligned with the corresponding CT image. Because the magnetic resonance images and the CT image are usually acquired on different machines, they are not perfectly aligned when superimposed. The magnetic resonance images of each trainer are therefore aligned with the CT image of the corresponding location using an image registration technique, so that each voxel of the trainer's magnetic resonance images corresponds to the same voxel of the CT image as closely as possible.
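One possible way to perform this MR-to-CT alignment is rigid registration driven by mutual information, sketched below with the SimpleITK library; the metric settings, the rigid (Euler) transform and the file-based interface are illustrative assumptions, since the invention does not prescribe a particular registration algorithm.

    import SimpleITK as sitk

    def register_mr_to_ct(ct_path, mr_path):
        """Rigidly align one trainer's MR volume to the corresponding CT volume."""
        fixed = sitk.Cast(sitk.ReadImage(ct_path), sitk.sitkFloat32)   # CT: fixed image
        moving = sitk.Cast(sitk.ReadImage(mr_path), sitk.sitkFloat32)  # MR: moving image

        initial = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(initial, inPlace=False)

        transform = reg.Execute(fixed, moving)
        # Resample the MR volume onto the CT grid so that voxels correspond one-to-one.
        aligned_mr = sitk.Resample(moving, fixed, transform,
                                   sitk.sitkLinear, 0.0, moving.GetPixelID())
        return aligned_mr, transform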
3) A mapping function is generated that maps the intensity values of the multiple sets of magnetic resonance images of all trainers to the CT value of the same voxel; this comprises the following steps:
(3.1) The body contour is delineated in each magnetic resonance image and each CT image by separating the body structures in the image from the surrounding air.
(3.2) A region segmentation mask is created for the magnetic resonance images of the first sequence group. When the magnetic resonance images of the first sequence group contain only bone or only soft tissue, the whole image is treated as a single region; when they contain both bone and soft tissue, each image is divided into a bone region, a soft-tissue region and a mixed-tissue region, where the mixed-tissue region is the indeterminate part between bone and soft tissue, and the boundaries of these regions form the region segmentation mask. In addition, regions in which the anatomical structures of the multiple sets of magnetic resonance images and the CT images at the same location are misaligned, due to day-to-day organ motion or imperfect image alignment, as well as regions outside the body contour, are set as exclusion regions.
(3.3) The obtained region segmentation mask is used to perform region segmentation on the magnetic resonance images and CT images of the sequence groups other than the first sequence group at the same location.
(3.4) The respective magnetic resonance image intensity values and CT values are extracted from the voxels of each region, excluding voxels in the exclusion regions.
(3.5) A mapping function in the form of a multivariate polynomial of high degree is determined for each region, using a multivariate regression method based on the multiple sets of magnetic resonance image intensity values and the average CT value of the voxels that share the same combination of intensity values:

CT(S1, …, Sm) = Σ a(i1, …, im) · S1^i1 · … · Sm^im,

where the sum runs over all non-negative exponents i1, …, im with i1 + … + im ≤ N; N is the maximum degree of the polynomial; CT(S1, …, Sm) is the mapping function (the dependent variable); S1 is the intensity value of the first set of magnetic resonance images and Sm is the intensity value of the m-th set (the independent variables of the mapping function); i1 is the exponent of S1 and im is the exponent of Sm; and a(i1, …, im) are the fitting coefficients.
4) A quasi-CT image of the target person is generated from the multiple sets of magnetic resonance images of the target person; this comprises the following steps:
(4.1) Each set of magnetic resonance images of the target person is acquired using the same magnetic resonance parameters as the corresponding sequence group of the trainers. In a preferred example, the magnetic resonance images of the target person and of the trainers are generated by the same magnetic resonance scanner; in other examples, they may be generated by different magnetic resonance scanners.
(4.2) The multiple sets of magnetic resonance images of the same location of the target person are aligned with one another.
(4.3) When the magnetic resonance images of the target person are acquired using a different scanner and/or different imaging conditions from those of the trainers, the magnetic resonance images of the target person are normalized so that the average intensity value of the magnetic resonance images of the same sequence group at the same location is the same for the target person and all trainers.
(4.4) Region segmentation is performed on the magnetic resonance images of the first sequence group of the target person and a region segmentation mask is created in the manner of steps (3.2) and (3.3) of step 3), and the regions of the magnetic resonance images of the sequence groups other than the first sequence group at the same location of the target person are obtained; no exclusion regions are defined for the images of the target person.
(4.5) The multiple sets of magnetic resonance image intensity values of the voxels in the magnetic resonance images of the target person are extracted.
(4.6) The multiple sets of magnetic resonance image intensity values of each voxel of the target person are passed through the mapping function of the corresponding region obtained in step (3.5) of step 3) to obtain the CT value of the same voxel of the target person.
(4.7) The quasi-CT image of the target person is formed from the set of CT values of all voxels of the target person.
A specific example is given below:
FIG. 1 is an exemplary flow chart of a method for determining a regression model using sets of magnetic resonance images and CT images of a trainer.
At step 110, the training data consist of two sets of magnetic resonance images (referred to as MRI1 and MRI2) acquired from multiple trainers using the same magnetic resonance scanner, and CT images of the corresponding sites acquired from these trainers using the same CT scanner. The magnetic resonance images belonging to the same sequence group across the multiple sets of magnetic resonance images of different trainers are acquired using the same magnetic resonance parameters and imaging conditions, while the magnetic resonance images of different sequence groups of each trainer are acquired using different magnetic resonance parameters, for example sequence parameters with different contrast properties (T1-weighted, T2-weighted). The data of each trainer therefore comprise two sets of magnetic resonance images and a corresponding set of CT images serving as ground truth. Since the magnetic resonance images of different trainers in the same sequence group are acquired using the same magnetic resonance parameters and imaging conditions, there is no need to normalize the trainers' magnetic resonance images.
In step 120, because the magnetic resonance images and the CT image are usually acquired on different machines, they do not align perfectly when overlaid. The magnetic resonance images of each trainer are therefore aligned with the CT image of the corresponding location using an image registration technique, so that each voxel of the trainer's magnetic resonance images corresponds to the same voxel of the CT image as closely as possible.
In step 130, the body contour is first delineated in each magnetic resonance image and each CT image by separating the body structures from the surrounding air, and region segmentation is then performed for each image. Region segmentation is performed on the magnetic resonance images of the first sequence group. When the magnetic resonance images of the first sequence group contain only bone or only soft tissue, the whole image is treated as a single region; when they contain both bone and soft tissue, each image is divided into three regions: a bone region, a soft-tissue region and a mixed-tissue region. The bone region contains only definite skeletal anatomy, including cortical and spongy (cancellous) bone; the soft-tissue region contains only definite non-skeletal anatomy; the remaining, indeterminate area between the bone region and the soft-tissue region is the mixed-tissue region. In addition, regions in which the anatomical structures of the two sets of magnetic resonance images and the CT image at the same location are misaligned, due to day-to-day organ motion or imperfect image alignment, as well as regions outside the body contour, are set as exclusion regions. Region segmentation may be done manually or using computer-generated trainer contours. The boundaries of the regions in the magnetic resonance images of the first sequence group form the region segmentation mask, which is then used to perform region segmentation on the magnetic resonance images and CT images of the other sequence groups at the same location.
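As a purely hypothetical illustration of step 130, a threshold-based version of the region segmentation mask could look like the sketch below; the threshold parameters, label values and the assumption that bone appears dark on the first sequence group are not taken from the patent, which also allows manual or computer-generated contours.

    import numpy as np

    # Illustrative region labels for the segmentation mask.
    EXCLUDED, SOFT_TISSUE, BONE, MIXED = 0, 1, 2, 3

    def build_region_mask(mri1, body_mask, bone_thr, soft_thr):
        """Label each voxel of the first sequence group as bone, soft tissue or
        mixed tissue; voxels outside the body contour are marked as excluded.

        On many conventional MR sequences cortical bone appears dark, so voxels
        below `bone_thr` are labelled bone, voxels above `soft_thr` soft tissue,
        and the indeterminate band in between mixed tissue."""
        mask = np.full(mri1.shape, EXCLUDED, dtype=np.uint8)
        mask[body_mask & (mri1 <= bone_thr)] = BONE
        mask[body_mask & (mri1 >= soft_thr)] = SOFT_TISSUE
        mask[body_mask & (mri1 > bone_thr) & (mri1 < soft_thr)] = MIXED
        return mask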
At step 140, the magnetic resonance image intensity values and CT values of the voxels are extracted from each image of the training data. The data for each voxel form a triplet: the intensity value from the MRI1 set, the intensity value from the MRI2 set and the corresponding CT value. The data extracted from a given region are pooled across all trainers; voxels in the exclusion regions are excluded from the extraction in order to ensure a true correlation between the magnetic resonance image intensity values and the CT values.
Step 150 performs a multivariate regression for each region, fitting the corresponding average CT value (i.e., the average CT value of the training-data voxels of the same region that fall into each interval of the two sets of magnetic resonance image intensity values) as a function of the two sets of intensity values. The multivariate regression in this example uses the following mapping function, a bivariate polynomial of high degree:
CT(S1, S2) = Σ a(i1, i2) · S1^i1 · S2^i2,

where the sum runs over all non-negative exponents i1, i2 with i1 + i2 ≤ N, the maximum polynomial degree N is 30, CT(S1, S2) is the mapping function (the dependent variable), S1 and S2 are the intensity values of the MRI1 and MRI2 sets (the independent variables), and a(i1, i2) are the fitting coefficients.
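A compact sketch of such a bivariate polynomial fit is given below using scikit-learn; the binning scheme, the default degree (the worked example above uses a much higher degree) and the function names are assumptions of this sketch rather than part of the patent.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    def fit_region_mapping(s1, s2, ct, degree=3, n_bins=64):
        """Fit CT(S1, S2) for one region as a bivariate polynomial.

        s1, s2, ct -- 1-D arrays of MRI1 intensity, MRI2 intensity and CT value
        for all non-excluded voxels of this region, pooled over the trainers.
        The (S1, S2) plane is binned and the CT values of the voxels falling
        into each occupied bin are averaged, as described for step 150."""
        e1 = np.linspace(s1.min(), s1.max(), n_bins + 1)
        e2 = np.linspace(s2.min(), s2.max(), n_bins + 1)
        ct_sum, _, _ = np.histogram2d(s1, s2, bins=[e1, e2], weights=ct)
        counts, _, _ = np.histogram2d(s1, s2, bins=[e1, e2])
        occupied = counts > 0
        mean_ct = ct_sum[occupied] / counts[occupied]

        # Bin-centre coordinates serve as the regression inputs.
        c1 = 0.5 * (e1[:-1] + e1[1:])
        c2 = 0.5 * (e2[:-1] + e2[1:])
        g1, g2 = np.meshgrid(c1, c2, indexing="ij")
        X = np.column_stack([g1[occupied], g2[occupied]])

        poly = PolynomialFeatures(degree=degree)
        model = LinearRegression().fit(poly.fit_transform(X), mean_ct)
        return poly, model

    def predict_ct(poly, model, s1, s2):
        """Evaluate the fitted mapping function at the given intensity pairs."""
        X = np.column_stack([np.ravel(s1), np.ravel(s2)])
        return model.predict(poly.transform(X)).reshape(np.shape(s1))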
Fig. 2 is an exemplary flow chart for generating a quasi-CT image using multiple sets of magnetic resonance images of a subject.
At step 210, two sets of magnetic resonance images (MRI1 and MRI2) of the target person are acquired using the same model of magnetic resonance scanner as the trainers and the same magnetic resonance parameters and imaging conditions as the corresponding sequence groups of the trainers. Since the magnetic resonance images of the target person are acquired with the same model of scanner and the same imaging conditions as the trainers, there is no need to normalize the magnetic resonance images of the target person.
In step 220, the two sets of magnetic resonance images of the same location of the target person are aligned with each other.
In step 230, the region segmentation method described in step 130 is used to perform region segmentation on the magnetic resonance images of the first sequence group of the target person and to create a region segmentation mask, and the regions of the magnetic resonance images of the second sequence group at the same location of the target person are obtained; no exclusion regions are defined for the images of the target person.
At step 240, the magnetic resonance image intensity values of each voxel of the target person are extracted, namely the intensity value from the MRI1 set and the intensity value from the MRI2 set.
In step 250, the multivariate function for each region determined in step 150 is applied to the corresponding region of the target person: the two magnetic resonance image intensity values of each voxel of the target person are used as the independent variables, and the bivariate polynomial of the region containing that voxel, as determined in step 150, yields the dependent variable, which is the CT value of that voxel.
In step 260, the quasi-CT values of all voxels of the target person together constitute the quasi-CT image. This image can be used for simulation calculations and medical imaging in the magnetic-resonance-guided radiation treatment planning of the target person.
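Finally, steps 250 and 260 can be pictured with the following sketch, which assembles a quasi-CT volume by evaluating a per-region mapping function voxel-by-voxel; the dictionary of per-region callables, the region-label volume and the air fill value are assumptions introduced for this illustration.

    import numpy as np

    def build_pseudo_ct(mri1, mri2, region_mask, region_functions, air_value=-1000.0):
        """Evaluate the per-region mapping function voxel-by-voxel to form a pCT.

        mri1, mri2       -- the target person's two mutually aligned MR volumes
        region_mask      -- label volume produced by the region segmentation step
        region_functions -- dict mapping region label -> callable f(s1, s2) that
                            returns CT values, e.g. the fitted polynomial mapping
                            function of that region
        Voxels with no assigned region (outside the body contour) keep an
        air-like CT value."""
        pct = np.full(mri1.shape, air_value, dtype=np.float32)
        for label, mapping in region_functions.items():
            voxels = region_mask == label
            if np.any(voxels):
                pct[voxels] = mapping(mri1[voxels], mri2[voxels])
        return pct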

Claims (7)

1. A method for generating quasi-CT images using multivariate regression of multiple sets of magnetic resonance images, comprising the steps of:
1) acquiring multiple sets of magnetic resonance images (MRI) and a corresponding set of CT images from several trainers, wherein the magnetic resonance images of different sequence groups of each trainer are acquired using different magnetic resonance parameters, and the magnetic resonance images belonging to the same sequence group across the multiple sets of magnetic resonance images of different trainers are acquired using the same magnetic resonance parameters;
2) aligning each image of each set of magnetic resonance images of the same location of each trainer with a corresponding CT image;
3) generating a mapping function that maps the intensity values of the multiple sets of magnetic resonance images of all trainers to the CT value of the same voxel;
4) generating a quasi-CT image of the target person from the multiple sets of magnetic resonance images of the target person.
2. The method of claim 1, wherein, when the magnetic resonance images of different trainers in the same sequence group are acquired in step 1) with different scanners and/or under different imaging conditions, the magnetic resonance images are normalized so that the mean intensity value of the magnetic resonance images of the same sequence group is the same across all trainers.
3. The method of generating quasi-CT images using multivariate regression of multiple sets of magnetic resonance images as set forth in claim 1, wherein step 3) comprises:
(3.1) delineating the body contour in each magnetic resonance image and each CT image;
(3.2) creating a region segmentation mask for the magnetic resonance images of the first sequence group;
(3.3) using the obtained region segmentation mask to perform region segmentation on the magnetic resonance images and CT images of the sequence groups other than the first sequence group at the same location;
(3.4) extracting the respective magnetic resonance image intensity values and CT values from the voxels of each region, excluding voxels in the exclusion regions;
(3.5) determining, for each region, a mapping function in the form of a multivariate polynomial of high degree, using a multivariate regression method based on the multiple sets of magnetic resonance image intensity values and the average CT value of the voxels that share the same combination of intensity values:

CT(S1, …, Sm) = Σ a(i1, …, im) · S1^i1 · … · Sm^im,

where the sum runs over all non-negative exponents i1, …, im with i1 + … + im ≤ N; N is the maximum degree of the polynomial; CT(S1, …, Sm) is the mapping function (the dependent variable); S1 is the intensity value of the first set of magnetic resonance images and Sm is the intensity value of the m-th set (the independent variables of the mapping function); i1 is the exponent of S1 and im is the exponent of Sm; and a(i1, …, im) are the fitting coefficients.
4. The method of claim 2, wherein the body contour delineation in step (3.1) separates the body structures in the image from the surrounding air.
5. The method of claim 3, wherein in step (3.2), when the magnetic resonance images of the first sequence group contain only bone or only soft tissue, the whole image is treated as a single region; when the magnetic resonance images of the first sequence group contain both bone and soft tissue, each image is divided into a bone region, a soft-tissue region and a mixed-tissue region, wherein the mixed-tissue region is the indeterminate part between bone and soft tissue, and the boundaries of these regions form the region segmentation mask.
6. The method of claim 3, wherein in step (3.2), regions in which the anatomical structures of the multiple sets of magnetic resonance images and the CT images at the same location are misaligned, due to day-to-day organ motion or imperfect image alignment, as well as regions outside the body contour, are set as exclusion regions.
7. The method of generating quasi-CT images using multivariate regression of multiple sets of magnetic resonance images as set forth in claim 1, wherein step 4) comprises:
(4.1) acquiring each set of magnetic resonance images of the target person using the same magnetic resonance parameters as the corresponding sequence group of the trainers;
(4.2) aligning the multiple sets of magnetic resonance images of the same location of the target person with one another;
(4.3) when the magnetic resonance images of the target person are acquired using a different scanner and/or different imaging conditions from those of the trainers, normalizing the magnetic resonance images of the target person so that the average intensity value of the magnetic resonance images of the same sequence group at the same location is the same for the target person and all trainers;
(4.4) performing region segmentation on the magnetic resonance images of the first sequence group of the target person and creating a region segmentation mask in the manner of steps (3.2) and (3.3) of step 3), and obtaining the regions of the magnetic resonance images of the sequence groups other than the first sequence group at the same location of the target person, wherein no exclusion regions are defined for the images of the target person;
(4.5) extracting the multiple sets of magnetic resonance image intensity values of the voxels in the magnetic resonance images of the target person;
(4.6) passing the multiple sets of magnetic resonance image intensity values of each voxel of the target person through the mapping function of the corresponding region obtained in step (3.5) of step 3) to obtain the CT value of the same voxel of the target person;
(4.7) forming the quasi-CT image of the target person from the set of CT values of all voxels of the target person.
CN201910087064.2A 2019-01-29 2019-01-29 Method for generating quasi-CT images using multivariate regression of multiple sets of magnetic resonance images Withdrawn CN109785405A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910087064.2A CN109785405A (en) 2019-01-29 2019-01-29 Method for generating quasi-CT images using multivariate regression of multiple sets of magnetic resonance images

Publications (1)

Publication Number Publication Date
CN109785405A true CN109785405A (en) 2019-05-21

Family

ID=66503634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910087064.2A Withdrawn CN109785405A (en) 2019-01-29 2019-01-29 Method for generating quasi-CT images using multivariate regression of multiple sets of magnetic resonance images

Country Status (1)

Country Link
CN (1) CN109785405A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794739A (en) * 2015-05-03 2015-07-22 南方医科大学 Method for predicting CT (computerized tomography) image from MR (magnetic resonance) image on the basis of combination of corresponding partial sparse points
CN108351395A (en) * 2015-10-27 2018-07-31 皇家飞利浦有限公司 Virtual CT images from magnetic resonance image
CN108770373A (en) * 2015-10-13 2018-11-06 医科达有限公司 It is generated according to the pseudo- CT of MR data using feature regression model
CN108778416A (en) * 2015-10-13 2018-11-09 医科达有限公司 It is generated according to the pseudo- CT of MR data using organizational parameter estimation
CN109272486A (en) * 2018-08-14 2019-01-25 中国科学院深圳先进技术研究院 Training method, device, equipment and the storage medium of MR image prediction model

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20190521)