CN117853543A - Combined segmentation and registration method suitable for brain tumor image - Google Patents

Publication number: CN117853543A
Application number: CN202410023147.6A
Authority: CN (China)
Inventors: 张晶晶, 程学斌, 李腾
Applicant/Assignee: Anhui University (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Prior art keywords: segmentation, registration, brain tumor, model, image data
Classification: Image Analysis (AREA)
Abstract

The invention discloses a method for joint segmentation and registration of brain tumor images, comprising the following steps: acquiring brain tumor images of a patient at two different time points, preprocessing them, and dividing the preprocessed images into a training set, a validation set, and a test set; constructing a joint segmentation and registration training model and establishing a target loss function; iteratively training the joint segmentation and registration training model on the training set based on the target loss function to obtain an initial joint segmentation and registration model; inputting the validation set into the initial model to select the optimal joint segmentation and registration training model; inputting the test set into this model to obtain brain tumor segmentation result images and brain tumor registration images, and performing similarity calculations to judge the image registration effect and the image segmentation effect. By this method, more robust and accurate registration results can be obtained for brain tumor images.

Description

Combined segmentation and registration method suitable for brain tumor image
Technical Field
The invention relates to the technical field of medical image processing, in particular to a method suitable for joint segmentation and registration of brain tumor images.
Background
Diffuse gliomas, particularly isocitrate dehydrogenase (IDH) wild-type glioblastoma (grade 4 in the World Health Organization classification of central nervous system tumors), are the most common and invasive malignant adult brain tumors; they infiltrate severely and unevenly into the surrounding brain tissue. Accurately distinguishing brain tumors from normal brain tissue for diagnosis and treatment is therefore increasingly important. Medical images typically span multiple modalities, such as the T1, T2, and FLAIR sequences of MRI, which complicates the detailed analysis of brain tumors.
Current brain tumor image registration faces a number of challenges. Motion artifacts, changes in patient head position, and tissue deformation limit the accuracy of conventional registration methods. In clinical practice, registering images from different time points or different modalities is particularly difficult, preventing doctors from accurately assessing tumor development and treatment effects. Traditional brain tumor image segmentation methods suffer from blurred boundaries and high similarity between different tissues, causing difficulties in surgical navigation and in dynamic analysis of tumor growth, which prevents physicians from understanding a patient's condition in depth. The prior art is also limited in processing multi-modal image information and struggles to fully integrate the key information carried by different modalities. This limits the overall grasp of tumor characteristics and affects comprehensive assessment of the patient's condition. Conventional image processing methods likewise have limitations in joint registration and segmentation, facing challenges in accuracy and robustness when processing images of different modalities at different time points.
Existing algorithms fail severely in the presence of appearance changes (e.g., missing correspondence due to pathology such as tumors, myocardial scars, or multiple sclerosis), as they have little control over these unknown variables. To address this, some deformable registration algorithms incorporate modeling of appearance changes into the registration objective. Existing deformable image registration methods fall into two main classes: (i) excluding the appearance change by manually segmenting the abnormal region, and (ii) treating the appearance change as an unknown variable estimated from the images. These methods either rely heavily on manual segmentation labels of 3D voxel data, which is time-consuming and laborious, or find it difficult to balance appearance effects against geometric variation. Recent work has developed an autoencoder that estimates deformation and appearance change by decoupling geometric and appearance representations in a latent space. However, such a model is very sensitive to parameter tuning, as it is difficult to distinguish geometric transformations from appearance-induced changes.
Disclosure of Invention
The technical problem to be solved by the invention is the inaccurate segmentation and inaccurate registration of brain tumor images.
In order to solve the technical problems, the invention provides the following technical scheme:
a method for joint segmentation and registration of brain tumor images, comprising the steps of:
s100, acquiring brain tumor image data of two different time points of a patient, and preprocessing the brain tumor image data to acquire preprocessed brain tumor image data; dividing the preprocessed brain tumor image data into a training set, a verification set and a test set;
s200, constructing a joint segmentation and registration training model, and constructing a target loss function of the joint segmentation and registration training model;
s300, taking brain tumor image data of two different time points of the training set as input, and iteratively training the joint segmentation and registration training model based on the target loss function to obtain an initial joint segmentation and registration model; inputting brain tumor image data of two different time points of the verification set into the initial joint segmentation and registration model, verifying model precision, and obtaining an optimal joint segmentation and registration training model;
s400, inputting brain tumor image data of two different time points of the test set into the joint segmentation and registration training model, and obtaining brain tumor segmentation result images and brain tumor registration images;
s500, judging an image registration effect by adopting dice similarity coefficients; and performing similarity calculation on the segmented tumor mask and the brain tumor segmentation label, and judging the image segmentation effect.
The advantages are: using brain tumor images as raw data, the method simulates appearance changes in a predictive registration algorithm by means of a jointly learned segmentation map. A new appearance-aware regularization is learned that uses the segmentation to constrain image intensity changes caused by appearance separately from those caused by geometric transformation; the joint learning scheme maximizes the mutual benefit of deformable image registration and segmentation.
Compared with prior methods, this method adds a new regularization term to the network loss function that enforces a segmentation constraint on the geometric transformation field. This constraint is learned simultaneously from the jointly optimized segmentation task. Furthermore, the segmentation labels are effectively augmented by exploiting the transformations learned during training. This not only greatly improves segmentation performance, but also reduces the need for large numbers of ground-truth segmentation labels.
In one embodiment of the present invention, acquiring pre-processed brain tumor image data comprises:
s110, acquiring brain tumor image data of any two time points of a plurality of patients;
s120, performing skull stripping on the brain tumor image data of any two time points of the plurality of patients;
s130, spatially resampling the skull-stripped brain tumor image data pairs and unifying the image resolution;
s140, carrying out rigid registration on the brain tumor image data after spatial resampling;
and S150, normalizing the brain tumor image data after rigid registration, and dividing the normalized brain tumor image data into a training set, a verification set and a test set.
In one embodiment of the invention, normalization is performed by the following formula:

x' = (x − μ) / max(σ, 1/√N)

where N is the number of pixels of the image, x is the pixel matrix of the image, μ is the mean of the image, σ is the standard deviation of the image, and max takes the larger of its two arguments.
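The normalization described here, assuming it is the standard per-image standardization x' = (x − μ)/max(σ, 1/√N) that matches the listed symbols (the patent record does not reproduce the formula image, so this choice is an assumption), can be sketched as:

```python
import numpy as np

def per_image_standardize(x):
    """Standardize an image to zero mean and (approximately) unit variance.

    Matches the symbols in the text: N = pixel count, mu = mean,
    sigma = standard deviation; max() guards against dividing by a
    near-zero standard deviation on nearly uniform images.
    """
    x = x.astype(np.float64)
    n = x.size                      # N: number of pixels/voxels
    mu = x.mean()                   # mean intensity
    sigma = x.std()                 # standard deviation
    return (x - mu) / max(sigma, 1.0 / np.sqrt(n))

img = np.array([[0.0, 2.0], [4.0, 6.0]])
out = per_image_standardize(img)
print(out.mean())  # 0.0
```

The same function applies unchanged to 3D MRI volumes, since it operates on the flattened intensity statistics.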
In one embodiment of the invention, the images containing brain tumor data among the normalized brain tumor image data are labeled, and the labeled image data are carried in the training set, the verification set, and the test set.
In an embodiment of the invention, the joint segmentation and registration model comprises a segmentation model and a registration training model. The brain tumor image data of the earlier time point is taken as the source image and that of the later time point as the target image, and both are input to the segmentation model. The registration training model outputs a brain tumor segmentation label and a deformation field registering the source image to the target image, and the source image is transformed by the deformation field.
In an embodiment of the present invention, the segmentation model adopts three different sub-segmentation modules: a standard UNet, a recurrent-residual convolutional neural network (R2-UNet) module, and a Transformer-based UNETR module. The brain tumor segmentation effect of the three sub-segmentation modules is evaluated by calculating the Dice score, and the brain tumor image segmentation data output by the sub-segmentation module with the best segmentation effect is passed to the registration training model.
In one embodiment of the invention, the Dice score is obtained by the following formula:

Dice(ŷ, y) = 2 |ŷ ∩ y| / (|ŷ| + |y|)

where ŷ is the predicted tumor mask and y is the true label of the tumor.
In one embodiment of the present invention, the objective loss function of the segmentation model is constructed based on the Dice score and obtained by the following formula:

l_seg = 1 − Dice(ŷ, y)

where l_seg is the objective loss function of the segmentation model and Dice(ŷ, y) is the Dice score of the segmentation model.
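The Dice score and the segmentation loss built from it (assuming the common 1 − Dice form consistent with the symbol descriptions) can be sketched for binary masks as:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice = 2|pred ∩ truth| / (|pred| + |truth|) for binary masks.

    eps avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

def seg_loss(pred, truth):
    """Segmentation loss taken as 1 - Dice, minimized during training."""
    return 1.0 - dice_score(pred, truth)

pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(dice_score(pred, truth))  # ≈ 0.667
```

In a trainable network the hard masks would be replaced by soft probabilities (a "soft Dice"), but the formula is the same.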
In one embodiment of the present invention, the registration training model employs an improved VTN network consisting essentially of an encoder and a decoder, with skip connections set from the encoder to the decoder;

wherein, at the deepest level of the decoder, a dense connection is set: two different operations are performed on it and the results are fused with the features of the last decoder layer; and the deepest level of the decoder converts the feature dimension of the network output from N′ to the N″ dimensions of the deformation field, where N′ > N″.
In an embodiment of the present invention, the objective loss function of the registration training model is obtained by the following formula:

L = l_dist(Ĩ_s ∘ φ, Ĩ_t) + l_reg(φ) + γ · l_seg

where L is the objective loss function of the registration training model; l_dist is the image dissimilarity, measuring the dissimilarity between the deformed image and the target image; Ĩ_s and Ĩ_t are the source and target images with the appearance region masked; φ is the deformation field obtained by integrating the velocity field v; γ is a weight parameter balancing the segmentation and registration losses; l_seg is the objective loss function of the segmentation model; l_reg is the image regularization term; and ∘ denotes warping the source image by the deformation field.
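The three described loss terms can be combined as sketched below. The patent does not specify the concrete dissimilarity or regularizer, so mean squared error stands in for l_dist and a gradient (smoothness) penalty for l_reg; both are assumptions:

```python
import numpy as np

def mse_dissimilarity(warped, target):
    """Stand-in for l_dist: mean squared intensity difference
    between the deformed (masked) source and the (masked) target."""
    return np.mean((warped - target) ** 2)

def smoothness_reg(phi):
    """Stand-in for l_reg: penalize spatial gradients of the
    deformation field phi, shape (dim, *spatial), to discourage
    irregular transformations."""
    grads = np.gradient(phi, axis=tuple(range(1, phi.ndim)))
    return sum(np.mean(g ** 2) for g in grads)

def total_loss(warped, target, phi, l_seg, gamma=0.5):
    """L = l_dist + l_reg + gamma * l_seg, as described in the text."""
    return mse_dissimilarity(warped, target) + smoothness_reg(phi) + gamma * l_seg
```

With a perfectly aligned pair and a constant field, only the γ-weighted segmentation term remains, which is a quick sanity check on the implementation.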
Compared with the prior art, the invention has the following beneficial effects. The invention is based on a joint segmentation and registration model, and the constructed model is trained on a dataset to obtain an optimal model. The optimal model combines the advantages of a segmentation network and a registration network: the segmentation network extracts comprehensive features of the tumor to assist the training of the subsequent registration network, and the learned segmentation map can simulate appearance changes, enabling dynamic handling of the tumor region. The method uses brain tumor images as raw data and simulates appearance changes in a predictive registration algorithm by means of the jointly learned segmentation map. A new appearance-aware regularization is learned that uses the segmentation to constrain intensity changes caused by appearance separately from those caused by geometric transformation, and the joint learning scheme maximizes the mutual benefit of deformable image registration and segmentation.
The registration network in this approach is specially designed: because of the mass effect of the tumor and the joint influence of the edema region, feature fusion inside the network is strengthened so that it can effectively handle detailed features at the tumor margin as well as some large deformations.
The invention is simple to implement: only brain tumor images from any two time points of the patient are needed, and the segmentation network learns segmentation labels during training to assist the subsequent registration network in performing finer registration. This facilitates putting the method into practical use as soon as possible and addresses the brain tumor registration problem in current clinical medicine.
Drawings
Fig. 1 is a flowchart of a method for joint segmentation and registration of brain tumor images according to an embodiment of the present invention.
FIG. 2 is a flow chart of a training process of a joint segmentation and registration training model according to an embodiment of the present invention.
FIG. 3 is a frame diagram of a joint segmentation and registration training model in accordance with an embodiment of the present invention.
Fig. 4 is a schematic diagram of a registration training model according to an embodiment of the present invention.
Detailed Description
In order to facilitate the understanding of the technical scheme of the present invention by those skilled in the art, the technical scheme of the present invention will be further described with reference to the accompanying drawings.
The terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Referring to fig. 1, the present invention provides a method for joint segmentation and registration of brain tumor images, comprising the following steps:
s100, acquiring brain tumor image data of two different time points of a patient, and preprocessing the brain tumor image data to acquire preprocessed brain tumor image data; and dividing the preprocessed brain tumor image data into a training set, a validation set and a test set.
In this embodiment, acquiring the preprocessed brain tumor image data includes:
s110, acquiring brain tumor image data of any two time points of a plurality of patients.
S120, performing skull stripping on the brain tumor image data of any two time points of the plurality of patients.
In this embodiment, FreeSurfer is installed on Ubuntu 20.04 and its environment variables are configured. All brain tumor image data pairs from any two time points are placed side by side in the same directory, and the recon-all command is executed on the images in that folder, which removes the skull from the cranial images and yields a complete, clean brain.
S130, spatial resampling is performed on the skull-stripped brain tumor image data pairs to unify the image resolution.
In this embodiment, the unified image size is 155×240×240 voxels, with an isotropic resolution of 1.25 mm × 1.25 mm × 1.25 mm.
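Resampling every volume onto one common grid can be sketched as below. Real pipelines would use a medical-imaging library (e.g., SimpleITK or nibabel, named here as an assumption) with proper interpolation that accounts for voxel spacing; this minimal stand-in uses nearest-neighbour index mapping:

```python
import numpy as np

def resample_to_shape(vol, target_shape):
    """Nearest-neighbour resampling of a 3D volume onto a common grid
    (e.g. 155x240x240) so that all image pairs share one resolution."""
    idx = [np.clip(np.round(np.linspace(0, s - 1, t)).astype(int), 0, s - 1)
           for s, t in zip(vol.shape, target_shape)]
    return vol[np.ix_(*idx)]

vol = np.arange(32 * 48 * 48, dtype=float).reshape(32, 48, 48)
out = resample_to_shape(vol, (16, 24, 24))
print(out.shape)  # (16, 24, 24)
```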
And S140, carrying out rigid registration on the brain tumor image data after spatial resampling.
In this embodiment, the brain tumor image data of the two different time points to be registered are aligned to the MNI standard brain template space, and each matched image pair to be registered is rigidly registered using the ANTs registration package.
And S150, normalizing the brain tumor image data after rigid registration, and dividing the normalized brain tumor image data into a training set, a verification set and a test set.
In an embodiment, all normalized image pairs to be registered form a dataset; 70% of the dataset is used as the training set, 20% as the validation set, and the remaining 10% as the test set. Further, the brain tumor images carrying tumor labels are divided in the same way as in S150: 70% of the labeled image data is used for training, 20% for validation, and the remaining 10% for testing.
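A minimal sketch of the 70/20/10 split, keeping each image pair (and its label, if present) together as one unit:

```python
import numpy as np

def split_dataset(pairs, seed=0):
    """Shuffle image pairs and split them 70/20/10 into
    train/validation/test sets. Integer arithmetic keeps the
    split sizes exact and deterministic."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pairs))
    n_train = len(pairs) * 7 // 10
    n_val = len(pairs) * 2 // 10
    train = [pairs[i] for i in idx[:n_train]]
    val = [pairs[i] for i in idx[n_train:n_train + n_val]]
    test = [pairs[i] for i in idx[n_train + n_val:]]
    return train, val, test

pairs = list(range(10))          # stand-ins for (source, target) pairs
tr, va, te = split_dataset(pairs)
print(len(tr), len(va), len(te))  # 7 2 1
```

Labeled pairs would use the same permutation so that image and label splits stay aligned.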
S200, constructing a joint segmentation and registration training model, and building an objective loss function of the joint segmentation and registration training model.
Referring to fig. 1 to 3, in the present embodiment, the joint segmentation and registration model includes a segmentation model and a registration training model. The brain tumor image data of the earlier time point is taken as the source image and that of the later time point as the target image; both are input to the segmentation model, and the number of iterations q is set as an additional input controlling how many generations are trained. The registration training model outputs a brain tumor segmentation label and a deformation field registering the source image to the target image, together with the source image transformed by the deformation field.
Specifically, the network input is first processed by the segmentation model, which segments with several network structures; the objective loss function of the segmentation network is then constructed and minimized during training. The predicted tumor label output by the segmentation model, combined with the real tumor label, serves as the appearance change of the brain tumor image.
Then a registration training model sensitive to appearance changes is constructed from an improved VTN network, which can effectively handle large deformations in the images. The objective loss function of the registration training model is constructed and minimized during training, and the model outputs a predicted velocity field and the deformed source image. Training is iterated for q generations until the network converges.
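The deformation field φ is obtained from the predicted velocity field by integration. The patent does not give the integration scheme; the standard choice in diffeomorphic registration networks is scaling and squaring, sketched here in 2D NumPy under that assumption:

```python
import numpy as np

def bilinear_sample(field, coords):
    """Sample a 2D scalar field at float coordinates of shape (2, H, W)."""
    h, w = field.shape
    y = np.clip(coords[0], 0, h - 1)
    x = np.clip(coords[1], 0, w - 1)
    y0 = np.floor(y).astype(int)
    x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = y - y0
    wx = x - x0
    return ((1 - wy) * (1 - wx) * field[y0, x0] + (1 - wy) * wx * field[y0, x1]
            + wy * (1 - wx) * field[y1, x0] + wy * wx * field[y1, x1])

def integrate_velocity(v, steps=6):
    """Scaling and squaring: phi = exp(v) for a stationary velocity
    field v of shape (2, H, W) in pixel units. Starts from a small
    displacement v / 2**steps and composes the field with itself
    `steps` times."""
    h, w = v.shape[1:]
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    disp = v / (2 ** steps)
    for _ in range(steps):
        coords = grid + disp
        disp = disp + np.stack([bilinear_sample(disp[d], coords) for d in range(2)])
    return disp  # displacement field of phi
```

A constant velocity field should integrate back to itself (a pure translation), which makes a convenient correctness check.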
In this embodiment, three different sub-segmentation modules are adopted in constructing the segmentation network: the standard UNet, the recurrent-residual convolutional neural network R2-UNet, and the Transformer-based UNETR. The brain tumor segmentation effect is evaluated by calculating the Dice score, and the brain tumor image segmentation data output by the sub-segmentation module with the best segmentation effect is passed to the registration training model. In addition, the predicted segmentations are visualized together with all segmented test images.
Wherein the Dice score is obtained by the following formula:

Dice(ŷ, y) = 2 |ŷ ∩ y| / (|ŷ| + |y|)

where ŷ is the predicted tumor mask and y is the true label of the tumor.
The objective loss function of the segmentation model is constructed based on the Dice score and obtained by the following formula:

l_seg = 1 − Dice(ŷ, y)

where l_seg is the objective loss function of the segmentation model and Dice(ŷ, y) is its Dice score.
Referring to fig. 1 to 4, in the present embodiment, the registration training model adopts an improved VTN network composed mainly of an encoder and a decoder, with skip connections from the encoder to the decoder so that information from different levels can be fused better. A dense connection is set at the deepest level of the decoder: two different operations are performed on it and the results are fused with the features of the last decoder layer, realizing multi-dimensional, multi-layer feature fusion. The deepest level of the decoder converts the feature dimension of the network output from N′ to the N″ dimensions of the deformation field, where N′ > N″. In this embodiment, the decoder has three fully connected layers, and the last of them converts the feature dimension of the network output from 16 to the 3 dimensions of the deformation field.
Specifically, the encoder has five layers, and the network input is the concatenation of the images to be registered (brain tumor image data of any two time points after normalization). The first layer passes the 2-channel input through a convolution layer and a max-pooling layer, halving the resolution and producing 16 feature channels. The second layer repeats this operation, halving the resolution again to one quarter of full resolution. The third layer is a pure convolution operation producing 64 feature channels. The fourth layer is like the first two: the number of feature channels is unchanged and the resolution is halved to one eighth of full resolution. The fifth layer is likewise, with the number of feature channels becoming 128 and the resolution reaching 1/16 of full resolution.
Specifically, in this embodiment, the decoder has three layers, each taking three inputs. Two of them come from the previous layer through two different operations, namely an upsampling operation and a convolution followed by upsampling, which helps strengthen detail features; the third input is the skip-connected feature from the encoder. The three inputs are concatenated to form the input of the next layer, and this structure is repeated across the three layers. Three fully connected layers follow, converting the output into a deformation field with 3 channels.
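The channel-wise fusion of the three decoder inputs can be sketched as follows. The concrete "two different operations" are only loosely specified, so plain upsampling and smoothing-then-upsampling are assumed here for illustration (the box filter stands in for a learned convolution):

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of (C, H, W) features."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def smooth3x3(feat):
    """A 3x3 box filter standing in for a convolution layer."""
    pad = np.pad(feat, ((0, 0), (1, 1), (1, 1)), mode="edge")
    return sum(pad[:, i:i + feat.shape[1], j:j + feat.shape[2]]
               for i in range(3) for j in range(3)) / 9.0

def dense_fusion(deep, skip):
    """Fuse two differently processed copies of the previous layer's
    features with the skip-connected features along the channel axis."""
    a = upsample2x(deep)             # path 1: plain upsampling
    b = upsample2x(smooth3x3(deep))  # path 2: convolve, then upsample
    return np.concatenate([a, b, skip], axis=0)

deep = np.random.rand(8, 4, 4)   # previous decoder layer: 8 channels
skip = np.random.rand(16, 8, 8)  # skip connection from the encoder
print(dense_fusion(deep, skip).shape)  # (32, 8, 8)
```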
In this embodiment, the image pairs to be registered in the training set are used as input, and the registration model is iteratively trained based on the objective loss function. During training, the weight coefficients of the registration model are continuously updated through backpropagation until the similarity measure reaches its optimum, after which the weight coefficients are fixed, yielding the optimized model. The objective loss function of the registration training model is obtained by the following formula:

L = l_dist(Ĩ_s ∘ φ, Ĩ_t) + l_reg(φ) + γ · l_seg

where L is the objective loss function of the registration training model; l_dist is the image dissimilarity, measuring the dissimilarity between the deformed image and the target image; Ĩ_s and Ĩ_t are the source and target images with the appearance region masked; φ is the deformation field obtained by integrating the velocity field v; γ is a weight parameter balancing the segmentation and registration losses; l_seg is the objective loss function of the segmentation model; l_reg is the image regularization term, whose aim is to constrain irregular transformations; and ∘ denotes warping the source image by the deformation field.
In the objective loss function of the registration training model, the third term takes the segmentation labels into account: the appearance change of the brain tumor image is regarded as a variable of the segmentation network and jointly optimized toward the best registration solution. The optimization scheme minimizes the loss function defined in the equation and jointly optimizes all network parameters by alternating segmentation training and image registration.
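The alternating joint optimization described above can be sketched as below; `seg_step` and `reg_step` are hypothetical callbacks standing in for one optimizer step of the segmentation and registration sub-networks respectively:

```python
def train_joint(seg_step, reg_step, data_loader, epochs):
    """Alternate segmentation and registration updates each iteration,
    jointly minimizing the combined loss over the given number of
    epochs (the q training generations in the text). Each step callback
    is assumed to perform one weight update and return its loss value."""
    history = []
    for _ in range(epochs):
        for source, target, label in data_loader:
            s_loss = seg_step(source, target, label)  # update segmentation weights
            r_loss = reg_step(source, target, label)  # update registration weights
            history.append(s_loss + r_loss)
    return history
```

In practice the two steps would share the computed deformation field and segmentation maps so that each network benefits from the other's current estimate.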
S300, taking brain tumor image data of two different time points of the training set as input, and iteratively training the joint segmentation and registration training model based on the target loss function to obtain an initial joint segmentation and registration model; inputting the brain tumor image data of two different time points of the verification set into the initial joint segmentation and registration model, verifying the model precision, and obtaining an optimal joint segmentation and registration training model.
S400, inputting brain tumor image data of two different time points of the test set into the joint segmentation and registration training model to obtain brain tumor segmentation result images and brain tumor registration images.
s500, judging an image registration effect by adopting dice similarity coefficients; and performing similarity calculation on the segmented tumor mask and the brain tumor segmentation label, and judging the image segmentation effect.
In the present embodiment, the registration effect and the segmentation effect are judged by setting a scoring mechanism.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The above-described embodiments merely represent embodiments of the invention, the scope of the invention is not limited to the above-described embodiments, and it is obvious to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention.

Claims (10)

1. A method for joint segmentation and registration of brain tumor images, comprising the steps of:
s100, acquiring brain tumor image data of two different time points of a patient, and preprocessing the brain tumor image data to acquire preprocessed brain tumor image data; dividing the preprocessed brain tumor image data into a training set, a verification set and a test set;
s200, constructing a joint segmentation and registration training model, and constructing a target loss function of the joint segmentation and registration training model;
s300, taking brain tumor image data of two different time points of the training set as input, and iteratively training the joint segmentation and registration training model based on the target loss function to obtain an initial joint segmentation and registration model; inputting brain tumor image data of two different time points of the verification set into the initial joint segmentation and registration model, verifying model precision, and obtaining an optimal joint segmentation and registration training model;
s400, inputting brain tumor image data of two different time points of the test set into the joint segmentation and registration training model, and obtaining brain tumor segmentation result images and brain tumor registration images;
s500, judging an image registration effect by adopting dice similarity coefficients; and performing similarity calculation on the segmented tumor mask and the brain tumor segmentation label, and judging the image segmentation effect.
2. The method for joint segmentation and registration of brain tumor images according to claim 1, wherein acquiring pre-processed brain tumor image data comprises:
s110, acquiring brain tumor image data of any two time points of a plurality of patients;
s120, performing skull stripping on brain tumor image data of any two time points of a plurality of patients;
s130, spatially resampling the skull-stripped brain tumor image data pairs and unifying the image resolution;
s140, carrying out rigid registration on the brain tumor image data after spatial resampling;
and S150, normalizing the brain tumor image data after rigid registration, and dividing the normalized brain tumor image data into a training set, a verification set and a test set.
3. The method for joint segmentation and registration of brain tumor images according to claim 2, characterized in that the normalization is performed by the following formula:

x' = (x − μ) / max(σ, 1/√N)

where N is the number of pixels of the image, x is the pixel matrix of the image, μ is the mean of the image, σ is the standard deviation of the image, and max takes the larger of its two arguments.
4. The method for joint segmentation and registration of brain tumor images according to claim 2, wherein the images containing brain tumor data among the normalized brain tumor image data are labeled, and the labeled image data are carried in the training set, the verification set, and the test set.
5. The method for joint segmentation and registration of brain tumor images according to claim 1, wherein the joint segmentation and registration model comprises a segmentation model and a registration training model; the brain tumor image data of the earlier time point is taken as the source image and that of the later time point as the target image, both being input to the segmentation model; the registration training model outputs a brain tumor segmentation label and a deformation field registering the source image to the target image, and the source image is transformed by the deformation field.
6. The method for joint segmentation and registration of brain tumor images according to claim 5, wherein the segmentation model adopts three different sub-segmentation modules: a standard Unet, a recurrent-residual convolutional neural network R2-Unet module, and a Transformer-based UNETR module; the brain tumor segmentation effect of the three sub-segmentation modules is evaluated by calculating a Dice score, and the brain tumor image segmentation data output by the sub-segmentation module with the best segmentation effect is transmitted to the registration training model.
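The selection step above can be sketched as scoring each candidate module on verification pairs and keeping the one with the highest mean Dice. The three lambda "models" below are toy stand-ins, not the actual Unet / R2-Unet / UNETR networks:

```python
def dice(pred, truth):
    """Binary Dice score over flattened 0/1 masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total

def select_best_segmenter(models, val_pairs):
    """Claim 6: evaluate each candidate sub-segmentation module on the
    verification pairs and return the name of the one with the highest
    mean Dice. `models` maps a module name to a callable image -> mask."""
    def mean_dice(model):
        return sum(dice(model(img), label) for img, label in val_pairs) / len(val_pairs)
    return max(models, key=lambda name: mean_dice(models[name]))

# Toy stand-ins for the three sub-segmentation modules (hypothetical):
models = {
    "unet": lambda x: x,               # reproduces the label exactly here
    "r2unet": lambda x: [0] * len(x),  # predicts empty masks
    "unetr": lambda x: [1] * len(x),   # predicts everything as tumor
}
val_pairs = [([1, 0, 1, 0], [1, 0, 1, 0]), ([0, 0, 1, 1], [0, 0, 1, 1])]
best = select_best_segmenter(models, val_pairs)
```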
7. The method for joint segmentation and registration of brain tumor images according to claim 6, wherein the Dice score is obtained by the following formula:

Dice(ŷ, y) = 2|ŷ ∩ y| / (|ŷ| + |y|)

where ŷ is the predicted tumor mask and y is the true label of the tumor.
8. The method of joint segmentation and registration for brain tumor images according to claim 6, wherein the objective loss function of the segmentation model is constructed based on the Dice score and obtained by the following formula:

l_seg = 1 − Dice(ŷ, y)

where l_seg is the target loss function of the segmentation model and Dice(ŷ, y) is the Dice score of the segmentation model.
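Claims 7 and 8 together define the segmentation objective as one minus the Dice overlap. A minimal sketch over flattened binary masks:

```python
def dice_score(pred, truth):
    """Dice = 2|pred ∩ truth| / (|pred| + |truth|) over flattened 0/1 masks
    (claim 7); two empty masks count as a perfect match."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total

def seg_loss(pred, truth):
    """l_seg = 1 - Dice (claim 8): perfect overlap gives 0 loss,
    no overlap gives 1."""
    return 1.0 - dice_score(pred, truth)
```

A trainable network would use a soft Dice over probabilities rather than hard 0/1 masks, but the arithmetic is identical.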
9. The method for joint segmentation and registration of brain tumor images according to claim 5, wherein the registration training model employs a modified VTN network consisting of an encoder and a decoder, with skip connections set from the encoder to the decoder;
wherein dense connections are set at the deepest level of the decoder, two different operations are carried out on the densely connected features, and the results are fused with the features of the last decoder layer; the deepest level of the decoder converts the feature dimension of the network output from N′ to the N″ dimensions of the deformation field, where N′ > N″.
10. The method for joint segmentation and registration of brain tumor images according to claim 9, wherein the target loss function of the registration training model is obtained by the following formula:

l = l_dist(x̃_s ∘ φ, x̃_t) + γ · l_seg + l_reg

where l is the target loss function of the registration training model; l_dist is the image dissimilarity, which measures the dissimilarity between the deformed image and the target image; x̃_s and x̃_t are the source and target images whose appearance is masked; φ is the deformation field obtained by integrating the velocity field; ∘ denotes warping the image with the deformation field; γ is a weight parameter balancing the segmentation and registration losses; l_seg is the target loss function of the segmentation model; and l_reg is the image regularization term.
CN202410023147.6A 2024-01-05 2024-01-05 Combined segmentation and registration method suitable for brain tumor image Pending CN117853543A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410023147.6A CN117853543A (en) 2024-01-05 2024-01-05 Combined segmentation and registration method suitable for brain tumor image

Publications (1)

Publication Number Publication Date
CN117853543A true CN117853543A (en) 2024-04-09

Family

ID=90535798


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination