CN114119558B - Method for automatically generating nasopharyngeal carcinoma image diagnosis structured report - Google Patents
- Publication number: CN114119558B
- Application number: CN202111429938.1A
- Authority
- CN
- China
- Prior art keywords: image, nasopharyngeal carcinoma, MRI, template, deformation field
- Legal status: Active (the status is an assumption and is not a legal conclusion; no legal analysis has been performed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/32—Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a method for automatically generating a structured report for nasopharyngeal carcinoma image diagnosis. First, a non-rigid registration model for nasopharyngeal carcinoma is built on a convolutional neural network: a group of healthy-subject head MRI images and several groups of MRI images containing nasopharyngeal carcinoma are selected, preprocessed, and input into the network as learning samples for iterative training, yielding the registration model. Second, the deformation field registering a patient image to the template MRI image is predicted: the patient's MRI image and the template MRI image are input into the trained registration model to obtain the model-predicted deformation field. Third, the nasopharyngeal carcinoma ROI image is registered to the template MRI image: the corresponding patient's ROI image is resampled with the obtained deformation field, yielding a registered nasopharyngeal carcinoma ROI image with the template image as the target. Finally, the structured report is generated: the voxel invasion rate of each medical anatomical structure is computed from the registered nasopharyngeal carcinoma ROI image and the template MRI image, and the structured report is produced. The method offers a simple model, fast report generation, high accuracy and good completeness.
Description
Technical Field
The invention relates to the fields of medical image processing and medical image reporting, and in particular to a method for automatically generating a structured report for nasopharyngeal carcinoma image diagnosis.
Background
Clinically, the interpretation of medical images is performed by a doctor, who reviews the images, analyzes and describes the medical findings in them, and issues an imaging diagnosis and report. As medical imaging technology advances, however, medical image data grows exponentially, placing enormous diagnostic pressure and report-writing load on doctors; many automated medical image reporting methods using structured formats and standardized terminology have therefore been proposed and applied across medical imaging fields. In nasopharyngeal carcinoma diagnosis, an accurate automatic medical image report for nasopharyngeal carcinoma MRI can help clinicians and radiologists reduce the time spent reading images and writing reports, diagnose the disease accurately, improve imaging diagnostic precision, and avoid missed diagnoses and misdiagnoses.
Disclosure of Invention
The invention aims to overcome the shortcomings of conventional medical image reporting and provide a method for automatically generating a structured report for nasopharyngeal carcinoma image diagnosis.
The technical scheme for achieving this aim is as follows:
A method for automatically generating a structured report for nasopharyngeal carcinoma image diagnosis comprises the following steps:
1) Establishing a non-rigid registration model of nasopharyngeal carcinoma based on a convolutional neural network: selecting a group of healthy-subject head MRI images and several groups of MRI images containing nasopharyngeal carcinoma, preprocessing them, and inputting them as learning samples into the non-rigid nasopharyngeal carcinoma registration model built from a convolutional neural network and a spatial transformation module for iterative training, obtaining the registration model once training is complete;
2) Predicting the deformation field registered to the template MRI image: inputting the patient's MRI image and the template MRI image into the registration model trained in step 1), performing one pass of non-rigid registration, and predicting a deformation field of the same size as the input, in which each point holds the pixel displacement components in x, y and z, yielding the model-predicted deformation field;
3) Registering the nasopharyngeal carcinoma ROI image to the template MRI image: resampling the corresponding patient's nasopharyngeal carcinoma ROI image with the deformation field obtained in step 2), registering the patient's ROI image into the space that takes the normal-subject template image as the common domain, and obtaining a registered nasopharyngeal carcinoma ROI image with the template image as the target;
4) Generating the structured report: using image processing, computing the overlapping voxels between the registered nasopharyngeal carcinoma ROI obtained in step 3) and each of 32 specific medical anatomical structures, dividing the overlap by each structure's total voxel count to obtain that structure's voxel invasion rate, and tallying the invaded structures to generate the structured report.
In step 1), the non-rigid registration model of nasopharyngeal carcinoma consists of a convolutional neural network and a spatial transformation module. The convolutional neural network comprises an encoder for extracting features, a decoder for restoring features, and skip-connection structures for fusing feature information. The selected healthy-subject MRI image serves as the template image and each MRI image containing nasopharyngeal carcinoma as a floating image; after preprocessing, each floating image is combined with the template image into a two-channel image pair, which is input into the convolutional neural network, and the corresponding transformation parameters are output after analysis. The spatial transformation module generates a deformation field from the transformation parameters and resamples the floating image to obtain the deformed floating image; the normalized cross-correlation between the deformed floating image and the template image gives the loss function value, and the network parameters are updated by backpropagation until the similarity measure is maximized, at which point model training is complete.
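The similarity measure named above, normalized cross-correlation (NCC), can be sketched as follows. This is a hedged illustration: the patent does not state whether NCC is computed globally or over local windows, so a global (whole-volume) NCC is assumed here, with the training loss being its negation so that minimizing the loss maximizes the similarity.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Global normalized cross-correlation between two images.

    Assumption: whole-volume NCC; the patent leaves the exact
    windowing unspecified.
    """
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    a = a - a.mean()          # zero-center both inputs
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps))

def ncc_loss(warped_floating, template):
    """Training loss: negative NCC, so maximizing similarity
    corresponds to minimizing the loss, as described above."""
    return -ncc(warped_floating, template)
```

Identical images give an NCC of 1 (loss of -1); a perfectly anti-correlated pair gives -1.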
The invention provides a method for automatically generating a structured report for nasopharyngeal carcinoma image diagnosis, applying an unsupervised, deep-learning-based non-rigid registration method to the field of medical image reporting. After the nasopharyngeal carcinoma MRI image is registered into the space that takes the normal-subject template image as the common domain, the lesion area is analyzed quantitatively, and the invasion of the 32 specific medical anatomical structures by the nasopharyngeal carcinoma ROI is accurately computed, so that a medical image report with a structured format, standardized terminology and consistency is generated automatically.
Drawings
FIG. 1 is a flow chart of a method for automatically generating a structured report of nasopharyngeal carcinoma imaging diagnosis;
FIG. 2 is a schematic diagram of a convolutional neural network of a non-rigid registration model of nasopharyngeal carcinoma;
Fig. 3 is a frame diagram of a nasopharyngeal carcinoma image registration process.
Detailed Description
The present invention will now be further illustrated with reference to the drawings and examples, but is not limited thereto.
Examples:
As shown in fig. 1, a method for automatically generating a structural report of nasopharyngeal carcinoma image diagnosis includes the following steps:
1) Selecting a group of healthy-subject head MRI images as template images and several groups of nasopharyngeal carcinoma MRI images as floating images; downsampling the MRI images to 16 × 256 pixels, with the downsampled volume covering the nasopharynx and the 32 specific medical anatomical structures; pre-registering the floating images to the template images with the Affine algorithm in the ANTs package; and finally normalizing the pixel values of the images;
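The final normalization step and the pairing of step 2) can be sketched as below. This is a hedged sketch: the patent does not specify which normalization is used, so simple min-max scaling to [0, 1] is assumed; the downsampling and the ANTs affine pre-registration are performed by external tools and are not reproduced here.

```python
import numpy as np

def normalize_intensity(vol):
    """Min-max normalize voxel intensities to [0, 1].

    Assumption: the patent only says pixel values are normalized;
    min-max scaling is one common choice, substituted here.
    """
    vol = np.asarray(vol, dtype=np.float64)
    lo, hi = vol.min(), vol.max()
    if hi == lo:                      # constant image: map to zeros
        return np.zeros_like(vol)
    return (vol - lo) / (hi - lo)

def to_two_channel_pair(floating, template):
    """Stack one floating/template pair into a 2-channel input array,
    matching the two-channel image pairs fed to the network."""
    return np.stack([normalize_intensity(floating),
                     normalize_intensity(template)], axis=0)
```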
2) Combining each floating image with the template image into a two-channel image pair, and inputting these pairs as the training set into the model for training, obtaining the non-rigid nasopharyngeal carcinoma registration model.
The non-rigid registration model of nasopharyngeal carcinoma based on a convolutional neural network is built as shown in fig. 2. Its architecture draws on the encoder-decoder structure of U-Net, the skip-connection structure of FCN, and the spatial transformation structure of STN. The encoder extracts the network's features and enlarges its receptive field; the decoder performs the network's upsampling, restoring the feature maps to the original image size and predicting displacements pixel by pixel. The skip connections fuse deep, abstract feature information with concrete, shallow-level feature information. The spatial transformation structure gives the network free-form spatial deformation; embedding it in the network lets the model be trained end to end by backpropagation, finding the spatial transformation of the floating image that is closest to the template image.
In the forward pass, the model computes the dissimilarity loss between the floating image and the template image; in the backward pass, the Adam mini-batch gradient descent method minimizes this loss, iteratively updating the model's weight parameters and thereby achieving unsupervised training;
In the model's encoding stage, convolution layers with stride 2 and 4 × 4 kernels provide the downsampling. In the decoding stage, the network alternates upsampling layers with stride-1 convolution layers with 3 × 3 kernels, while feature maps from the encoding stage are passed to the decoding stage through the skip connections to help predict the deformation field registering the floating image to the template image. In the spatial transformation stage, the generated deformation field is used to deform and interpolate the floating image, yielding the deformation field that registers best to the template image together with the deformed floating image.
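The deformation-and-interpolation step of the spatial transformation stage can be sketched as follows. This is a hedged, dependency-free illustration: `field` is assumed to have shape `(3, D, H, W)` holding per-voxel z/y/x displacements, matching the dense deformation field described above, and nearest-neighbour rounding stands in for the (differentiable) interpolation the patent's module would actually use.

```python
import numpy as np

def warp_nearest(vol, field):
    """Warp a 3-D volume with a dense displacement field.

    Assumption: field shape (3, D, H, W), one displacement per axis
    per voxel. Nearest-neighbour sampling replaces the module's
    differentiable interpolation to keep the sketch self-contained.
    """
    vol = np.asarray(vol)
    D, H, W = vol.shape
    zz, yy, xx = np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                             indexing="ij")
    # Source coordinates = identity grid + displacement, rounded and
    # clamped to the volume bounds.
    z = np.clip(np.rint(zz + field[0]).astype(int), 0, D - 1)
    y = np.clip(np.rint(yy + field[1]).astype(int), 0, H - 1)
    x = np.clip(np.rint(xx + field[2]).astype(int), 0, W - 1)
    return vol[z, y, x]
```

A zero field returns the volume unchanged; a field that is 1 everywhere in its x channel samples each voxel from its right-hand neighbour.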
The registration model is trained on pairs of head nasopharyngeal carcinoma MRI images and the template image, and can predict the deformation field registering a nasopharyngeal carcinoma MRI image to the template image.
3) A new nasopharyngeal carcinoma MRI image and the template MRI image are input into the trained model; one forward-propagation pass performs the non-rigid registration and predicts the deformation field.
4) The new nasopharyngeal carcinoma ROI image is registered with this deformation field, taking the template MRI image as the target, to obtain the ROI in the common domain of the normal-subject template image; a framework diagram of the registration process is shown in figure 3.
5) The voxel invasion rates of the 32 specific medical anatomical structures are computed from the registered nasopharyngeal carcinoma ROI and the template image, the invaded structures are tallied, and the structured report is generated.
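Step 5) reduces to a per-structure overlap count, which can be sketched directly. The structure names below are illustrative placeholders, not the patent's actual list of 32 anatomical structures, and the report wording is an assumption (the patent does not give its report template).

```python
import numpy as np

def invasion_rates(roi_mask, structure_masks):
    """Voxel invasion rate of a registered tumour ROI per structure.

    For each anatomical structure mask in template space, count the
    voxels overlapped by the ROI and divide by the structure's total
    voxel count, exactly as described in step 5).
    """
    roi = np.asarray(roi_mask, dtype=bool)
    rates = {}
    for name, mask in structure_masks.items():
        mask = np.asarray(mask, dtype=bool)
        total = int(mask.sum())
        rates[name] = float((roi & mask).sum()) / total if total else 0.0
    return rates

def structured_report(rates, threshold=0.0):
    """List invaded structures (rate above threshold) as report lines;
    the phrasing is an illustrative assumption."""
    return [f"{name}: invaded, voxel invasion rate {rate:.1%}"
            for name, rate in sorted(rates.items()) if rate > threshold]
```

For a toy 2 × 2 × 2 volume in which the ROI fills one half, a structure covering the whole volume gets an invasion rate of 50%.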
Most of the above concerns training the non-rigid registration model and the image registration process; in actual operation, a standardized nasopharyngeal carcinoma structured report is obtained automatically simply by inputting a nasopharyngeal carcinoma patient's MRI image.
Claims (2)
1. A method for automatically generating a structured report for nasopharyngeal carcinoma image diagnosis, characterized by comprising the following steps:
1) Establishing a non-rigid registration model of nasopharyngeal carcinoma based on a convolutional neural network: selecting a group of healthy-subject head MRI images and several groups of MRI images containing nasopharyngeal carcinoma, preprocessing them, and inputting them as learning samples into the non-rigid nasopharyngeal carcinoma registration model built from a convolutional neural network and a spatial transformation module for iterative training, obtaining the registration model once training is complete;
2) Predicting the deformation field registered to the template MRI image: inputting the patient's MRI image and the template MRI image into the registration model trained in step 1), performing one pass of non-rigid registration, and predicting a deformation field of the same size as the input, in which each point holds the pixel displacement components in x, y and z, yielding the model-predicted deformation field;
3) Registering the nasopharyngeal carcinoma ROI image to the template MRI image: resampling the corresponding patient's nasopharyngeal carcinoma ROI image with the deformation field obtained in step 2), registering the patient's ROI image into the space that takes the normal-subject template image as the common domain, and obtaining a registered nasopharyngeal carcinoma ROI image with the template image as the target;
4) Generating the structured report: using image processing, computing the overlapping voxels between the registered nasopharyngeal carcinoma ROI obtained in step 3) and each of 32 specific medical anatomical structures, dividing the overlap by each structure's total voxel count to obtain that structure's voxel invasion rate, and tallying the invaded structures to generate the structured report.
2. The method for automatically generating a structured report for nasopharyngeal carcinoma image diagnosis according to claim 1, characterized in that in step 1), the non-rigid registration model of nasopharyngeal carcinoma consists of a convolutional neural network and a spatial transformation module; the convolutional neural network comprises an encoder for extracting features, a decoder for restoring features, and skip-connection structures for fusing feature information; the selected healthy-subject MRI image serves as the template image and each MRI image containing nasopharyngeal carcinoma as a floating image; after preprocessing, each floating image is combined with the template image into a two-channel image pair, which is input into the convolutional neural network, and the corresponding transformation parameters are output after analysis; the spatial transformation module generates a deformation field from the transformation parameters and resamples the floating image to obtain the deformed floating image; the normalized cross-correlation between the deformed floating image and the template image gives the loss function value, and the network parameters are updated by backpropagation until the similarity measure is maximized, at which point model training is complete.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111429938.1A CN114119558B (en) | 2021-11-29 | 2021-11-29 | Method for automatically generating nasopharyngeal carcinoma image diagnosis structured report |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111429938.1A CN114119558B (en) | 2021-11-29 | 2021-11-29 | Method for automatically generating nasopharyngeal carcinoma image diagnosis structured report |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114119558A CN114119558A (en) | 2022-03-01 |
CN114119558B (en) | 2024-05-03 |
Family
ID=80370852
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111429938.1A Active CN114119558B (en) | 2021-11-29 | 2021-11-29 | Method for automatically generating nasopharyngeal carcinoma image diagnosis structured report |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114119558B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115132314B (en) * | 2022-09-01 | 2022-12-20 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Examination impression generation model training method, examination impression generation model training device and examination impression generation model generation method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019135234A1 (en) * | 2018-01-03 | 2019-07-11 | Ramot At Tel-Aviv University Ltd. | Systems and methods for the segmentation of multi-modal image data |
CN111128328A (en) * | 2019-10-25 | 2020-05-08 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | Nasopharyngeal carcinoma structured image report and data processing system and method |
CN111602174A (en) * | 2018-01-08 | 2020-08-28 | 普罗热尼奇制药公司 | System and method for rapidly segmenting images and determining radiopharmaceutical uptake based on neural network |
CN112419338A (en) * | 2020-12-08 | 2021-02-26 | 深圳大学 | Head and neck endangered organ segmentation method based on anatomical prior knowledge |
EP3855391A1 (en) * | 2020-01-23 | 2021-07-28 | GE Precision Healthcare LLC | Methods and systems for characterizing anatomical features in medical images |
CN113557714A (en) * | 2019-03-11 | 2021-10-26 | 佳能株式会社 | Medical image processing apparatus, medical image processing method, and program |
- 2021-11-29: application CN202111429938.1A granted as patent CN114119558B (active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019135234A1 (en) * | 2018-01-03 | 2019-07-11 | Ramot At Tel-Aviv University Ltd. | Systems and methods for the segmentation of multi-modal image data |
CN111602174A (en) * | 2018-01-08 | 2020-08-28 | 普罗热尼奇制药公司 | System and method for rapidly segmenting images and determining radiopharmaceutical uptake based on neural network |
CN113557714A (en) * | 2019-03-11 | 2021-10-26 | 佳能株式会社 | Medical image processing apparatus, medical image processing method, and program |
CN111128328A (en) * | 2019-10-25 | 2020-05-08 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | Nasopharyngeal carcinoma structured image report and data processing system and method |
EP3855391A1 (en) * | 2020-01-23 | 2021-07-28 | GE Precision Healthcare LLC | Methods and systems for characterizing anatomical features in medical images |
CN112419338A (en) * | 2020-12-08 | 2021-02-26 | 深圳大学 | Head and neck endangered organ segmentation method based on anatomical prior knowledge |
Non-Patent Citations (1)
Title |
---|
Research and Challenges of Deep Learning Methods for Medical Image Analysis; Tian Juanxiu, Liu Guocai, Gu Shanshan, Ju Zhongjian, Liu Jinguang, Gu Dongdong; Acta Automatica Sinica; 2018-03-15 (03); full text *
Also Published As
Publication number | Publication date |
---|---|
CN114119558A (en) | 2022-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111145170B (en) | Medical image segmentation method based on deep learning | |
CN112150425B (en) | Unsupervised intravascular ultrasound image registration method based on neural network | |
CN111784671B (en) | Pathological image focus region detection method based on multi-scale deep learning | |
CN111738363B (en) | Alzheimer disease classification method based on improved 3D CNN network | |
JP2023540910A (en) | Connected Machine Learning Model with Collaborative Training for Lesion Detection | |
CN112950639B (en) | SA-Net-based MRI medical image segmentation method | |
CN113223005B (en) | Thyroid nodule automatic segmentation and grading intelligent system | |
CN113393469A (en) | Medical image segmentation method and device based on cyclic residual convolutional neural network | |
CN111951288A (en) | Skin cancer lesion segmentation method based on deep learning | |
CN114092439A (en) | Multi-organ instance segmentation method and system | |
CN111161271A (en) | Ultrasonic image segmentation method | |
CN113744271A (en) | Neural network-based automatic optic nerve segmentation and compression degree measurement and calculation method | |
CN115578427A (en) | Unsupervised single-mode medical image registration method based on deep learning | |
CN115512110A (en) | Medical image tumor segmentation method related to cross-modal attention mechanism | |
CN117456183A (en) | Medical image segmentation method for multi-level feature extraction and attention mechanism fusion | |
CN114119558B (en) | Method for automatically generating nasopharyngeal carcinoma image diagnosis structured report | |
CN117274147A (en) | Lung CT image segmentation method based on mixed Swin Transformer U-Net | |
CN116823613A (en) | Multi-mode MR image super-resolution method based on gradient enhanced attention | |
CN116433654A (en) | Improved U-Net network spine integral segmentation method | |
CN117422788B (en) | Method for generating DWI image based on CT brain stem image | |
Schwarz et al. | A deformable registration method for automated morphometry of MRI brain images in neuropsychiatric research | |
CN114581459A (en) | Improved 3D U-Net model-based segmentation method for image region of interest of preschool child lung | |
CN113902738A (en) | Heart MRI segmentation method and system | |
CN114494289A (en) | Pancreatic tumor image segmentation processing method based on local linear embedded interpolation neural network | |
CN118037791A (en) | Construction method and application of multi-mode three-dimensional medical image segmentation registration model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||