CN115187577B - Automatic drawing method and system for breast cancer clinical target area based on deep learning - Google Patents

Automatic drawing method and system for breast cancer clinical target area based on deep learning

Info

Publication number
CN115187577B
Authority
CN
China
Prior art keywords
layer
image
mask
clinical target
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210937016.XA
Other languages
Chinese (zh)
Other versions
CN115187577A (en)
Inventor
邓秀文
蔡文培
江萍
赵红梅
王俊杰
贺树荫
赵紫婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lianying Intelligent Imaging Technology Research Institute
Peking University Third Hospital Peking University Third Clinical Medical College
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Original Assignee
Beijing Lianying Intelligent Imaging Technology Research Institute
Peking University Third Hospital Peking University Third Clinical Medical College
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Lianying Intelligent Imaging Technology Research Institute, Peking University Third Hospital Peking University Third Clinical Medical College, Shenzhen United Imaging Research Institute of Innovative Medical Equipment filed Critical Beijing Lianying Intelligent Imaging Technology Research Institute
Priority to CN202210937016.XA priority Critical patent/CN115187577B/en
Publication of CN115187577A publication Critical patent/CN115187577A/en
Application granted granted Critical
Publication of CN115187577B publication Critical patent/CN115187577B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03Computed tomography [CT]
    • A61B6/032Transmission computed tomography [CT]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/50Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B6/502Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of breast, i.e. mammography
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Multimedia (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Pulmonology (AREA)
  • Physiology (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

A breast cancer clinical target area automatic sketching method and system based on deep learning. The method includes: acquiring CT images of patients and corresponding clinical target area masks, and generating a training sample set based on the CT images and the corresponding clinical target area masks; classifying each layer of the CT images in the training sample set based on morphological differences to obtain the type of each layer in each CT image; training a classification neural network model based on each CT image in the sample set and the type of each layer in the image to obtain a CT image layer classification model; training a multi-channel neural network model based on each CT image in the sample set, the type of each layer in the CT image and the mask corresponding to the CT image to obtain a clinical target area segmentation model; inputting the CT image to be sketched into the CT image layer classification model to obtain the type of each layer of the CT image to be sketched; and inputting the CT image to be sketched into the clinical target area segmentation model according to the type of each layer to obtain the clinical target area mask of the CT image.

Description

Automatic drawing method and system for breast cancer clinical target area based on deep learning
Technical Field
The invention relates to the technical field of clinical target area sketching, in particular to a method and a system for automatically sketching a breast cancer clinical target area based on deep learning.
Background
Breast cancer is the most common malignancy in women and the leading cause of cancer death in women. In recent years the life expectancy of breast cancer patients has been extended, and radiation therapy has played an important role in this. The rapid development of computer and imaging technology over the past 20 years has driven radiation therapy from the two-dimensional era fully into the three-dimensional era. For patients after breast cancer surgery, sub-clinical lesions invisible to the naked eye may exist around the primary lesion and in the lymph node drainage areas, so most patients still need adjuvant radiotherapy after surgery; the three-dimensional region containing these sub-clinical lesions is defined as the clinical target volume (CTV) of radiotherapy. Accurate delineation of the CTV on localization CT images is a central task of the radiotherapy physician. A standard localization CT series typically contains tens of slices, and the physician needs to delineate the CTV on each slice individually, which is time-consuming and laborious. In addition, target areas delineated by doctors with different levels of experience differ greatly, which affects the treatment outcome.
In recent decades, with the development of machine learning, computer-aided target delineation methods have begun to be applied clinically. Existing automatic sketching methods mainly fall into two types. The first is the atlas-based approach, which requires the physician to select template images and target areas in advance as the atlas. When sketching, the doctor first selects the template image closest to the image to be sketched and registers it, and the target area of the template is then mapped onto the image to be sketched through a deformation-field matrix. The second is automatic target area sketching based on deep learning: a certain amount of image and target area data is collected in advance, an automatic sketching network model is trained on these data, and the trained model is used to sketch the target area automatically.
In breast cancer patients after mastectomy, the CTV includes the chest wall and the adjacent lymph node drainage areas. The target area is large, irregular in shape, and sensitive to body position. Atlas-based approaches are limited by the selected templates and the accuracy of registration. Existing deep-learning-based methods for sketching the breast cancer clinical target area mainly collect the segmentation contours or masks of the chest wall and of each lymph node drainage area sketched manually by doctors, train a separate model for each, and then merge the results into a complete CTV. The disadvantages are that data collection is difficult, sketching is inefficient, and the continuity of the overall target area is easily lost.
Disclosure of Invention
In view of the above analysis, the embodiment of the invention aims to provide a method and a system for automatically sketching a breast cancer clinical target area based on deep learning, which are used for solving the problems of low efficiency and low accuracy of the existing method.
In one aspect, the embodiment of the invention provides a method for automatically delineating a breast cancer clinical target area based on deep learning, which comprises the following steps:
acquiring a CT image of a patient and a corresponding clinical target area mask, and generating a training sample set based on the CT image and the corresponding clinical target area mask;
classifying each layer of the CT images in the training sample set based on the morphological difference to obtain the type of each layer in each CT image;
training a classification neural network model based on each CT image in the sample set and the type of each layer in the image to obtain a CT image layer classification model;
training a multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and the mask corresponding to the CT image to obtain a clinical target region segmentation model;
inputting the CT image to be sketched into a CT image layer classification model to obtain the type of each layer of the CT image to be sketched; and inputting the CT image to be sketched into a clinical target segmentation model according to the type of each layer of the CT image to be sketched to obtain a clinical target mask of the CT image.
Based on a further improvement of the above technical solution, classifying each slice of the CT image in the training sample set based on the morphological differences includes:
for each CT image, taking the first layer that has a corresponding mask as the first class; traversing each layer in sequence, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image;
if the center offset distance or the size change rate is greater than its threshold value, the current layer and the previous layer are of different types; otherwise, the current layer is of the same type as the previous layer.
Further, the dimensional change rate includes a length change rate and a width change rate;
calculating a center offset distance and a dimensional change rate between a current slice and a previous slice based on a clinical target area mask corresponding to the CT image, including:
calculating the minimum circumscribed rectangle of the mask corresponding to each layer of the CT image;
the center offset distance and the dimensional change rate between the current layer and the previous layer are calculated by adopting the following formulas:
Center offset distance = √((x_i − x_{i−1})² + (y_i − y_{i−1})²)
Length change rate = (l_i − l_{i−1})/l_i
Width change rate = (w_i − w_{i−1})/w_i
Wherein (x_i, y_i) denotes the center point coordinates of the minimum circumscribed rectangle of the mask corresponding to the current layer, (x_{i−1}, y_{i−1}) denotes the center point of the minimum circumscribed rectangle of the mask corresponding to the previous layer, l_i and w_i denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the current layer, and l_{i−1} and w_{i−1} denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the previous layer.
Further, the input channel number of the multichannel neural network model is the same as the layer type number;
training the multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and the mask corresponding to the CT image to obtain a target region segmentation model, wherein the training comprises the following steps:
inputting the layers of the same type in each CT image into the same input channel to train the multi-channel neural network model.
Further, the loss of the multi-channel neural network model is calculated using the following formula:
L = αL_dice + βL_ce
L_dice = 1 − (1/C) Σ_{k=1}^{C} [ 2 Σ_j P_{k,j} G_{k,j} / (Σ_j P_{k,j} + Σ_j G_{k,j}) ]
L_ce = −(1/C) Σ_{k=1}^{C} Σ_j [ G_{k,j} log P_{k,j} + (1 − G_{k,j}) log(1 − P_{k,j}) ]
wherein P_k denotes the mask matrix predicted by the model for the k-th layer type, G_k denotes the gold-standard mask matrix corresponding to the k-th layer type, the index j runs over the voxels of the mask matrices, C denotes the number of layer classes, α and β denote weight parameters, and L denotes the total loss of one sample.
Further, generating a training sample set based on the CT images and corresponding clinical target masks, comprising:
resampling the CT image and the corresponding clinical target area mask to standard voxels by adopting a linear interpolation method;
normalizing voxel values of the CT image;
and carrying out image segmentation on the normalized CT image to obtain a body mask, calculating the minimum circumscribed cuboid of the body mask, extracting the CT image within the minimum circumscribed cuboid and the corresponding clinical target area mask, and generating a training sample set.
Further, the multichannel neural network model comprises a multichannel convolution layer, a downsampling convolution module and an upsampling convolution module which are connected in sequence;
the multi-channel convolution layer is used for extracting multi-channel characteristic images from a plurality of input channels by convolution; the downsampling convolution module is used for extracting features of different levels from the multi-channel feature image, and the upsampling convolution module is used for upsampling the extracted features and outputting a segmentation result;
the downsampling convolution module comprises a plurality of downsampling convolution units, wherein each downsampling convolution unit comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a maximum pooling layer which are sequentially connected;
the up-sampling convolution module comprises a plurality of up-sampling convolution units, and each up-sampling convolution unit comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a transposed convolution layer which are sequentially connected.
On the other hand, the embodiment of the invention provides a breast cancer clinical target area automatic sketching system based on deep learning, which comprises the following modules:
the sample set generation module is used for acquiring a CT image of a patient and a corresponding clinical target area mask, and generating a training sample set based on the CT image and the corresponding clinical target area mask;
the layer classification module is used for classifying each layer of the CT images in the training sample set based on the morphological difference to obtain the type of each layer in each CT image;
the classification model training module is used for training the classification neural network model based on the type of each layer in each CT image in the sample set to obtain a CT image layer classification model;
the segmentation model training module is used for training the multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and the mask corresponding to the CT image to obtain a clinical target region segmentation model;
the automatic sketching module is used for inputting the CT image to be sketched into a CT image layer classification model to obtain the type of each layer of the CT image to be sketched; and inputting the CT image to be sketched into a clinical target segmentation model according to the type of each layer of the CT image to be sketched to obtain a clinical target mask of the CT image.
Further, the slice classification module classifies each slice of the CT images in the training sample set by:
for each CT image, taking the first layer that has a corresponding mask as the first class; traversing each layer in sequence, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image;
if the center offset distance or the size change rate is greater than its threshold value, the current layer and the previous layer are of different types; otherwise, the current layer is of the same type as the previous layer.
Further, the dimensional change rate includes a length change rate and a width change rate;
calculating a center offset distance and a dimensional change rate between a current slice and a previous slice based on a clinical target area mask corresponding to the CT image, including:
calculating the minimum circumscribed rectangle of the mask corresponding to each layer of the CT image;
the center offset distance and the dimensional change rate between the current layer and the previous layer are calculated by adopting the following formulas:
Center offset distance = √((x_i − x_{i−1})² + (y_i − y_{i−1})²)
Length change rate = (l_i − l_{i−1})/l_i
Width change rate = (w_i − w_{i−1})/w_i
Wherein (x_i, y_i) denotes the center point coordinates of the minimum circumscribed rectangle of the mask corresponding to the current layer, (x_{i−1}, y_{i−1}) denotes the center point of the minimum circumscribed rectangle of the mask corresponding to the previous layer, l_i and w_i denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the current layer, and l_{i−1} and w_{i−1} denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the previous layer.
Compared with the prior art, the invention classifies the layers of the CT images in the training set based on morphological differences. Only the overall breast cancer clinical target area of each CT image needs to be collected when samples are collected, without requiring doctors to delineate each sub-target area separately, which shortens sample processing time and improves efficiency. The layers of a CT image can be classified automatically by the trained classification model, and the multi-channel neural network model is trained based on the layer types, so that morphological characteristics are incorporated and the model segmentation is more accurate.
In the invention, the technical schemes can be mutually combined to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, like reference numerals being used to refer to like parts throughout the several views.
FIG. 1 is a flow chart of a method for automatically delineating a clinical target area of breast cancer based on deep learning according to an embodiment of the invention;
FIG. 2 is a block diagram of an embodiment of a deep learning based breast cancer clinical target automatic delineation system;
FIG. 3 is a schematic view of CT slice types according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a multi-channel neural network model according to an embodiment of the present invention;
FIG. 5 is a diagram of the layer-by-layer Dice index for different methods according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings, which form a part hereof, and together with the description serve to explain the principles of the invention, and are not intended to limit the scope of the invention.
The invention discloses a method for automatically sketching a clinical target area of breast cancer based on deep learning, which is shown in fig. 1 and comprises the following steps:
s1, acquiring a CT image of a patient and a corresponding clinical target area mask, and generating a training sample set based on the CT image and the corresponding clinical target area mask;
s2, classifying each layer of the CT images in the training sample set based on the morphological difference to obtain the type of each layer in each CT image;
s3, training a classification neural network model based on the type of each layer in each CT image in the sample set to obtain a CT image layer classification model;
s4, training a multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and the mask corresponding to the CT image to obtain a clinical target region segmentation model;
s5, inputting the CT image to be sketched into a CT image layer classification model to obtain the type of each layer of the CT image to be sketched; and inputting the CT image to be sketched into a clinical target segmentation model according to the type of each layer of the CT image to be sketched to obtain a clinical target mask of the CT image.
In the invention, the layers of the CT images in the training set are classified based on morphological differences. Only the overall breast cancer clinical target area of each CT image needs to be collected when samples are collected, without requiring doctors to delineate each sub-target area separately, which shortens sample processing time and improves efficiency. The layers of a CT image can be classified automatically by the trained classification model, and the multi-channel neural network model is trained based on the layer types, so that morphological characteristics are incorporated and the segmentation is more accurate.
Through the trained CT image layer classification model, layer-type classification can be performed directly on the CT image to be sketched, and automatic segmentation based on the layer types is more accurate. Meanwhile, the layer classification results can be fed back to the doctor, who can adjust the layer types directly according to experience, so that the sketch can be modified quickly. This enables interaction between the doctor and the automatic sketching model and improves the efficiency, interpretability and reliability of automatic sketching.
In implementation, a CT image of a patient and the corresponding clinical target area mask are acquired, wherein the clinical target area mask is the overall target area mask for radiotherapy after radical mastectomy. The CT image is a localization CT image, which is three-dimensional image data; the target area mask is three-dimensional image data of the same size as the CT image, in which each value labels the corresponding point of the CT image, for example, 0 indicating that the point does not belong to the target area and 1 indicating that it does. A layer of the CT image is a cross-sectional view of the CT image (a slice along the head-foot direction).
Specifically, in step S1, a training sample set is generated based on the CT image and the corresponding clinical target mask, including:
s11, resampling the CT image and the corresponding clinical target area mask to a standard voxel by adopting a linear difference method;
since there may be a problem of voxel inconsistency for different CT images, for more accurate segmentation, the CT image and the corresponding clinical target mask procedure are first resampled, normalizing the image voxels. In implementation, resampling is performed by adopting a linear difference method, and the standard voxel is the median of the voxel spacing of all CT sample data. Voxel spacing is the distance between two voxels of an image.
S12, normalizing voxel values of the CT image;
to facilitate data, voxel values of the CT image are normalized. Firstly, according to CT images in a sample set and corresponding clinical target area masks, voxel values of target area parts of the CT images are ordered according to ascending order, in order to eliminate the influence of extremum on normalization, 5 micrometer digits in a voxel value ordering sequence are taken as a lower limit lim_down of the voxel values, and 995 micrometer digits are taken as an upper limit lim_up of the voxel values.
The voxel values of the CT image are normalized according to the formula: normalized voxel value = (voxel value before normalization − lim_down)/(lim_up − lim_down).
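The resampling and normalization steps can be sketched as follows. This is an illustrative sketch only: the use of NumPy/SciPy, the function names, the pooling of target-region voxel values across all samples, and the clipping of extreme values are assumptions rather than details given above.

```python
import numpy as np
from scipy import ndimage


def resample(volume, spacing, target_spacing, order=1):
    """Resample a 3D volume to the standard voxel spacing; order=1 means linear interpolation."""
    zoom = [s / t for s, t in zip(spacing, target_spacing)]
    return ndimage.zoom(volume, zoom, order=order)


def voxel_limits(ct_list, mask_list):
    """lim_down / lim_up taken from the sorted target-region voxel values of the sample set
    (5th and 995th per-mille positions, i.e. the 0.5th and 99.5th percentiles)."""
    values = np.concatenate([ct[m > 0].ravel() for ct, m in zip(ct_list, mask_list)])
    return np.percentile(values, 0.5), np.percentile(values, 99.5)


def normalize_ct(ct, lim_down, lim_up):
    """Normalized voxel value = (voxel value - lim_down) / (lim_up - lim_down)."""
    ct = np.clip(ct, lim_down, lim_up)   # clipping of extremes is an added assumption
    return (ct - lim_down) / (lim_up - lim_down)
```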
S13, performing image segmentation on the normalized CT image to obtain a body mask, calculating a minimum circumscribed cuboid of the body mask, extracting the CT image in the minimum circumscribed cuboid and a corresponding clinical target area mask, and generating a training sample set.
In implementation, a threshold-based segmentation method is used to segment the normalized CT image to obtain a body mask, and a maximum-connected-domain algorithm is used to compute the three-dimensional maximum connected domain of the body mask. The minimum circumscribed cuboid of the body mask is then obtained from this maximum connected domain, and the CT image within the minimum circumscribed cuboid and the corresponding clinical target area mask are extracted, giving the preprocessed sample data from which the training sample set is generated.
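A possible implementation of this body-mask extraction and cropping step is sketched below; the threshold value and the helper names are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def crop_to_body(ct, ctv_mask, threshold=0.05):
    """Threshold-based body segmentation, largest 3D connected domain, and crop of the
    CT image and target-area mask to the minimum circumscribed cuboid of the body mask."""
    body = ct > threshold                      # threshold value is a placeholder assumption
    labels, num = ndimage.label(body)
    if num > 1:                                # keep the three-dimensional maximum connected domain
        sizes = ndimage.sum(body, labels, index=range(1, num + 1))
        body = labels == (np.argmax(sizes) + 1)
    zs, ys, xs = np.where(body)
    box = tuple(slice(v.min(), v.max() + 1) for v in (zs, ys, xs))
    return ct[box], ctv_mask[box], body[box]
```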
Specifically, in step S2, classifying each slice of the CT image in the training sample set based on the morphological difference includes:
for each CT image, taking the first layer that has a corresponding mask as the first class; traversing each layer in sequence, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image;
if the center offset distance or the size change rate is greater than its threshold value, the current layer and the previous layer are of different types; otherwise, the current layer is of the same type as the previous layer.
That is, for a CT image, the layers are traversed from the first to the last. The first layer that has a corresponding target area mask is assigned the first type, e.g., labeled morphology class 1, and the shape differences, i.e., the center offset distance and the size change rate, between the second layer and the first layer are calculated. If the center offset distance or the size change rate is larger than its preset threshold, the difference between the second layer and the first layer is large and the second layer is marked as morphology class 2; otherwise, the morphological difference is small and the second layer is also marked as morphology class 1. Each subsequent layer is classified and labeled in the same way. As shown in FIG. 3, the CT layers are classified into 4 classes according to morphological differences.
The layers of the CT image are classified by calculating the morphological differences among them, so that manual labeling by doctors is avoided and time is saved. In particular, for some small regions the error of manual sketching is large and affects the segmentation result; classifying according to morphological differences avoids labeling errors in these small regions and improves segmentation accuracy.
Specifically, the dimensional change rate includes a length change rate and a width change rate;
calculating a center offset distance and a dimensional change rate between a current slice and a previous slice based on a clinical target area mask corresponding to the CT image, including:
calculating the minimum circumscribed rectangle of the mask corresponding to each layer of the CT image;
the center offset distance and the dimensional change rate between the current layer and the previous layer are calculated by adopting the following formulas:
Center offset distance = √((x_i − x_{i−1})² + (y_i − y_{i−1})²)
Length change rate = (l_i − l_{i−1})/l_i
Width change rate = (w_i − w_{i−1})/w_i
Wherein (x_i, y_i) denotes the center point coordinates of the minimum circumscribed rectangle of the mask corresponding to the current layer, (x_{i−1}, y_{i−1}) denotes the center point of the minimum circumscribed rectangle of the mask corresponding to the previous layer, l_i and w_i denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the current layer, and l_{i−1} and w_{i−1} denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the previous layer.
The morphological difference is measured through the center offset distance, the length change rate and the width change rate. If any of these indices exceeds its threshold, the current layer and the previous layer are considered layers with a large morphological difference; otherwise they are judged to be layers with similar morphology. In implementation, the thresholds for the center offset distance, the length change rate and the width change rate can each be obtained by collecting statistics of the corresponding quantity between pairs of adjacent layers that belong to different sub-target areas within the clinical target area. For example, for the threshold of the center offset distance, the center offset distances between adjacent layers belonging to different sub-target areas are collected, outliers are removed, the mean and standard deviation are computed, and the threshold can be set to the mean plus three standard deviations.
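The layer classification described above can be sketched as follows; the use of the absolute value for the change rates and the representation of the thresholds as pre-computed constants are assumptions for illustration.

```python
import numpy as np


def bounding_rect(mask_slice):
    """Center, length and width of the minimum circumscribed rectangle of a 2D mask layer."""
    ys, xs = np.nonzero(mask_slice)
    if ys.size == 0:
        return None
    length, width = ys.max() - ys.min() + 1, xs.max() - xs.min() + 1
    cx, cy = (xs.min() + xs.max()) / 2.0, (ys.min() + ys.max()) / 2.0
    return cx, cy, length, width


def classify_layers(ctv_mask, dist_thr, len_thr, wid_thr):
    """Assign a morphology type to every layer that contains target-area voxels;
    thresholds are assumed to be pre-computed (e.g. mean + 3 standard deviations)."""
    types, layer_type, prev = {}, 1, None
    for i in range(ctv_mask.shape[0]):              # traverse layers along the head-foot direction
        cur = bounding_rect(ctv_mask[i])
        if cur is None:
            continue
        if prev is not None:
            cx, cy, l, w = cur
            px, py, pl, pw = prev
            offset = np.hypot(cx - px, cy - py)
            if (offset > dist_thr
                    or abs(l - pl) / l > len_thr    # absolute change rate used in this sketch
                    or abs(w - pw) / w > wid_thr):
                layer_type += 1                     # large morphological difference -> new class
        types[i] = layer_type                       # first layer with a mask is class 1
        prev = cur
    return types
```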
After the layers of each CT image in the sample set are classified, a classification neural network model is trained based on each CT image in the sample set and the type of each layer in the image to obtain the CT image layer classification model. In practice, the classification model may use a machine learning method such as random forest, XGBoost or AdaBoost, or may use a convolutional neural network. In this invention, the classification network is built from the encoder part of a U-Net followed by fully connected layers, and is trained with a cross-entropy loss.
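A minimal sketch of such a classification network in PyTorch is given below; the channel widths, depth and pooling head are illustrative assumptions rather than the exact architecture used.

```python
import torch.nn as nn


class LayerTypeClassifier(nn.Module):
    """Small U-Net-style convolutional encoder followed by fully connected layers."""
    def __init__(self, num_types=4, channels=(16, 32, 64)):
        super().__init__()
        blocks, in_ch = [], 1
        for out_ch in channels:                      # encoder: conv -> BN -> LeakyReLU -> pool
            blocks += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.LeakyReLU(inplace=True),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.encoder = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels[-1], num_types))

    def forward(self, x):                            # x: (batch, 1, H, W) CT layers
        return self.head(self.encoder(x))


criterion = nn.CrossEntropyLoss()                    # cross-entropy loss, as stated above
```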
Through the trained CT image layer classification model, layer-type classification can be performed directly on the CT image to be sketched, and automatic segmentation based on the layer types is more accurate. Meanwhile, the layer classification results can be fed back to the doctor, who can adjust the layer types directly according to experience, so that the target area can be modified quickly. This enables interaction between the doctor and the automatic sketching model and improves the efficiency, interpretability and reliability of automatic sketching.
After the type of each layer of the CT image has been obtained from the morphological differences, a multi-channel neural network model is trained based on each CT image in the sample set, the type of each layer in the CT image and the mask corresponding to the CT image, to obtain the clinical target area segmentation model.
Specifically, the number of input channels of the multichannel neural network model is the same as the number of layer types;
training a multichannel neural network model based on each CT image in a sample set, the type of each layer in the CT image and a mask corresponding to the CT image to obtain a target region segmentation model, wherein the training comprises the following steps:
inputting the layers of the same type in each CT image into the same input channel to train the multi-channel neural network model.
It should be noted that inputting the layers of the same type in each CT image into the same input channel means that each input channel only retains the layers of the corresponding type and sets the pixel values of all other layers to 0; the input data of every channel therefore have the same size as the original CT image. For example, if the first input channel corresponds to the layers of morphology type 1 and the second input channel corresponds to the layers of morphology type 2, then for one CT image sample the pixel values of the type-1 layers are kept unchanged and those of all other layers are set to 0 to obtain the input data of the first channel, while the pixel values of the type-2 layers are kept unchanged and those of all other layers are set to 0 to obtain the input data of the second channel.
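Building the multi-channel input can be sketched as follows, reusing the layer-type dictionary produced by the classification step (the helper names are assumptions):

```python
import numpy as np


def build_multichannel_input(ct, layer_types, num_types):
    """One channel per layer type: each channel keeps only the layers of its type and
    sets all other layers to 0, so every channel has the same size as the CT image."""
    channels = np.zeros((num_types,) + ct.shape, dtype=np.float32)
    for i, t in layer_types.items():        # layer_types: {layer index: type in 1..num_types}
        channels[t - 1, i] = ct[i]
    return channels                          # shape: (num_types, D, H, W)
```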
The built multichannel neural network model comprises a multichannel convolution layer, a downsampling convolution module and an upsampling convolution module which are connected in sequence;
the multi-channel convolution layer is used for extracting multi-channel characteristic images from a plurality of input channels by convolution; the multi-channel input data is concatenated by a multi-channel convolutional layer. The multi-channel convolution layer carries out convolution operation on the input data of each channel respectively, and carries out weighted addition on the convolution results of a plurality of channels to obtain a multi-channel characteristic image, wherein the weights of the channels are obtained through network training. Different channels are gradually assigned with different weights in back propagation, so that correlation between spatial information and morphological information of related positions is determined.
The downsampling convolution module is used for extracting features of different levels from the multi-channel feature image, and the upsampling convolution module is used for upsampling the extracted features to output segmentation results;
the downsampling convolution module comprises a plurality of downsampling convolution units, wherein each downsampling convolution unit comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a maximum pooling layer which are sequentially connected;
the up-sampling convolution module comprises a plurality of up-sampling convolution units, and each up-sampling convolution unit comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a transposed convolution layer which are sequentially connected.
As shown in FIG. 4, the downsampling convolution module includes 4 downsampling convolution units and the upsampling convolution module includes 4 upsampling convolution units, in one-to-one correspondence. Each upsampling convolution unit is connected both to the preceding upsampling convolution unit and to the downsampling convolution unit whose feature channels have the same dimension as the output of that preceding unit; the features extracted by the preceding upsampling convolution unit and the features extracted by the corresponding downsampling convolution unit are concatenated along the channel dimension, so that more scale and position information is retained and the segmentation is more accurate.
Using the LeakyReLU and batch normalization layers accelerates model convergence and alleviates the vanishing-gradient problem.
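The network structure described above can be sketched in PyTorch roughly as follows. The channel widths, the grouped convolution followed by a 1×1×1 fusion (one way of realizing a per-channel convolution with learned weighted addition), and the exact skip wiring are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DownUnit(nn.Module):
    """Downsampling unit: 3D convolution -> LeakyReLU -> batch normalization -> max pooling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv3d(in_ch, out_ch, 3, padding=1),
                                  nn.LeakyReLU(inplace=True),
                                  nn.BatchNorm3d(out_ch))
        self.pool = nn.MaxPool3d(2)

    def forward(self, x):
        f = self.conv(x)
        return f, self.pool(f)              # f is kept for the skip connection


class UpUnit(nn.Module):
    """Upsampling unit: 3D convolution -> LeakyReLU -> batch normalization -> transposed convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv3d(in_ch, out_ch, 3, padding=1),
                                  nn.LeakyReLU(inplace=True),
                                  nn.BatchNorm3d(out_ch))
        self.up = nn.ConvTranspose3d(out_ch, out_ch, 2, stride=2)

    def forward(self, x, skip=None):
        if skip is not None:                # concatenate encoder features along the channel dimension
            x = torch.cat([x, skip], dim=1)
        return self.up(self.conv(x))


class MultiChannelSegNet(nn.Module):
    """One input channel per layer type; spatial input size is assumed divisible by 16 in this sketch."""
    def __init__(self, num_types=4, base=16):
        super().__init__()
        # multi-channel convolution layer: convolve each input channel separately (grouped convolution),
        # then fuse the per-channel results with a learned 1x1x1 convolution (weighted addition)
        self.per_channel = nn.Conv3d(num_types, num_types * base, 3, padding=1, groups=num_types)
        self.fuse = nn.Conv3d(num_types * base, base, 1)
        self.down1, self.down2 = DownUnit(base, base * 2), DownUnit(base * 2, base * 4)
        self.down3, self.down4 = DownUnit(base * 4, base * 8), DownUnit(base * 8, base * 16)
        self.up4 = UpUnit(base * 16, base * 8)
        self.up3 = UpUnit(base * 8 + base * 16, base * 4)
        self.up2 = UpUnit(base * 4 + base * 8, base * 2)
        self.up1 = UpUnit(base * 2 + base * 4, base)
        self.out = nn.Conv3d(base + base * 2, num_types, 1)   # one output mask channel per layer type

    def forward(self, x):                   # x: (batch, num_types, D, H, W)
        x = self.fuse(self.per_channel(x))
        f1, x = self.down1(x)
        f2, x = self.down2(x)
        f3, x = self.down3(x)
        f4, x = self.down4(x)
        x = self.up4(x)                     # deepest level, no skip connection
        x = self.up3(x, f4)
        x = self.up2(x, f3)
        x = self.up1(x, f2)
        return self.out(torch.cat([x, f1], dim=1))
```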
The model loss is calculated from the prediction results of the multi-channel neural network model, and the model parameters are adjusted by back-propagation according to the loss. When the model loss is below the threshold and has stabilized, training is finished and the clinical target area segmentation model is obtained.
Specifically, the following formula is used to calculate the loss of the multichannel neural network model:
L = αL_dice + βL_ce
L_dice = 1 − (1/C) Σ_{k=1}^{C} [ 2 Σ_j P_{k,j} G_{k,j} / (Σ_j P_{k,j} + Σ_j G_{k,j}) ]
L_ce = −(1/C) Σ_{k=1}^{C} Σ_j [ G_{k,j} log P_{k,j} + (1 − G_{k,j}) log(1 − P_{k,j}) ]
Wherein P_k denotes the mask matrix predicted by the model for the k-th layer type, G_k denotes the gold-standard mask matrix corresponding to the k-th layer type, the index j runs over the voxels of the mask matrices, C denotes the number of layer classes, α and β denote weight parameters, and L denotes the total loss of one sample. The total training loss is obtained by averaging the total losses of all training samples, and the model parameters are adjusted accordingly.
Here L_dice denotes the Dice loss function, which measures the similarity between the mask predicted by the model and the gold-standard mask, and L_ce denotes the cross-entropy loss function, which measures whether the statistical distribution of the predicted mask is consistent with that of the gold-standard mask. Combining the Dice loss and the cross-entropy loss makes the loss calculation more accurate and improves prediction accuracy and training efficiency.
Note that the gold-standard mask matrix G_k corresponding to the k-th layer type is obtained from the clinical target area mask of the CT image by keeping the mask values of the layers of the k-th type and setting the mask values of all other layers to 0.
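A sketch of this loss in PyTorch is given below; the per-class Dice and cross-entropy terms are standard forms assumed from the description, with pred holding per-type probabilities and gold holding per-type gold-standard masks as floats.

```python
import torch.nn.functional as F


def combined_loss(pred, gold, alpha=1.0, beta=1.0, eps=1e-6):
    """Total loss L = alpha * L_dice + beta * L_ce, averaged over the C layer types.
    pred, gold: tensors of shape (batch, C, D, H, W); pred holds per-type probabilities."""
    dims = tuple(range(2, pred.dim()))                    # sum over the voxel dimensions
    inter = (pred * gold).sum(dim=dims)
    dice = (2 * inter + eps) / (pred.sum(dim=dims) + gold.sum(dim=dims) + eps)
    l_dice = (1 - dice).mean()                            # Dice loss averaged over classes and batch
    l_ce = F.binary_cross_entropy(pred, gold)             # voxel-wise cross-entropy loss
    return alpha * l_dice + beta * l_ce
```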
In order to use the sample data effectively and improve segmentation accuracy, the sample data may be augmented before training the multi-channel neural network model, for example: scaling the CT image and the corresponding mask by the same factor, with a scale range of 0.7-1.4; adjusting the contrast of the CT image with a gamma transformation; rotating the CT image and the corresponding mask by the same angle, for example 0-30 degrees, in each of the three directions of the three-dimensional space; and randomly cropping a region of interest of size [28, 256, 256] from the augmented data and mask, padding with 0 around the data if it is smaller than [28, 256, 256]. If the region of interest does not contain the target region, the cropping is repeated until it does.
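The augmentation can be sketched as follows; the gamma range, the single rotation plane and the helper structure are simplifying assumptions not specified above.

```python
import numpy as np
from scipy import ndimage


def augment(ct, mask, rng=np.random.default_rng()):
    """Scale, gamma-adjust and rotate the CT image and mask together."""
    s = rng.uniform(0.7, 1.4)                                # same scaling factor for image and mask
    ct, mask = ndimage.zoom(ct, s, order=1), ndimage.zoom(mask, s, order=0)
    ct = np.clip(ct, 0, 1) ** rng.uniform(0.8, 1.2)          # gamma transform of the CT contrast
    angle = rng.uniform(0, 30)                               # same rotation angle for image and mask
    ct = ndimage.rotate(ct, angle, axes=(1, 2), reshape=False, order=1)
    mask = ndimage.rotate(mask, angle, axes=(1, 2), reshape=False, order=0)
    return ct, mask


def random_crop(ct, mask, size=(28, 256, 256), rng=np.random.default_rng()):
    """Randomly crop a region of interest; pad with zeros if the data is smaller than the
    crop size, and repeat the crop until it contains target-region voxels."""
    pad = [(0, max(0, s - d)) for s, d in zip(size, ct.shape)]
    ct, mask = np.pad(ct, pad), np.pad(mask, pad)
    while True:
        start = [rng.integers(0, d - s + 1) for d, s in zip(ct.shape, size)]
        box = tuple(slice(st, st + s) for st, s in zip(start, size))
        if mask[box].any():
            return ct[box], mask[box]
```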
After the clinical target area segmentation model has been trained, the CT image to be sketched is input into the CT image layer classification model to obtain the type of each layer of the CT image to be sketched, and the CT image to be sketched is then input into the clinical target area segmentation model according to the type of each layer to obtain the clinical target area mask of the CT image. In implementation, if the voxels of the CT image to be sketched are inconsistent with the standard voxels or the voxel values are outside the normalized range, the CT image to be sketched is first preprocessed (normalization and so on) according to the procedure of steps S11-S13: the normalized CT image is segmented to obtain a body mask, the minimum circumscribed cuboid of the body mask is calculated, and the CT image within the minimum circumscribed cuboid is extracted to obtain the model input image corresponding to the CT image to be sketched. The model input image is input into the classification model for layer classification; according to the layer classification results, the corresponding CT layer images are input into the corresponding channels of the clinical target area segmentation model to obtain the segmentation result, i.e., the clinical target area mask; the maximum connected domain of the mask is kept to remove scattered points, giving the clinical target area mask corresponding to the model input image. This mask is then resampled to the voxel size of the original CT image and restored to the size of the original CT image according to the size of the body mask, so as to obtain the clinical target area mask corresponding to the original CT image to be sketched.
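The inference flow can be sketched as follows, assuming the trained models and the helpers from the previous sketches; the 0.5 probability threshold and the merging of the per-type outputs by taking the maximum are assumptions.

```python
import numpy as np
import torch
from scipy import ndimage


def delineate(vol, classifier, seg_model, num_types=4):
    """vol is assumed already resampled, normalized and cropped to the body cuboid."""
    with torch.no_grad():
        slices = torch.from_numpy(vol).float().unsqueeze(1)          # (D, 1, H, W)
        types = classifier(slices).argmax(dim=1) + 1                 # predicted type (1..C) per layer
        channels = np.zeros((num_types,) + vol.shape, dtype=np.float32)
        for i, t in enumerate(types.tolist()):                       # route each layer to its channel
            channels[t - 1, i] = vol[i]
        x = torch.from_numpy(channels).unsqueeze(0)                  # (1, C, D, H, W)
        prob = torch.sigmoid(seg_model(x))[0].max(dim=0).values.numpy()
    ctv = prob > 0.5
    labels, num = ndimage.label(ctv)                                 # keep the largest connected domain
    if num > 1:
        sizes = ndimage.sum(ctv, labels, index=range(1, num + 1))
        ctv = labels == (np.argmax(sizes) + 1)
    return ctv.astype(np.uint8)
```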
The effect of the clinical target segmentation model of the present invention is described below by experimental data.
FIG. 5 shows how the layer-by-layer Dice index varies along the head-foot direction for the two-dimensional U-Net model, the three-dimensional U-Net model and the breast cancer clinical target area automatic sketching method provided by the invention. The Dice values of the proposed method along the head-foot direction are closer to 1, and the improvement for the layers along the head-foot direction is particularly obvious.
The Dice index takes values in [0, 1] and reflects the degree of overlap between two input binary images (usually a prediction mask and a gold-standard mask); the higher the overlap, the better the result.
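For reference, the Dice index between a prediction mask and a gold-standard mask can be computed as follows:

```python
import numpy as np


def dice_index(pred_mask, gold_mask):
    """Dice coefficient between two binary masks; 1.0 means perfect overlap."""
    pred, gold = pred_mask.astype(bool), gold_mask.astype(bool)
    denom = pred.sum() + gold.sum()
    return 2.0 * np.logical_and(pred, gold).sum() / denom if denom else 1.0
```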
Table 1 lists the mean DICE value, the 95% Hausdorff distance and the mean surface distance between the predicted masks and the gold-standard masks (i.e. the original masks sketched by doctors) obtained by the different methods. Compared with the two-dimensional U-Net and the three-dimensional U-Net, the automatic sketching method of the invention achieves a larger mean DICE value and smaller 95% Hausdorff distance and mean surface distance, as well as smaller standard deviations, which shows that it performs better and more stably.
TABLE 1 Mean DICE values, 95% Hausdorff distances, and mean surface distances for different methods
In one embodiment of the invention, an automatic drawing system for a breast cancer clinical target area based on deep learning is disclosed, as shown in fig. 2, and comprises the following modules:
the sample set generation module is used for acquiring a CT image of a patient and a corresponding clinical target area mask, and generating a training sample set based on the CT image and the corresponding clinical target area mask;
the layer classification module is used for classifying each layer of the CT images in the training sample set based on the morphological difference to obtain the type of each layer in each CT image;
the classification model training module is used for training the classification neural network model based on the type of each layer in each CT image in the sample set to obtain a CT image layer classification model;
the segmentation model training module is used for training the multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and the mask corresponding to the CT image to obtain a clinical target region segmentation model;
the automatic sketching module is used for inputting the CT image to be sketched into a CT image layer classification model to obtain the type of each layer of the CT image to be sketched; and inputting the CT image to be sketched into a clinical target segmentation model according to the type of each layer of the CT image to be sketched to obtain a clinical target mask of the CT image.
Preferably, the slice classification module classifies each slice of the CT images in the training sample set by:
for each CT image, taking the first layer that has a corresponding mask as the first class; traversing each layer in sequence, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image;
if the center offset distance or the size change rate is greater than its threshold value, the current layer and the previous layer are of different types; otherwise, the current layer is of the same type as the previous layer.
Preferably, the dimensional change rate includes a length change rate and a width change rate;
calculating a center offset distance and a dimensional change rate between a current slice and a previous slice based on a clinical target area mask corresponding to the CT image, including:
calculating the minimum circumscribed rectangle of the mask corresponding to each layer of the CT image;
the center offset distance and the dimensional change rate between the current layer and the previous layer are calculated by adopting the following formulas:
Center offset distance = √((x_i − x_{i−1})² + (y_i − y_{i−1})²)
Length change rate = (l_i − l_{i−1})/l_i
Width change rate = (w_i − w_{i−1})/w_i
Wherein (x_i, y_i) denotes the center point coordinates of the minimum circumscribed rectangle of the mask corresponding to the current layer, (x_{i−1}, y_{i−1}) denotes the center point of the minimum circumscribed rectangle of the mask corresponding to the previous layer, l_i and w_i denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the current layer, and l_{i−1} and w_{i−1} denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the previous layer.
The method embodiment and the system embodiment are based on the same principle, and the related parts can be mutually referred to and can achieve the same technical effect. The specific implementation process refers to the foregoing embodiment, and will not be described herein.
Those skilled in the art will appreciate that all or part of the flow of the methods of the embodiments described above may be accomplished by way of a computer program to instruct associated hardware, where the program may be stored on a computer readable storage medium. Wherein the computer readable storage medium is a magnetic disk, an optical disk, a read-only memory or a random access memory, etc.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention.

Claims (8)

1. The automatic drawing method for the breast cancer clinical target area based on deep learning is characterized by comprising the following steps of:
acquiring a CT image of a patient and a corresponding clinical target area mask, and generating a training sample set based on the CT image and the corresponding clinical target area mask;
classifying each layer of the CT images in the training sample set based on the morphological difference to obtain the type of each layer in each CT image;
training a classification neural network model based on each CT image in the sample set and the type of each layer in the image to obtain a CT image layer classification model;
training a multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and the mask corresponding to the CT image to obtain a clinical target region segmentation model;
inputting the CT image to be sketched into a CT image layer classification model to obtain the type of each layer of the CT image to be sketched; inputting the CT image to be sketched into a clinical target area segmentation model according to the type of each layer of the CT image to be sketched to obtain a clinical target area mask of the CT image;
classifying each slice of the CT image in the training sample set based on the morphology differences, comprising:
for each CT image, taking the first layer that has a corresponding mask as the first class; traversing each layer in sequence, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image;
if the center offset distance or the size change rate is greater than its threshold value, the current layer and the previous layer are of different types; otherwise, the current layer is of the same type as the previous layer.
2. The method for automatically delineating a clinical target area of breast cancer based on deep learning of claim 1, wherein the dimensional change rate comprises a length change rate and a width change rate;
calculating a center offset distance and a dimensional change rate between a current slice and a previous slice based on a clinical target area mask corresponding to the CT image, including:
calculating the minimum circumscribed rectangle of the mask corresponding to each layer of the CT image;
the center offset distance and the dimensional change rate between the current layer and the previous layer are calculated by adopting the following formulas:
Center offset distance = √((x_i − x_{i−1})² + (y_i − y_{i−1})²)
Length change rate = (l_i − l_{i−1})/l_i
Width change rate = (w_i − w_{i−1})/w_i
Wherein (x_i, y_i) denotes the center point coordinates of the minimum circumscribed rectangle of the mask corresponding to the current layer, (x_{i−1}, y_{i−1}) denotes the center point of the minimum circumscribed rectangle of the mask corresponding to the previous layer, l_i and w_i denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the current layer, and l_{i−1} and w_{i−1} denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the previous layer.
3. The method for automatically delineating a breast cancer clinical target area based on deep learning according to claim 1, wherein the number of input channels of the multi-channel neural network model is the same as the number of layer types;
training the multi-channel neural network model based on each CT image in the sample set, the type of each layer in the CT image and the mask corresponding to the CT image to obtain the clinical target area segmentation model comprises:
inputting layers of the same type in each CT image into the same input channel to train the multi-channel neural network model.
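One plausible reading of "layers of the same type feed the same input channel" is to stack one channel per layer type. The sketch below is an assumption about how that routing could be arranged, not the patented data layout; the function name build_multichannel_input and the zero-filling of unmatched voxels are illustrative choices.

```python
import numpy as np


def build_multichannel_input(ct_volume, layer_labels, num_types):
    """Arrange CT layers into one input channel per layer type.

    ct_volume: (D, H, W) array of CT voxels.
    layer_labels: {layer_index: type_index}, e.g. from classify_layers above.
    Returns a (num_types, D, H, W) array in which channel k contains only the
    layers labelled as type k; all other voxels remain zero.
    """
    channels = np.zeros((num_types,) + ct_volume.shape, dtype=ct_volume.dtype)
    for idx, label in layer_labels.items():
        channels[label, idx] = ct_volume[idx]
    return channels
```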
4. The method for automatically delineating a breast cancer clinical target area based on deep learning according to claim 1, wherein the loss of the multi-channel neural network model is calculated by adopting the following formula:
L = αL_dice + βL_ce
wherein L_dice denotes the Dice loss term and L_ce denotes the cross-entropy loss term computed over the predicted and gold-standard masks, P_k denotes the mask matrix predicted by the model for the k-th layer type, G_k denotes the gold-standard mask matrix corresponding to the k-th layer type, C denotes the number of layer classes, α and β denote weight parameters, and L denotes the total loss of the sample.
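A hedged sketch of this weighted loss is given below. The exact Dice and cross-entropy expressions appear only in the patent figures, so the snippet falls back on the common soft multi-class Dice and voxel-wise cross-entropy formulations; the PyTorch framework, the (N, C, D, H, W) tensor layout and the epsilon smoothing are assumptions.

```python
import torch
import torch.nn.functional as F


def combined_loss(pred_logits, target_onehot, alpha=1.0, beta=1.0, eps=1e-6):
    """alpha * Dice loss + beta * cross-entropy over C layer-type channels.

    pred_logits, target_onehot: tensors of shape (N, C, D, H, W).
    The Dice term follows the common soft formulation; the patent's exact
    normalisation may differ.
    """
    probs = torch.softmax(pred_logits, dim=1)
    dims = (0, 2, 3, 4)                                   # sum over batch and space
    intersection = (probs * target_onehot).sum(dim=dims)  # per-class overlap
    union = probs.sum(dim=dims) + target_onehot.sum(dim=dims)
    l_dice = 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()
    l_ce = F.cross_entropy(pred_logits, target_onehot.argmax(dim=1))
    return alpha * l_dice + beta * l_ce
```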
5. The method for automatically delineating a breast cancer clinical target area based on deep learning of claim 1, wherein generating the training sample set based on the CT image and the corresponding clinical target area mask comprises:
resampling the CT image and the corresponding clinical target area mask to standard voxels by adopting a linear interpolation method;
normalizing voxel values of the CT image;
performing image segmentation on the normalized CT image to obtain a body mask, calculating the minimum circumscribed cuboid of the body mask, and extracting the CT image and the corresponding clinical target area mask within the minimum circumscribed cuboid to generate the training sample set.
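A preprocessing pipeline along these lines might look as follows. The target spacing, the HU threshold used to separate the body from air, and the largest-connected-component step are assumptions not stated in the claim.

```python
import numpy as np
from scipy import ndimage


def preprocess(ct, mask, spacing, target_spacing=(1.0, 1.0, 1.0), body_threshold=-300.0):
    """Resample, normalise and crop a CT volume and its clinical target area mask.

    ct, mask: (D, H, W) arrays; spacing: original voxel spacing per axis.
    All numeric defaults are illustrative placeholders.
    """
    # Resample to the standard voxel size with linear interpolation
    # (nearest-neighbour for the mask so labels stay binary).
    zoom = np.array(spacing, dtype=float) / np.array(target_spacing, dtype=float)
    ct = ndimage.zoom(ct, zoom, order=1)
    mask = ndimage.zoom(mask, zoom, order=0)

    # Normalise voxel values to zero mean and unit variance.
    ct_norm = (ct - ct.mean()) / (ct.std() + 1e-6)

    # Segment the body by thresholding (in HU, as a simplification) and
    # keeping the largest connected component.
    body = ct > body_threshold
    labeled, n = ndimage.label(body)
    if n > 1:
        sizes = ndimage.sum(body, labeled, range(1, n + 1))
        body = labeled == (np.argmax(sizes) + 1)

    # Crop to the minimum circumscribed cuboid of the body mask.
    coords = np.argwhere(body)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    region = tuple(slice(a, b) for a, b in zip(lo, hi))
    return ct_norm[region], mask[region]
```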
6. The method for automatically delineating a breast cancer clinical target area based on deep learning according to claim 1, wherein the multi-channel neural network model comprises a multi-channel convolution layer, a downsampling convolution module and an upsampling convolution module which are connected in sequence;
the multi-channel convolution layer is used for extracting a multi-channel feature image from a plurality of input channels by convolution; the downsampling convolution module is used for extracting features of different levels from the multi-channel feature image; and the upsampling convolution module is used for upsampling the extracted features and outputting a segmentation result;
the downsampling convolution module comprises a plurality of downsampling convolution units, each of which comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a maximum pooling layer which are connected in sequence; and
the upsampling convolution module comprises a plurality of upsampling convolution units, each of which comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a transposed convolution layer which are connected in sequence.
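Claim 6 fixes only the ordering of layers inside each convolution unit; the sketch below fills in the remaining choices. The channel widths, network depth, kernel sizes and the absence of skip connections are assumptions, and PyTorch is used only for illustration.

```python
import torch.nn as nn


class DownUnit(nn.Module):
    """3D convolution -> LeakyReLU -> batch normalization -> max pooling."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(inplace=True),
            nn.BatchNorm3d(out_ch),
            nn.MaxPool3d(kernel_size=2),
        )

    def forward(self, x):
        return self.block(x)


class UpUnit(nn.Module):
    """3D convolution -> LeakyReLU -> batch normalization -> transposed convolution."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(inplace=True),
            nn.BatchNorm3d(out_ch),
            nn.ConvTranspose3d(out_ch, out_ch, kernel_size=2, stride=2),
        )

    def forward(self, x):
        return self.block(x)


class MultiChannelSegNet(nn.Module):
    """Minimal encoder-decoder with a multi-channel convolutional stem.

    Expects input of shape (N, num_layer_types, D, H, W) with D, H, W
    divisible by 4; the depth of two down/up units is an assumption.
    """

    def __init__(self, num_layer_types, num_classes, base=16):
        super().__init__()
        self.stem = nn.Conv3d(num_layer_types, base, kernel_size=3, padding=1)
        self.down1 = DownUnit(base, base * 2)
        self.down2 = DownUnit(base * 2, base * 4)
        self.up1 = UpUnit(base * 4, base * 2)
        self.up2 = UpUnit(base * 2, base)
        self.head = nn.Conv3d(base, num_classes, kernel_size=1)

    def forward(self, x):
        x = self.stem(x)
        x = self.down2(self.down1(x))
        x = self.up2(self.up1(x))
        return self.head(x)
```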
7. An automatic delineation system for a breast cancer clinical target area based on deep learning, characterized by comprising the following modules:
the sample set generation module is used for acquiring a CT image of a patient and a corresponding clinical target area mask, and generating a training sample set based on the CT image and the corresponding clinical target area mask;
the layer classification module is used for classifying each layer of the CT images in the training sample set based on morphological differences to obtain the type of each layer in each CT image;
the classification model training module is used for training a classification neural network model based on each CT image in the sample set and the type of each layer in the image, to obtain a CT image layer classification model;
the segmentation model training module is used for training a multi-channel neural network model based on each CT image in the sample set, the type of each layer in the CT image and the mask corresponding to the CT image, to obtain a clinical target area segmentation model;
the automatic delineation module is used for inputting a CT image to be delineated into the CT image layer classification model to obtain the type of each layer of the CT image to be delineated, and inputting the CT image to be delineated into the clinical target area segmentation model according to the type of each layer of the CT image to be delineated, to obtain a clinical target area mask of the CT image;
the layer classification module classifies each layer of the CT images in the training sample set by adopting the following steps:
for each CT image, taking the first layer that has a corresponding mask as the first class; traversing the subsequent layers in sequence, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image; and
if the center offset distance or the size change rate is greater than a threshold value, the current layer and the previous layer are of different types; otherwise, the current layer is of the same type as the previous layer.
8. The automatic delineation system for a breast cancer clinical target area based on deep learning of claim 7, wherein the size change rate comprises a length change rate and a width change rate;
wherein calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image comprises:
calculating the minimum circumscribed rectangle of the mask corresponding to each layer of the CT image; and
calculating the center offset distance and the size change rate between the current layer and the previous layer by adopting the following formulas:
center offset distance = \sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2}
length change rate = (l_i - l_{i-1}) / l_i
width change rate = (w_i - w_{i-1}) / w_i
wherein (x_i, y_i) denotes the center point coordinates of the minimum circumscribed rectangle of the mask corresponding to the current layer, (x_{i-1}, y_{i-1}) denotes the center point of the minimum circumscribed rectangle of the mask corresponding to the previous layer, l_i and w_i denote the length and width of that rectangle for the current layer, and l_{i-1} and w_{i-1} denote the length and width of that rectangle for the previous layer.
CN202210937016.XA 2022-08-05 2022-08-05 Automatic drawing method and system for breast cancer clinical target area based on deep learning Active CN115187577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210937016.XA CN115187577B (en) 2022-08-05 2022-08-05 Automatic drawing method and system for breast cancer clinical target area based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210937016.XA CN115187577B (en) 2022-08-05 2022-08-05 Automatic drawing method and system for breast cancer clinical target area based on deep learning

Publications (2)

Publication Number Publication Date
CN115187577A CN115187577A (en) 2022-10-14
CN115187577B true CN115187577B (en) 2023-05-09

Family

ID=83520846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210937016.XA Active CN115187577B (en) 2022-08-05 2022-08-05 Automatic drawing method and system for breast cancer clinical target area based on deep learning

Country Status (1)

Country Link
CN (1) CN115187577B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115409837B (en) * 2022-11-01 2023-02-17 北京大学第三医院(北京大学第三临床医学院) Endometrial cancer CTV automatic delineation method based on multi-modal CT image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017185A (en) * 2020-10-30 2020-12-01 平安科技(深圳)有限公司 Focus segmentation method, device and storage medium
CN114722925A (en) * 2022-03-22 2022-07-08 北京安德医智科技有限公司 Lesion classification device and nonvolatile computer readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB202007256D0 (en) * 2020-05-15 2020-07-01 Univ Oxford Innovation Ltd Functional imaging features from computed tomography images
CN111951276A (en) * 2020-07-28 2020-11-17 上海联影智能医疗科技有限公司 Image segmentation method and device, computer equipment and storage medium
CN112132917A (en) * 2020-08-27 2020-12-25 盐城工学院 Intelligent diagnosis method for rectal cancer lymph node metastasis
CN114202545A (en) * 2020-08-27 2022-03-18 东北大学秦皇岛分校 UNet + + based low-grade glioma image segmentation method
CN112862808A (en) * 2021-03-02 2021-05-28 王建 Deep learning-based interpretability identification method of breast cancer ultrasonic image
CN113706441A (en) * 2021-03-15 2021-11-26 腾讯科技(深圳)有限公司 Image prediction method based on artificial intelligence, related device and storage medium
CN113288193B (en) * 2021-07-08 2022-04-01 广州柏视医疗科技有限公司 Automatic delineation system of CT image breast cancer clinical target area based on deep learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017185A (en) * 2020-10-30 2020-12-01 平安科技(深圳)有限公司 Focus segmentation method, device and storage medium
CN114722925A (en) * 2022-03-22 2022-07-08 北京安德医智科技有限公司 Lesion classification device and nonvolatile computer readable storage medium

Also Published As

Publication number Publication date
CN115187577A (en) 2022-10-14

Similar Documents

Publication Publication Date Title
CN112270660B (en) Nasopharyngeal carcinoma radiotherapy target area automatic segmentation method based on deep neural network
CN109544510B (en) Three-dimensional lung nodule identification method based on convolutional neural network
US8199985B2 (en) Automatic interpretation of 3-D medicine images of the brain and methods for producing intermediate results
CN111311592A (en) Three-dimensional medical image automatic segmentation method based on deep learning
CN105719278B (en) A kind of medical image cutting method based on statistics deformation model
CN110310281A (en) Lung neoplasm detection and dividing method in a kind of Virtual Medical based on Mask-RCNN deep learning
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
CN106683104B (en) Prostate Magnetic Resonance Image Segmentation method based on integrated depth convolutional neural networks
CN107077736A (en) System and method according to the Image Segmentation Methods Based on Features medical image based on anatomic landmark
CN109906470A (en) Use the image segmentation of neural network method
CN109859184B (en) Real-time detection and decision fusion method for continuously scanning breast ultrasound image
CN109300136B (en) Automatic segmentation method for organs at risk based on convolutional neural network
CN107680107B (en) Automatic segmentation method of diffusion tensor magnetic resonance image based on multiple maps
CN111008984A (en) Method and system for automatically drawing contour line of normal organ in medical image
CN109410188A (en) System and method for being split to medical image
CN110866905B (en) Rib recognition and labeling method
CN111862021B (en) Deep learning-based automatic head and neck lymph node and drainage area delineation method
CN111028914A (en) Artificial intelligence guided dose prediction method and system
CN110619641A (en) Automatic segmentation method of three-dimensional breast cancer nuclear magnetic resonance image tumor region based on deep learning
CN110120048A (en) In conjunction with the three-dimensional brain tumor image partition method for improving U-Net and CMF
CN112085113B (en) Severe tumor image recognition system and method
Maitra et al. Detection of abnormal masses using divide and conquer algorithmin digital mammogram
CN115187577B (en) Automatic drawing method and system for breast cancer clinical target area based on deep learning
Chen et al. MAU-Net: Multiple attention 3D U-Net for lung cancer segmentation on CT images
CN110728685B (en) Brain tissue segmentation method based on diagonal voxel local binary pattern texture operator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant