CN111862021B - Deep learning-based automatic head and neck lymph node and drainage area delineation method - Google Patents

Deep learning-based automatic head and neck lymph node and drainage area delineation method

Info

Publication number
CN111862021B
CN111862021B (application CN202010670160.2A)
Authority
CN
China
Prior art keywords
lymph
drainage area
deep learning
image
head
Prior art date
Legal status
Active
Application number
CN202010670160.2A
Other languages
Chinese (zh)
Other versions
CN111862021A (en)
Inventor
孙颖
陆遥
林丽
陈海斌
Current Assignee
Perception Vision Medical Technology Co., Ltd.
Original Assignee
Perception Vision Medical Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Perception Vision Medical Technology Co., Ltd.
Priority to CN202010670160.2A
Publication of CN111862021A
Application granted
Publication of CN111862021B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20092: Interactive image processing based on input by user
    • G06T 2207/20104: Interactive definition of region of interest [ROI]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing

Abstract

An embodiment of the invention provides a deep learning-based method for automatically delineating head and neck lymph nodes and lymph drainage areas. The method exploits the symmetry of the human body to split the head region into left and right halves for training and prediction, which indirectly increases the amount of training data while reducing the size of the deep learning model. A stepwise delineation-and-refinement strategy is used: a deep learning model first segments the lymph drainage area, which is comparatively easy to segment, and a multi-task deep learning model then refines the drainage area and segments the lymph nodes, making full use of the correlation between the drainage area and the lymph nodes and improving the segmentation accuracy of both. By embedding this artificial intelligence (AI) assisted contour delineation method in the radiation therapy planning workflow, the invention can effectively improve the work efficiency of medical staff and the consistency of delineation, and improve the accuracy of radiotherapy for head and neck tumors.

Description

Deep learning-based automatic head and neck lymph node and drainage area delineation method
Technical Field
The invention relates to the field of medical images, in particular to a deep learning-based automatic head and neck lymph node and drainage area delineation method.
Background
Because the lymphatic network is widely distributed beneath the mucosa of the head and neck, the risk of regional lymph node metastasis in head and neck tumors is high, and the probability of occult metastasis reaches 30%. In clinical radiotherapy for patients with head and neck tumors, in addition to high-dose irradiation of the primary tumor (GTV), irradiation of metastatic lymph nodes (GTVn) and prophylactic irradiation of the surrounding lymph drainage areas (CTVn) are required. Radiotherapy is currently one of the main treatment modalities for head and neck tumors; the head and neck contain many important and delicate structures, so precise radiotherapy has important clinical significance for patients' quality of life after treatment and is an urgent clinical need. Contour delineation is therefore the basic guarantee of accurate radiotherapy: its accuracy directly determines the reliability of the radiotherapy dose distribution, and incorrect delineation may lead to serious radiotherapy accidents that endanger patient safety. Lymph drainage areas show relatively good contrast and boundaries on CT, but lymph nodes are difficult to distinguish on planning CT or contrast-enhanced CT and are particularly easy to confuse with adjacent muscles and blood vessels, leading to incorrect or missed delineation.
At present, the boundaries of metastatic lymph nodes and lymph drainage areas are delineated clinically either manually by physicians or with deep learning-based image segmentation methods. Manual delineation is inefficient and poorly reproducible, and depends heavily on the experience of the delineating physician and on the delineation guidelines used as reference. In addition, the risk of lymph node metastasis differs greatly between tumor types and between tumor locations in the head and neck, which makes the clinical delineation of metastatic lymph nodes even more challenging.
Conventional deep learning-based image segmentation methods rely on image annotation and training with large amounts of data, whereas clinically collected data usually suffer from incomplete annotation and inconsistent annotation schemes, so the amount of training data that is actually usable is very small. As a result, existing deep learning-based techniques cannot support automatic delineation of low-contrast, irregularly distributed lymph nodes. In addition, the head and neck lymph drainage areas span a large part of the image, and their delineation depends on inter-slice image information, so training a segmentation model on such large three-dimensional inputs is not feasible on typical hardware configurations.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
To address the problems in the prior art, an embodiment of the invention provides a deep learning-based method for automatically delineating head and neck lymph nodes and drainage areas.
The embodiment of the invention provides a deep learning-based method for automatically delineating head and neck lymph nodes and drainage areas, comprising the following steps:
Step (1): retrospectively collect planning CT image data of head and neck tumor patients acquired before radiotherapy, and preprocess the planning CT image data;
Step (2): extract the delineated contours of the lymph drainage areas and lymph nodes drawn clinically on the retrospectively collected planning CT images, assign the region inside each contour the value 1 and the region outside the value 0, and obtain a binary mask image of each organ at risk;
Step (3): identify the patient's head region in the planning CT image to obtain a binary mask image of the head; compute the centerline of the head region along the left-right direction as the dividing line between the left and right head regions, and split the planning CT image and the binary mask images of the lymph drainage areas and lymph nodes into left and right halves;
Step (4): crop the split planning CT images and the binary mask images of the lymph drainage areas and lymph nodes in three dimensions, extracting regions of interest in the left-right, anterior-posterior and head-foot directions based on the lymph drainage area delineation;
Step (5): perform five-fold cross training and validation of deep learning model A using the extracted, normalized planning CT images within the regions of interest and the binary mask images of the lymph drainage areas, to obtain a trained lymph drainage area segmentation model A and validation delineation results of the lymph drainage areas for all training cases;
Step (6): take the lymph drainage area segmentation results from step (5) and the planning CT images within the regions of interest as the input of deep learning model B and the binary mask images of the lymph drainage areas and lymph nodes as its output, and train the multi-task deep learning model B to obtain a trained lymph node and lymph drainage area segmentation optimization model B;
Step (7): acquire planning CT image data of a new head and neck tumor patient, preprocess it according to steps (1) to (4) to obtain the planning CT images within the left and right regions of interest, and feed them separately into the trained lymph drainage area segmentation model A to obtain preliminary segmentation results of the left and right lymph drainage areas;
Step (8): feed the planning CT images within the left and right regions of interest obtained in step (7), together with the preliminary segmentation results of the left and right lymph drainage areas, into the trained lymph node and lymph drainage area segmentation optimization model B to obtain the segmentation results of the left and right lymph nodes and lymph drainage areas.
Further, the preprocessing of the pre-radiotherapy planning CT image data of head and neck tumor patients in step (1) is as follows: the image CT values are truncated to the range [-150, 250] HU and then normalized to the range [-1, 1].
Further, in step (3), a threshold segmentation method is used to identify the patient's head region in the planning CT image of the head and neck tumor patient: the closed region with CT values greater than -200 HU is taken as the head region, giving a binary mask image of the head.
Further, in step (4), the region of interest extracted based on the lymph drainage area delineation has a size of 128 × 192 × 96 in the left-right, anterior-posterior and head-foot directions, respectively, in units of pixels.
Further, the deep learning model A used in step (5) is trained and validated with a five-fold cross-validation scheme, specifically comprising the following steps:
Step (a): divide the extracted planning CT images and binary mask images of the lymph drainage areas within the regions of interest into five groups, with the patient as the unit of division;
Step (b): build deep learning model A, take the planning CT images within the regions of interest of four of the five groups divided in step (a) as input and the corresponding binary mask images of the lymph drainage areas as output, and train the model to obtain trained deep learning model A-1;
Step (c): take the planning CT images within the regions of interest of the remaining group as input to deep learning model A-1 to obtain preliminary segmentation results of the lymph drainage areas;
Step (d): rotate the validation group, repeat steps (b) and (c), and train deep learning models A-2, A-3, A-4 and A-5 in turn, obtaining the lymph drainage area delineation results for the corresponding validation data.
Further, the training of the deep learning model in step (6) is carried out in two stages, specifically comprising the following steps:
Step (i): build deep learning model B with 2 input channels, the first being the planning CT image of the region of interest and the second the preliminary lymph drainage area segmentation result, and with 2 output channels, the first being the lymph node segmentation result and the second the lymph drainage area segmentation result;
Step (ii): set the second input channel of model B to all zeros and train model B to extract CT image features and segment the lymph nodes and lymph drainage areas, continuing until the model is stable;
Step (iii): set the second input channel of model B to the preliminary lymph drainage area segmentation result and retrain the stable model obtained in step (ii) until it is stable again.
The deep learning-based method for automatically delineating head and neck lymph nodes and drainage areas provided by the embodiment of the invention has the following advantages:
1. The invention exploits the symmetry of the human body to split the head region into left and right halves for training and prediction, which indirectly increases the amount of training data while reducing the size of the deep learning model.
2. The invention uses a stepwise delineation-and-refinement strategy: a deep learning model first segments the lymph drainage area, which is comparatively easy to segment, and a multi-task deep learning model then refines the drainage area and segments the lymph nodes, making full use of the correlation between the lymph drainage area and the lymph nodes and improving the segmentation accuracy of both.
3. The invention embeds the AI-assisted contour delineation method in the radiotherapy planning workflow, which can effectively improve the work efficiency of medical staff and the consistency of delineation, and improve the accuracy of radiotherapy for head and neck tumors.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a flowchart of a deep learning-based method for automatically delineating a head and neck lymph node and a drainage area according to an embodiment of the present invention;
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Throughout the specification and claims, unless explicitly stated otherwise, the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element or component but not the exclusion of any other element or component.
Fig. 1 is a flowchart of the deep learning-based method for automatically delineating head and neck lymph nodes and drainage areas according to an embodiment of the invention. As shown in Fig. 1, the method comprises the following steps:
Step (1): retrospectively collect pre-radiotherapy planning CT image data of head and neck tumor patients (for example with nasopharyngeal carcinoma, oropharyngeal carcinoma or parotid gland carcinoma), and preprocess it as follows: truncate the image CT values to the range [-150, 250] HU and then normalize them to the range [-1, 1], so that the image data of different patients are more uniform;
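For reference, a minimal sketch of this preprocessing step in Python with NumPy, assuming the planning CT has already been loaded as a 3D array of Hounsfield-unit values; the function name and framework choice are illustrative and not part of the patent:

```python
import numpy as np

def preprocess_ct(ct_volume: np.ndarray) -> np.ndarray:
    """Truncate CT values to [-150, 250] HU and linearly rescale to [-1, 1]."""
    clipped = np.clip(ct_volume.astype(np.float32), -150.0, 250.0)
    # -150 HU maps to -1, 250 HU maps to +1
    return (clipped + 150.0) / 400.0 * 2.0 - 1.0
```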
Step (2): extract the delineated contours of the lymph drainage areas and lymph nodes drawn clinically on the retrospectively collected planning CT images, assign the region inside each contour the value 1 and the region outside the value 0, and obtain a binary mask image of each organ at risk;
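A minimal sketch of this rasterization step follows, assuming the clinical contours have already been converted to per-slice polygons in pixel coordinates (in practice they would typically come from a DICOM RT Structure Set and need a coordinate transformation first, which is omitted here); the function and argument names are illustrative:

```python
import numpy as np
from skimage.draw import polygon

def contours_to_mask(contours_per_slice, volume_shape):
    """Rasterize per-slice delineation contours into a binary mask volume.

    contours_per_slice: dict mapping slice index -> list of N x 2 arrays of
    (row, col) polygon vertices in pixel coordinates (assumed input format).
    volume_shape: (slices, rows, cols) of the planning CT volume.
    """
    mask = np.zeros(volume_shape, dtype=np.uint8)
    for z, polys in contours_per_slice.items():
        for verts in polys:
            rr, cc = polygon(verts[:, 0], verts[:, 1], shape=volume_shape[1:])
            mask[z, rr, cc] = 1  # inside the contour = 1, outside stays 0
    return mask
```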
Step (3): identify the patient's head region in the planning CT image using threshold segmentation, taking the closed region with CT values greater than -200 HU as the head region, to obtain a binary mask image of the head. Because head and neck tumor patients are clinically immobilized with a head and neck mask, the body position is fixed, so the centerline of the head region along the left-right direction can be computed and used as the dividing line between the left and right head regions, and the planning CT image and the binary mask images of the lymph drainage areas and lymph nodes are split into left and right halves;
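The sketch below shows one way this step could be implemented with NumPy and SciPy, under the assumptions that axis 2 of the volume is the left-right direction, that the head is the largest connected component above the threshold, and that the raw (un-normalized) HU volume is used for thresholding; these conventions and the helper name are illustrative, not specified by the patent:

```python
import numpy as np
from scipy import ndimage

def split_left_right(ct_volume_hu, mask_volumes):
    """Threshold the raw CT (in HU) at -200, keep the largest connected
    component as the head region, find its left-right midline, and split the
    CT and all mask volumes into left and right halves."""
    head = ct_volume_hu > -200
    labels, n = ndimage.label(head)
    if n > 1:  # keep only the largest connected component
        sizes = ndimage.sum(head, labels, index=range(1, n + 1))
        head = labels == (int(np.argmax(sizes)) + 1)
    mid = int(round(np.nonzero(head)[2].mean()))  # mean left-right coordinate
    left = (ct_volume_hu[..., :mid], {k: m[..., :mid] for k, m in mask_volumes.items()})
    right = (ct_volume_hu[..., mid:], {k: m[..., mid:] for k, m in mask_volumes.items()})
    return head.astype(np.uint8), left, right
```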
Step (4): crop the split planning CT images and the binary mask images of the lymph drainage areas and lymph nodes in three dimensions. Considering that the resolution of a conventional planning CT image is typically about 1 mm in-plane with a slice thickness of about 3 mm, regions of interest of size 128 × 192 × 96 in the left-right, anterior-posterior and head-foot directions, in units of pixels, are extracted based on the lymph drainage area delineation;
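A minimal sketch of this fixed-size cropping, assuming the region of interest is centred on the lymph drainage area mask and the array axes are ordered (head-foot, anterior-posterior, left-right); the axis order, the centring rule and the boundary handling are assumptions made for illustration:

```python
import numpy as np

ROI_SIZE = (96, 192, 128)  # (head-foot, anterior-posterior, left-right) in pixels

def crop_roi(volume, drainage_mask, roi_size=ROI_SIZE):
    """Crop a fixed-size region of interest centred on the drainage-area mask."""
    coords = np.nonzero(drainage_mask)
    center = [int(round(c.mean())) for c in coords]
    slices = []
    for c, size, dim in zip(center, roi_size, volume.shape):
        start = int(np.clip(c - size // 2, 0, max(dim - size, 0)))
        slices.append(slice(start, start + size))
    return volume[tuple(slices)]
```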
Step (5): perform five-fold cross training and validation of deep learning model A, using the extracted, normalized planning CT images within the regions of interest as input and the binary mask images of the lymph drainage areas as output, to obtain a trained lymph drainage area segmentation model A and validation delineation results of the lymph drainage areas for all training cases;
The five-fold cross training and validation of the deep learning model A used in step (5) specifically comprises the following steps (a minimal code sketch of this procedure follows step (d) below):
Step (a): divide the extracted planning CT images and binary mask images of the lymph drainage areas within the regions of interest into five groups, with the patient as the unit of division;
Step (b): build deep learning model A, take the planning CT images within the regions of interest of four of the five groups divided in step (a) as input and the corresponding binary mask images of the lymph drainage areas as output, and train the model to obtain trained model A-1;
Step (c): take the planning CT images within the regions of interest of the remaining group as input to model A-1 to obtain preliminary segmentation results of the lymph drainage areas;
Step (d): rotate the validation group, repeat steps (b) and (c), and train models A-2, A-3, A-4 and A-5 in turn, obtaining the lymph drainage area delineation results for the corresponding validation data.
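The sketch below illustrates the patient-level five-fold rotation; train_fn and predict_fn are placeholders for the unspecified segmentation network and its training and inference code, and the use of scikit-learn's KFold is an implementation choice, not part of the patent:

```python
import numpy as np
from sklearn.model_selection import KFold

def five_fold_cross_train(patient_ids, train_fn, predict_fn):
    """Train models A-1 ... A-5 and collect out-of-fold drainage-area results."""
    patient_ids = np.array(patient_ids)
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    models, oof_results = [], {}
    for fold, (train_idx, val_idx) in enumerate(kf.split(patient_ids), start=1):
        model = train_fn(patient_ids[train_idx])      # steps (b)/(d): train model A-<fold>
        models.append(model)
        for pid in patient_ids[val_idx]:              # step (c): predict the held-out group
            oof_results[pid] = predict_fn(model, pid)
    return models, oof_results
```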
Step (6): take the lymph drainage area segmentation results from step (5) and the planning CT images within the regions of interest as the input of deep learning model B and the binary mask images of the lymph drainage areas and lymph nodes as its output, and train the multi-task deep learning model B to obtain the trained lymph node and lymph drainage area segmentation optimization model B;
The training of the deep learning model in step (6) is carried out in two stages, specifically comprising the following steps (a training-loop sketch follows step (iii) below):
Step (i): build deep learning model B with 2 input channels, the first being the planning CT image of the region of interest and the second the preliminary lymph drainage area segmentation result, and with 2 output channels, the first being the lymph node segmentation result and the second the lymph drainage area segmentation result;
Step (ii): set the second input channel of model B to all zeros and train model B to extract CT image features and segment the lymph nodes and lymph drainage areas, continuing until the model is stable;
Step (iii): set the second input channel of model B to the preliminary lymph drainage area segmentation result and retrain the stable model obtained in step (ii) until it is stable again.
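A minimal PyTorch-style sketch of the two-stage training loop follows; the network architecture, loss function, optimizer and data loader are not specified in the patent, so model_b, loss_fn and loader are placeholders, and this is only one way the described procedure could be realized:

```python
import torch

def train_model_b_stage(model_b, loader, optimizer, loss_fn, stage):
    """One epoch of stage 1 (zeroed second input channel) or stage 2
    (preliminary drainage-area segmentation on the second input channel)."""
    model_b.train()
    for ct_roi, prelim_seg, node_mask, drainage_mask in loader:
        if stage == 1:
            prelim_seg = torch.zeros_like(prelim_seg)    # step (ii): all-zero channel
        inputs = torch.cat([ct_roi, prelim_seg], dim=1)  # 2 input channels
        node_pred, drainage_pred = model_b(inputs)       # 2 output channels / heads
        loss = loss_fn(node_pred, node_mask) + loss_fn(drainage_pred, drainage_mask)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```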
Step (7): acquire planning CT image data of a new head and neck tumor patient, preprocess it according to steps (1) to (4) to obtain the planning CT images within the left and right regions of interest, and feed them separately into the trained lymph drainage area segmentation model A to obtain preliminary segmentation results of the left and right lymph drainage areas;
Step (8): feed the planning CT images within the left and right regions of interest obtained in step (7), together with the preliminary segmentation results of the left and right lymph drainage areas, into the trained lymph node and lymph drainage area segmentation optimization model B to obtain the segmentation results of the left and right lymph nodes and lymph drainage areas.
In summary, compared with the prior art, the method provided by the invention exploits the symmetry of the human body to split the head region into left and right halves for training and prediction, which indirectly increases the amount of training data while reducing the size of the deep learning model. The invention uses a stepwise delineation-and-refinement strategy: a deep learning model first segments the lymph drainage area, which is comparatively easy to segment, and a multi-task deep learning model then refines the drainage area and segments the lymph nodes, making full use of the correlation between the lymph drainage area and the lymph nodes and improving the segmentation accuracy of both. By embedding the AI-assisted contour delineation method in the radiotherapy planning workflow, the invention can effectively improve the work efficiency of medical staff and the consistency of delineation, and improve the accuracy of radiotherapy for head and neck tumors.
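Putting steps (1) to (8) together, the inference path for a new patient might look like the sketch below, reusing the helper functions sketched above; the predict methods on models A and B are placeholders, and the way the region of interest is located for a new patient (who has no manual drainage-area delineation) is not detailed in the patent, so that step is left abstract here:

```python
def delineate_new_patient(raw_ct_hu, model_a, model_b):
    """Steps (7) and (8): preliminary drainage-area segmentation with model A,
    then joint lymph node / drainage-area segmentation with model B."""
    # Step (3): split the raw HU volume into left and right halves
    _, (left_ct, _), (right_ct, _) = split_left_right(raw_ct_hu, {})
    results = {}
    for side, side_ct in (("left", left_ct), ("right", right_ct)):
        roi = preprocess_ct(side_ct)  # step (1) normalization; ROI cropping (step (4)) left abstract
        prelim_drainage = model_a.predict(roi)                    # step (7)
        nodes, drainage = model_b.predict(roi, prelim_drainage)   # step (8)
        results[side] = {"lymph_nodes": nodes, "drainage_area": drainage}
    return results
```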
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the essence of the above technical solutions, or the part that contributes to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk or an optical disk, and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the various embodiments or in parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (4)

1. A deep learning-based method for automatically delineating head and neck lymph nodes and drainage areas, characterized by comprising the following steps:
Step (1): retrospectively collecting planning CT image data of head and neck tumor patients acquired before radiotherapy, and preprocessing the planning CT image data;
Step (2): extracting the delineated contours of the lymph drainage areas and lymph nodes drawn clinically on the retrospectively collected planning CT images, assigning the region inside each contour the value 1 and the region outside the value 0, and obtaining a binary mask image of each organ at risk;
Step (3): identifying the patient's head region in the planning CT image to obtain a binary mask image of the head; computing the centerline of the head region along the left-right direction as the dividing line between the left and right head regions, and splitting the planning CT image and the binary mask images of the lymph drainage areas and lymph nodes into left and right halves;
Step (4): cropping the split planning CT images and the binary mask images of the lymph drainage areas and lymph nodes in three dimensions, and extracting regions of interest in the left-right, anterior-posterior and head-foot directions based on the lymph drainage area delineation;
Step (5): performing five-fold cross training and validation of deep learning model A using the extracted, normalized planning CT images within the regions of interest and the binary mask images of the lymph drainage areas, to obtain a trained lymph drainage area segmentation model A and validation delineation results of the lymph drainage areas for all training cases; the deep learning model A used in step (5) being trained and validated with a five-fold cross-validation scheme, specifically comprising the following steps:
Step (a): dividing the extracted planning CT images and binary mask images of the lymph drainage areas within the regions of interest into five groups, with the patient as the unit of division;
Step (b): building deep learning model A, taking the planning CT images within the regions of interest of four of the five groups divided in step (a) as input and the corresponding binary mask images of the lymph drainage areas as output, and training the model to obtain trained deep learning model A-1;
Step (c): taking the planning CT images within the regions of interest of the remaining group as input to deep learning model A-1 to obtain preliminary segmentation results of the lymph drainage areas;
Step (d): rotating the validation group, repeating steps (b) and (c), and training deep learning models A-2, A-3, A-4 and A-5 in turn, to obtain the lymph drainage area delineation results for the corresponding validation data;
Step (6): taking the lymph drainage area delineation results from step (5) and the planning CT images within the regions of interest as the input of deep learning model B and the binary mask images of the lymph drainage areas and lymph nodes as its output, and training the multi-task deep learning model B to obtain a trained lymph node and lymph drainage area segmentation optimization model B; the training of the deep learning model in step (6) being carried out in two stages, specifically comprising the following steps:
Step (i): building deep learning model B with 2 input channels, the first input channel being the planning CT image of the region of interest and the second input channel being the preliminary lymph drainage area segmentation result, and with 2 output channels, the first output channel being the lymph node segmentation result and the second output channel being the lymph drainage area segmentation result;
Step (ii): setting the second input channel of model B to all zeros and training model B to extract CT image features and segment the lymph nodes and lymph drainage areas, until the model is stable;
Step (iii): setting the second input channel of model B to the preliminary lymph drainage area segmentation result and retraining the stable model obtained in step (ii) until it is stable again;
Step (7): acquiring planning CT image data of a new head and neck tumor patient, preprocessing it according to steps (1) to (4) to obtain the planning CT images within the left and right regions of interest, and feeding them separately into the trained lymph drainage area segmentation model A to obtain preliminary segmentation results of the left and right lymph drainage areas;
Step (8): feeding the planning CT images within the left and right regions of interest obtained in step (7), together with the preliminary segmentation results of the left and right lymph drainage areas, into the trained lymph node and lymph drainage area segmentation optimization model B to obtain the segmentation results of the left and right lymph nodes and lymph drainage areas.
2. The deep learning-based method for automatically delineating head and neck lymph nodes and drainage areas according to claim 1, characterized in that the pre-radiotherapy planning CT image data of head and neck tumor patients in step (1) is preprocessed as follows: the image CT values are truncated to the range [-150, 250] and normalized to the range [-1, 1].
3. The deep learning-based method for automatically delineating head and neck lymph nodes and drainage areas according to claim 1, characterized in that in step (3) a threshold segmentation method is used to identify the patient's head region in the planning CT image of the head and neck tumor patient, and the closed region with CT values greater than -200 is taken as the head region to obtain a binary mask image of the head.
4. The deep learning-based method for automatically delineating head and neck lymph nodes and drainage areas according to claim 1, characterized in that in step (4) the region of interest extracted based on the lymph drainage area delineation has a size of 128 × 192 × 96 in the left-right, anterior-posterior and head-foot directions, in units of pixels.
CN202010670160.2A 2020-07-13 2020-07-13 Deep learning-based automatic head and neck lymph node and drainage area delineation method Active CN111862021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010670160.2A CN111862021B (en) 2020-07-13 2020-07-13 Deep learning-based automatic head and neck lymph node and drainage area delineation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010670160.2A CN111862021B (en) 2020-07-13 2020-07-13 Deep learning-based automatic head and neck lymph node and drainage area delineation method

Publications (2)

Publication Number Publication Date
CN111862021A CN111862021A (en) 2020-10-30
CN111862021B true CN111862021B (en) 2022-06-24

Family

ID=72982928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010670160.2A Active CN111862021B (en) 2020-07-13 2020-07-13 Deep learning-based automatic head and neck lymph node and drainage area delineation method

Country Status (1)

Country Link
CN (1) CN111862021B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986213B (en) * 2020-08-21 2022-12-23 四川大学华西医院 Processing method, training method and device of slice image and storage medium
CN112790782B (en) * 2021-02-02 2022-06-24 广州柏视医疗科技有限公司 Automatic pelvic tumor CTV (computer-to-volume) delineation system based on deep learning
CN113012144A (en) * 2021-04-08 2021-06-22 湘南学院附属医院 Automatic delineation method and system for lung tumor, computing device and storage medium
CN113488146B (en) * 2021-07-29 2022-04-01 广州柏视医疗科技有限公司 Automatic delineation method for drainage area and metastatic lymph node of head and neck nasopharyngeal carcinoma
CN115409739B (en) * 2022-10-31 2023-01-24 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Method and system for automatically sketching organs at risk

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104524697A (en) * 2014-12-24 2015-04-22 昆明市延安医院 Image-guided nasopharyngeal carcinoma intensity modulated radiation therapy position error method
CN110705565A (en) * 2019-09-09 2020-01-17 西安电子科技大学 Lymph node tumor region identification method and device
CN111105424A (en) * 2019-12-19 2020-05-05 广州柏视医疗科技有限公司 Lymph node automatic delineation method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102048550B (en) * 2009-11-02 2013-07-17 上海交通大学医学院附属仁济医院 Method for automatically generating liver 3D (three-dimensional) image and accurately positioning liver vascular domination region
CN108257134B (en) * 2017-12-21 2022-08-23 深圳大学 Nasopharyngeal carcinoma focus automatic segmentation method and system based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104524697A (en) * 2014-12-24 2015-04-22 昆明市延安医院 Image-guided nasopharyngeal carcinoma intensity modulated radiation therapy position error method
CN110705565A (en) * 2019-09-09 2020-01-17 西安电子科技大学 Lymph node tumor region identification method and device
CN111105424A (en) * 2019-12-19 2020-05-05 广州柏视医疗科技有限公司 Lymph node automatic delineation method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Coarse-to-Fine Stacked Fully Convolutional Nets for Lymph Node Segmentation in Ultrasound Images; Yizhe Zhang et al.; 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); 2017-01-19; pp. 443-448 *
Dosimetric study of organ-at-risk delineation and plan optimization improvement in intensity-modulated radiotherapy for nasopharyngeal carcinoma; Xu Lin et al.; Journal of Sun Yat-sen University (Medical Sciences); 2015-09-30; Vol. 36, No. 5, pp. 745-752 *

Also Published As

Publication number Publication date
CN111862021A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN111862021B (en) Deep learning-based automatic head and neck lymph node and drainage area delineation method
US10769791B2 (en) Systems and methods for cross-modality image segmentation
US11455732B2 (en) Knowledge-based automatic image segmentation
US11100647B2 (en) 3-D convolutional neural networks for organ segmentation in medical images for radiotherapy planning
Egger et al. Pituitary adenoma volumetry with 3D Slicer
CN107077736B (en) System and method for segmenting medical images based on anatomical landmark-based features
WO2023221954A1 (en) Pancreatic tumor image segmentation method and system based on reinforcement learning and attention
Mahdavi et al. Semi-automatic segmentation for prostate interventions
CN111028914B (en) Artificial intelligence guided dose prediction method and system
CN112419338B (en) Head and neck endangered organ segmentation method based on anatomical prior knowledge
CN111862022B (en) Automatic delineation method for organs at risk in whole body multi-part radiotherapy
US9098912B2 (en) Method, system and computer readable medium for automatic segmentation of a medical image
CN109636806B (en) Three-dimensional nuclear magnetic resonance pancreas image segmentation method based on multi-step learning
US9727975B2 (en) Knowledge-based automatic image segmentation
CN111738989A (en) Organ delineation method and device
Lei et al. Male pelvic multi‐organ segmentation on transrectal ultrasound using anchor‐free mask CNN
Garg et al. A survey of prostate segmentation techniques in different imaging modalities
Luximon et al. Machine‐assisted interpolation algorithm for semi‐automated segmentation of highly deformable organs
Ananth et al. Graph Cutting Tumor Images
CN114581474A (en) Automatic clinical target area delineation method based on cervical cancer CT image
Wang et al. Multi-view fusion segmentation for brain glioma on CT images
Zhang et al. Automatic parotid gland segmentation in MVCT using deep convolutional neural networks
CN117244181A (en) Dose analysis method and device based on radiotherapy risk organ outline
CN110120052A (en) A kind of target area image segmenting system and device
CN115187577B (en) Automatic drawing method and system for breast cancer clinical target area based on deep learning

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
Effective date of registration: 2021-11-29
Address after: 510530 Room 306, Phase I Office Building, 12 Yuyan Road, Huangpu District, Guangzhou, Guangdong Province
Applicant after: PERCEPTION VISION MEDICAL TECHNOLOGY Co., Ltd.
Address before: 510275 No. 135 West Xingang Road, Haizhu District, Guangzhou, Guangdong Province
Applicant before: SUN YAT-SEN University
GR01: Patent grant