WO2022061840A1 - Systems and methods for generating a radiation therapy plan - Google Patents
Systems and methods for generating a radiation therapy plan
- Publication number
- WO2022061840A1 (PCT/CN2020/118205)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- boundary
- preliminary
- sample
- objective function
- segmentation model
- Prior art date
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis › G06T7/10—Segmentation; Edge detection › G06T7/12—Edge-based segmentation
- G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/10—Image acquisition modality › G06T2207/10072—Tomographic images
- G06T2207/00 › G06T2207/20—Special algorithmic details › G06T2207/20081—Training; Learning
- G06T2207/00 › G06T2207/20 › G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/00 › G06T2207/20 › G06T2207/20092—Interactive image processing based on input by user › G06T2207/20096—Interactive definition of curve of interest
- G06T2207/00 › G06T2207/30—Subject of image; Context of image processing › G06T2207/30004—Biomedical image processing › G06T2207/30096—Tumor; Lesion
Definitions
- the present disclosure generally relates to systems and methods for radiation therapy, and more particularly, to systems and methods for generating a radiation therapy plan (also referred to as treatment planning systems and treatment planning methods, respectively) .
- Radiation therapy is widely used in cancer treatment and other treatments.
- Generally, a radiation therapy plan (also referred to as a treatment plan) for a cancer patient is generated before treatment starts.
- To generate the plan, an image needs to be obtained in which the tumor and healthy tissue can be distinguished from each other.
- a system for treatment planning may be provided.
- the system may include one or more storage devices and one or more processors configured to communicate with the one or more storage devices.
- the one or more storage devices may store executable instructions.
- When the one or more processors execute the executable instructions, the one or more processors may be directed to cause the system to perform one or more of the following operations.
- the system may obtain a medical image of an object.
- the object may include a region of interest (ROI) to which a radiation therapy treatment is directed.
- the system may also obtain an image segmentation model having been trained based on an objective function.
- the objective function may relate to error edge information of an output of the image segmentation model.
- the system may further generate a segmentation result by executing the image segmentation model based on the medical image.
- the segmentation result may include a boundary of a target region in the medical image corresponding to the ROI of the object.
- the system may further plan the radiation therapy treatment directed to the ROI of the object according to the segmentation result.
- the system may obtain a preliminary image segmentation model and a plurality of training data sets.
- Each of the plurality of training data sets may include a sample medical image and a sample segmentation result of the sample medical image.
- the sample segmentation result may include a sample boundary of a sample target region in the sample medical image.
- the system may further train the preliminary image segmentation model based on the plurality of training data sets to generate the image segmentation model.
- the system may perform one or more of the following operations. For each of the plurality of training data sets, the system may execute the preliminary image segmentation model based on the sample medical image to generate a preliminary segmentation result.
- the preliminary segmentation result may include a preliminary boundary of the sample target region.
- the system may also determine a value of the objective function based on the sample boundary and the preliminary boundary.
- the system may further update the preliminary image segmentation model by minimizing the objective function to generate the image segmentation model.
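- The training update described above can be sketched as follows; this is a minimal illustration assuming a torch-based model and a differentiable implementation (or surrogate) of the boundary objective supplied as `objective_fn` — the names and hyperparameters are illustrative, not the disclosure's reference implementation.

```python
import torch

def train_segmentation_model(preliminary_model, training_sets, objective_fn,
                             epochs=10, lr=1e-4):
    """For each training data set, run the model on the sample medical image
    to obtain a preliminary segmentation result, evaluate the objective
    against the sample boundary, and update the model by minimizing it."""
    optimizer = torch.optim.Adam(preliminary_model.parameters(), lr=lr)
    for _ in range(epochs):
        for sample_image, sample_boundary in training_sets:
            optimizer.zero_grad()
            preliminary_result = preliminary_model(sample_image)
            loss = objective_fn(preliminary_result, sample_boundary)
            loss.backward()   # gradients of the objective w.r.t. model weights
            optimizer.step()  # one minimization step
    return preliminary_model  # now the (trained) image segmentation model
```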
- the system may determine one or more error edges from a plurality of edges of the preliminary boundary. The system may further determine the value of the objective function based on the one or more error edges and the sample boundary.
- the system may determine one or more error points from a plurality of preliminary boundary points on the preliminary boundary.
- the system may determine the one or more error edges from the plurality of edges on the preliminary boundary.
- Each of the one or more error edges may traverse at least one of the one or more error points.
- the system may perform one or more of the following operations. For each of the preliminary boundary points on the preliminary boundary, the system may determine whether the distance from the preliminary boundary point to the sample boundary exceeds a distance threshold. In response to determining that the distance from the preliminary boundary point to the sample boundary exceeds the distance threshold, the system may determine that the preliminary boundary point is an error point.
- the distance threshold may be one pixel.
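- A minimal sketch of the error-point test above, assuming boundaries are represented as arrays of pixel coordinates and approximating the point-to-boundary distance by the distance to the nearest sample boundary point (function and parameter names are illustrative):

```python
import numpy as np

def find_error_points(preliminary_pts, sample_pts, distance_threshold=1.0):
    """Return the preliminary boundary points whose distance to the sample
    boundary exceeds the threshold (one pixel, per the disclosure).

    Both inputs are (N, 2) arrays of pixel coordinates."""
    error_points = []
    for point in preliminary_pts:
        # Distance to the sample boundary, approximated by the nearest point.
        distance = np.min(np.linalg.norm(sample_pts - point, axis=1))
        if distance > distance_threshold:
            error_points.append(point)
    return np.asarray(error_points)
```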
- the system may perform one or more of the following operations. For each of the plurality of edges of the preliminary boundary, the system may determine whether an angle formed by the edge and the sample boundary exceeds an angle threshold. In response to determining that the angle exceeds the angle threshold, the system may determine that the edge is one of the one or more error edges.
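- The angle test might look like the following sketch, which compares an edge's direction with the local direction of the sample boundary; the threshold value and the way the local sample direction is obtained are assumptions for illustration:

```python
import numpy as np

def is_error_edge(edge_start, edge_end, sample_direction, angle_threshold=15.0):
    """Return True if the angle (in degrees) between a preliminary-boundary
    edge and the locally matched sample boundary direction exceeds the
    threshold."""
    edge = np.asarray(edge_end, float) - np.asarray(edge_start, float)
    ref = np.asarray(sample_direction, float)
    cos_angle = abs(edge @ ref) / (np.linalg.norm(edge) * np.linalg.norm(ref))
    angle = np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))
    return angle > angle_threshold
```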
- the value of the objective function may be determined further based on areas of regions bounded by the sample boundary and the preliminary boundary.
- the sample target region may include at least one sub-sample target region.
- the sample boundary may include a sub-sample boundary of each of the at least one sub-sample target region.
- the preliminary boundary may include at least one sub-preliminary boundary.
- the value of the objective function may be determined further based on a difference between a count of the at least one sub-sample boundary and a count of the at least one sub-preliminary boundary.
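- The area-based term and the boundary-count term above could be computed as in the following sketch, which represents each segmentation as a binary mask; treating each connected region as contributing one sub-boundary, and using the symmetric difference of the masks as the disagreement area, are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def auxiliary_objective_terms(sample_mask, preliminary_mask):
    """Compute (1) the area of the regions bounded by one boundary but not
    the other, and (2) the difference between the counts of sub-boundaries
    (one per connected region). Both inputs are binary 2-D masks."""
    area_term = np.logical_xor(sample_mask, preliminary_mask).sum()
    _, n_sample_regions = ndimage.label(sample_mask)
    _, n_prelim_regions = ndimage.label(preliminary_mask)
    count_term = abs(n_sample_regions - n_prelim_regions)
    return area_term, count_term
```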
- the at least one processor may further be directed to cause the system to perform one or more of the following operations.
- the system may obtain a discriminative model, and receive an input from a user to change the boundary of the target region to a target boundary.
- the system may also update the discriminative model based on a difference between the boundary of the target region and the target boundary.
- the system may further update the image segmentation model based on the updated discriminative model.
- the objective function may relate to an amount of modification to modify a boundary of the output of the image segmentation model to a sample boundary.
- a system for treatment planning may be provided.
- the system may include one or more storage devices and one or more processors configured to communicate with the one or more storage devices.
- the one or more storage devices may store executable instructions.
- When the one or more processors execute the executable instructions, the one or more processors may be directed to cause the system to perform one or more of the following operations.
- the system may obtain a medical image of an object.
- the object may include an ROI to which a radiation therapy treatment is directed.
- the system may obtain an image segmentation model having been trained based on an objective function.
- the objective function may relate to boundary information of an output of the image segmentation model.
- the system may also generate a segmentation result by executing the image segmentation model based on the medical image.
- the segmentation result may include a boundary of a target region in the medical image corresponding to the ROI of the object.
- the system may further plan the radiation therapy treatment directed to the ROI of the object according to the segmentation result.
- the system may perform one or more of the following operations. For each of the plurality of training data sets, the system may execute the preliminary image segmentation model based on the sample medical image to generate a preliminary segmentation result.
- the preliminary segmentation result may include a preliminary boundary of the sample target region.
- the system may also determine the objective function based on the sample boundary and the preliminary boundary.
- the system may further update the preliminary image segmentation model by minimizing the objective function to generate the image segmentation model.
- the objective function may relate to distances from preliminary boundary points on the preliminary boundary to the sample boundary.
- the objective function may relate to an average value, a minimum value, or a maximum value of the distances from the preliminary boundary points on the preliminary boundary to the sample boundary.
- the objective function may relate to one or more error points in the preliminary boundary points.
- the system may perform one or more of the following operations. For each of the preliminary boundary points on the preliminary boundary, the system may determine whether the distance from the preliminary boundary point to the sample boundary exceeds a distance threshold. In response to determining that the distance from the preliminary boundary point to the sample boundary exceeds the distance threshold, the system may determine that the preliminary boundary point is an error point.
- the system may determine one or more error edges from a plurality of edges on the preliminary boundary. Each of the one or more error edges may traverse at least one of the one or more error points. The system may also determine a value of the objective function based on lengths of the one or more error edges and a length of the sample boundary.
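- One plausible form of this value, sketched below, is the total length of the error edges normalized by the length of the sample boundary; the ratio form is an assumption, since the disclosure only states that the value is based on these lengths:

```python
import numpy as np

def error_edge_objective(error_edges, sample_pts):
    """`error_edges` is a list of ((x1, y1), (x2, y2)) segments; `sample_pts`
    is an ordered (N, 2) array of points tracing the closed sample boundary."""
    error_length = sum(np.linalg.norm(np.subtract(end, start))
                       for start, end in error_edges)
    closed = np.vstack([sample_pts, sample_pts[:1]])  # close the loop
    sample_length = np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
    return error_length / sample_length
```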
- a method for treatment planning may be provided.
- the method may include obtaining a medical image of an object.
- the object may include an ROI to which a radiation therapy treatment is directed.
- the method may include obtaining an image segmentation model having been trained based on an objective function.
- the objective function may relate to error edge information of an output of the image segmentation model.
- the method may also include generating a segmentation result by executing the image segmentation model based on the medical image.
- the segmentation result may include a boundary of a target region in the medical image corresponding to the ROI of the object.
- the method may further include planning the radiation therapy treatment directed to the ROI of the object according to the segmentation result.
- a system for treatment planning may be provided.
- the system may include an obtaining module configured to obtain a medical image of an object and an image segmentation model having been trained based on an objective function.
- the object may include an ROI to which a radiation therapy treatment is directed.
- the objective function may relate to error edge information of an output of the image segmentation model.
- the system may include a processing module configured to generate a segmentation result by executing the image segmentation model based on the medical image.
- the segmentation result may include a boundary of a target region in the medical image corresponding to the ROI of the object.
- the system may also include a plan determination module configured to plan the radiation therapy treatment directed to the ROI of the object according to the segmentation result.
- a non-transitory computer readable medium may be provided.
- the non-transitory computer-readable medium may include at least one set of instructions for treatment planning. When executed by one or more processors of a computing device, the at least one set of instructions may cause the computing device to perform a method.
- the method may include obtaining a medical image of an object, the object including an ROI to which a radiation therapy treatment is directed.
- the method may include obtaining an image segmentation model having been trained based on an objective function, the objective function relating to error edge information of an output of the image segmentation model.
- the method may also include generating a segmentation result by executing the image segmentation model based on the medical image.
- the segmentation result may include a boundary of a target region in the medical image corresponding to the ROI of the object.
- the method may further include planning the radiation therapy treatment directed to the ROI of the object according to the segmentation result.
- a method for treatment planning may be provided.
- the method may include obtaining a medical image of an object, the object including an ROI to which a radiation therapy treatment is directed.
- the method may include obtaining an image segmentation model having been trained based on an objective function.
- the objective function may relate to boundary information of an output of the image segmentation model.
- the method may also include generating a segmentation result by executing the image segmentation model based on the medical image.
- the segmentation result may include a boundary of a target region in the medical image corresponding to the ROI of the object.
- the method may further include planning the radiation therapy treatment directed to the ROI of the object according to the segmentation result.
- a system for treatment planning may be provided.
- the system may include an obtaining module configured to obtain a medical image of an object and an image segmentation model having been trained based on an objective function.
- the object may include an ROI to which a radiation therapy treatment is directed.
- the objective function may relate to boundary information of an output of the image segmentation model.
- the system may include a processing module configured to generate a segmentation result by executing the image segmentation model based on the medical image.
- the segmentation result may include a boundary of a target region in the medical image corresponding to the ROI of the object.
- the system may also include a plan determination module configured to plan the radiation therapy treatment directed to the ROI of the object according to the segmentation result.
- a non-transitory computer readable medium may be provided.
- the non-transitory computer-readable medium may include at least one set of instructions for treatment planning. When executed by one or more processors of a computing device, the at least one set of instructions may cause the computing device to perform a method.
- the method may include obtaining a medical image of an object, the object including an ROI to which a radiation therapy treatment is directed.
- the method may include obtaining an image segmentation model having been trained based on an objective function, the objective function relating to boundary information of an output of the image segmentation model.
- the method may also include generating a segmentation result by executing the image segmentation model based on the medical image.
- the segmentation result may include a boundary of a target region in the medical image corresponding to the ROI of the object.
- the method may further include planning the radiation therapy treatment directed to the ROI of the object according to the segmentation result.
- FIG. 1 is a schematic diagram illustrating an exemplary radiation therapy system according to some embodiments of the present disclosure
- FIG. 2 is a schematic diagram illustrating hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure
- FIG. 3 is a schematic diagram illustrating hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure
- FIG. 4 is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure.
- FIG. 5 is a flowchart illustrating an exemplary process for determining a target radiation therapy plan according to some embodiments of the present disclosure
- FIG. 6 is a flowchart illustrating an exemplary process for training an image segmentation model according to some embodiments of the present disclosure
- FIG. 7 is a flowchart illustrating an exemplary process for determining an objective function according to some embodiments of the present disclosure
- FIG. 8A and FIG. 8B are schematic diagrams illustrating exemplary sample segmentation results and preliminary segmentation results
- FIG. 9 is a schematic diagram illustrating exemplary segmentation results
- FIG. 10A and FIG. 10B are schematic diagrams illustrating exemplary segmentation results.
- FIG. 11 is a schematic diagram illustrating an exemplary process of training an image segmentation model.
- The terms “system,” “module,” and/or “block” used herein are one way to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be replaced by other expressions that achieve the same purpose.
- The term “module” or “block,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions.
- a module, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or another storage device.
- a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts.
- Software modules/units/blocks configured for execution on computing devices (e.g., the processor 210 as illustrated in FIG. 2) may be provided on a computer-readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution).
- Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device.
- Software instructions may be embedded in firmware, such as an Electrically Programmable Read-Only-Memory (EPROM) .
- Modules/units/blocks may be included in connected logic components, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
- the modules/units/blocks or computing device functionality described herein may be implemented as software modules/units/blocks, but may be represented in hardware or firmware.
- the modules/units/blocks described herein refer to logical modules/units/blocks that may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks despite their physical organization or storage. The description may be applicable to a system, an engine, or a portion thereof.
- When a module or block is referred to as being “connected to” or “coupled to” another module or block, it may be directly connected or coupled to, or communicate with, the other module or block, or an intervening unit, engine, module, or block may be present, unless the context clearly indicates otherwise.
- the term “and/or” includes any and all combinations of one or more of the associated listed items.
- the medical system may include an imaging system.
- the imaging system may include a single modality imaging system and/or a multi-modality imaging system.
- the single modality imaging system may include, for example, a magnetic resonance imaging (MRI) system, an X-ray system, a computed tomography (CT) system, a positron emission computed tomography (PET) system, an ultrasonic system, or the like, or any combination thereof.
- the multi-modality imaging system may include, for example, a computed tomography-magnetic resonance imaging (CT-MRI) system, a positron emission tomography-magnetic resonance imaging (PET-MRI) system, a single photon emission computed tomography-magnetic resonance imaging (SPECT-MRI) system, a digital subtraction angiography-magnetic resonance imaging (DSA-MRI) system, a positron emission tomography-computed tomography (PET-CT) etc.
- the medical system may include a treatment system.
- the treatment system may include a treatment plan system (TPS), an image-guided radiotherapy (IGRT) system, etc.
- the image-guided radiotherapy system may include a treatment device and an imaging device.
- the treatment device may include a linear accelerator, a cyclotron, a synchrotron, etc., configured to perform radiotherapy on a subject.
- the treatment device may include an accelerator of species of particles including, for example, photons, electrons, protons, or heavy ions.
- the imaging device may include an MRI scanner, a CT scanner (e.g., cone beam computed tomography (CBCT) scanner) , a digital radiology (DR) scanner, an electronic portal imaging device (EPID) , etc.
- In the present disclosure, an image, or a portion thereof, corresponding to an object (e.g., a tissue, an organ, or a tumor of a subject (e.g., a patient)) may be referred to as an image, or a portion thereof (e.g., an ROI), including the object, or simply as the object itself.
- For instance, an ROI corresponding to the image of a tumor may be described as the ROI including a tumor.
- Similarly, an image of or including a liver may be referred to as a liver image, or simply a liver.
- an image segmentation model described in the present disclosure may refer to a trained segmentation model unless the context clearly states that the image segmentation model is a preliminary segmentation model, an updated segmentation model, or a model during a training process.
- equations and the variables in the equations (e.g., distances, areas, lengths, angles) in the present disclosure may be described in a two-dimensional (2-D) form for brevity.
- such equations or variables can also be converted into forms that comply with a three-dimensional (3-D) space.
- for example, a 2-D area bounded by two curves may be converted to a 3-D volume bounded by two curved surfaces.
- In developing and/or executing a radiation therapy plan, an image is needed in which the lesion and healthy tissue are distinguished from each other.
- a doctor may manually segment a pre-scanned medical image of the cancer patient into several regions by, for example, labelling the locations and/or boundaries of the tumors, locations or boundaries of healthy tissue, etc. on the medical image.
- a radiation therapy plan may then be determined according to the segmentation result.
- Such a manual segmentation process is both inconvenient and time-consuming, and it depends highly on the skills and experience of the doctor.
- Automated segmentation techniques have recently been developed. Such techniques include generating a rough segmentation result.
- the rough segmentation result can be modified by a user, e.g., a doctor, to provide a desired segmentation result according to the user’s experience.
- these automated segmentation techniques usually ignore the boundaries between a lesion and healthy tissue in the segmentation result, leading to unsatisfactory efficacy in radiation therapy.
- Although the location of the lesion in the rough segmentation result may be identified, the boundary of the lesion in the segmentation result may be erroneous or inaccurate.
- the user may have to spend a long time modifying the segmentation result in order for the segmentation result to be used in a radiation therapy plan.
- an image segmentation model is used to segment the image.
- the image segmentation model is trained by a plurality of training data sets each of which includes a sample medical image of a sample object and a sample segmentation result of the sample medical image.
- the value of an objective function is determined and used to evaluate the progress of the training.
- the objective function is configured to reflect an amount of subsequent modification needed to be performed by a user on the output of the image segmentation model.
- the objective function relates to boundary information (e.g., error edge information) of the segmentation result.
- FIG. 1 is a schematic diagram illustrating an exemplary radiation therapy system 100 according to some embodiments of the present disclosure.
- the radiation therapy system 100 may include a radiation device 110, a network 120, one or more terminals 130, a processing device 140, and a storage device 150.
- Radiation therapy is therapy using ionizing radiation, generally as part of cancer treatment to control or kill malignant cells (or tumors) .
- Radiation therapy may be delivered by a linear accelerator (e.g., the radiation device 110 in the radiation therapy system 100 in FIG. 1) .
- the radiation therapy may include an external radiation therapy, a Brachytherapy, an intraoperative radiotherapy, a radioisotope therapy, a deep inspiration breath-hold (DIBH) , etc.
- the external beam radiation therapy may include a conventional external beam radiation therapy (2DXRT), a stereotactic radiation therapy (e.g., a stereotactic radiosurgery, a stereotactic body radiation therapy, etc.), an intensity-modulated radiation therapy (IMRT), a volumetric modulated arc therapy (VMAT), an Auger therapy (AT), etc.
- the radiation device 110 may emit radioactive rays to a subject (e.g., a patient) to perform a treatment to control or kill malignant cells.
- the radioactive rays may include α rays, β rays, γ rays, X rays, neutrons, etc.
- the radiation device 110 may include a medical linear accelerator, a Cobalt-60 device, a Gamma Knife, an X knife, a proton accelerator, a brachytherapy device, or the like, or any combination thereof.
- an imaging scan may be performed to identify the lesion (e.g., a tumor) and the surrounding normal anatomy of the subject.
- the radiation therapy system 100 may further include an imaging device (not shown in FIG. 1) such as an X-ray device, a computed tomography (CT) device, a positron emission computed tomography (PET) device, a magnetic resonance imaging (MRI) device, or the like, or any combination thereof, to perform such imaging scan.
- the imaging scan may be a two-dimensional (2-D) X-ray scan.
- the imaging scan may be a three-dimensional (3-D) CT scan.
- the image generated by the imaging scan may be a 2-D image or a 3-D image.
- the 3-D image may refer to a single image in a 3-D space or multiple layers of 2-D images stacked along a dimension perpendicular to the plane of the 2-D images.
- calculations in the present disclosure may refer to 2-D calculations and/or 3-D calculations, or a series of calculations on 2-D images in a 2-D space that may be used collectively to determine variables or parameters in a 3-D space.
- the image generated by the imaging scan may be segmented and the segmentation result may be used to generate a radiation therapy plan.
- the radiation device 110 may deliver the radiation therapy based on the radiation therapy plan. Descriptions regarding the generation of the radiation therapy plan may be found elsewhere in the present disclosure (see, e.g., FIG. 5 and the descriptions thereof).
- a trained image segmentation model may be used to segment the image generated by the imaging scan. Descriptions regarding the training of the image segmentation model may be found elsewhere in the present disclosure (see, e.g., FIG. 6 and the descriptions thereof).
- an objective function may be used during the training of the image segmentation model as an evaluation criterion of how well the image segmentation model is trained (or the progress of the training). The objective function may be configured to reflect an amount of subsequent modification by the doctor on the output of the image segmentation model. Descriptions regarding the determination of the objective function may be found elsewhere in the present disclosure (see, e.g., FIG. 7 and the descriptions thereof).
- the object may include a patient, a man-made object, etc.
- the object may include a specific portion, organ, and/or tissue of a patient.
- the object may include head, brain, neck, body, shoulder, arm, thorax, cardiac, stomach, blood vessel, soft tissue, knee, feet, or the like, or any combination thereof.
- the network 120 may include any suitable network that can facilitate exchange of information and/or data for the radiation therapy system 100.
- one or more components of the radiation therapy system 100 e.g., the radiation device 110, the terminal 130, the processing device 140, the storage device 150, etc.
- the processing device 140 may obtain information related to a radiation therapy plan or information related to images from the radiation device 110 via the network 120.
- the processing device 140 may obtain user instructions from the terminal 130 via the network 120.
- the processing device 140 may obtain a trained image segmentation model from an external storage device (e.g., a cloud-based server) or the storage device 150 via the network 120.
- the processing device 140 may obtain a preliminary (untrained) image segmentation model from the external storage device or the storage device 150 via the network 120.
- the processing device 140 may train the preliminary image segmentation model and transmit the trained image segmentation model to the storage device 150 or the radiation device 110 via the network 120.
- the network 120 may be and/or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN), etc.), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network, etc.), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network ("VPN"), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof.
- the network 120 may include a cable network, a wireline network, a fiber-optic network, a telecommunications network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public telephone switched network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, or the like, or any combination thereof.
- the network 120 may include one or more network access points.
- the network 120 may include wired and/or wireless network access points such as base stations and/or internet exchange points through which one or more components of the radiation therapy system 100 may be connected to the network 120 to exchange data and/or information.
- the terminal (s) 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, or the like, or any combination thereof.
- the mobile device 131 may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof.
- the smart home device may include a smart lighting device, a control device of an intelligent electrical apparatus, a smart monitoring device, a smart television, a smart video camera, an interphone, or the like, or any combination thereof.
- the wearable device may include a bracelet, a footgear, eyeglasses, a helmet, a watch, clothing, a backpack, a smart accessory, or the like, or any combination thereof.
- the mobile device may include a mobile phone, a personal digital assistant (PDA) , a gaming device, a navigation device, a point of sale (POS) device, a laptop, a tablet computer, a desktop, or the like, or any combination thereof.
- the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof.
- the virtual reality device and/or the augmented reality device may include a Google Glass™, an Oculus Rift™, a Hololens™, a Gear VR™, etc.
- the terminal (s) 130 may be part of the processing device 140.
- the processing device 140 may process data and/or information obtained from the radiation device 110, the terminal 130, and/or the storage device 150. For example, the processing device 140 may generate and/or update a radiation therapy plan. As another example, the processing device 140 may train a preliminary image segmentation model. In some embodiments, the processing device 140 may be a single server or a server group. The server group may be centralized or distributed. In some embodiments, the processing device 140 may be local or remote. For example, the processing device 140 may access information and/or data stored in the radiation device 110, the terminal 130, and/or the storage device 150 via the network 120. As another example, the processing device 140 may be directly connected to the radiation device 110, the terminal 130 and/or the storage device 150.
- the processing device 140 may be implemented on a cloud platform.
- the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
- the processing device 140 may be implemented by a computing device 200 having one or more components as illustrated in FIG. 2.
- the storage device 150 may store data, instructions, and/or any other information. In some embodiments, the storage device 150 may store data obtained from the radiation device 110, the terminal 130 and/or the processing device 140. In some embodiments, the storage device 150 may store data and/or instructions that the processing device 140 may execute or use to perform exemplary methods described in the present disclosure. In some embodiments, the storage device 150 may store a preliminary image segmentation model and/or a trained image segmentation model. In some embodiments, the storage device 150 may store a radiation therapy plan. In some embodiments, the storage device 150 may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
- Exemplary mass storage devices may include a magnetic disk, an optical disk, a solid-state drive, etc.
- Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
- Exemplary volatile read-and-write memory may include a random access memory (RAM) .
- Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc.
- Exemplary ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
- the storage device 150 may be implemented on a cloud platform.
- the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
- the storage device 150 may be connected to the network 120 to communicate with one or more other components in the radiation therapy system 100 (e.g., the processing device 140, the terminal 130, etc. ) .
- One or more components in the radiation therapy system 100 may access the data or instructions stored in the storage device 150 via the network 120.
- the storage device 150 may be directly connected to or communicate with one or more other components in the radiation therapy system 100 (e.g., the processing device 140, the terminal 130, etc. ) .
- the storage device 150 may be part of the processing device 140.
- the processing device 140 may be connected to or communicate with the radiation device 110 via the network 120, or at the backend of the processing device 140.
- FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device 200 on which the processing device 140 may be implemented according to some embodiments of the present disclosure.
- the computing device 200 may include a processor 210, a storage device 220, an input/output (I/O) 230, and a communication port 240.
- the processor 210 may execute computer instructions (e.g., program code) and perform functions of the processing device 140 in accordance with techniques described herein.
- the computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions described herein.
- the processor 210 may process image data or training data sets obtained from the radiation device 110, the terminal 130, the storage device 150, and/or any other component of the radiation therapy system 100.
- the processor 210 may segment the image (s) to generate a segmentation result.
- the processor 210 may train an image segmentation model based on the training data sets.
- the processor 210 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof.
- For example, if a step X and a step Y are described as being performed by the computing device 200, step X and step Y may also be performed by two or more different processors jointly or separately in the computing device 200 (e.g., a first processor executes step X and a second processor executes step Y, or the first and second processors jointly execute steps X and Y).
- the storage 220 may store data/information obtained from the radiation device 110, the terminal 130, the storage device 150, and/or any other component of the radiation therapy system 100.
- the storage 220 may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
- the mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc.
- the removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
- the volatile read-and-write memory may include a random access memory (RAM) .
- the RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc.
- the ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
- the storage 220 may store one or more programs and/or instructions to perform exemplary methods described in the present disclosure.
- the storage device 220 may store a program for the processing device 140 for training an image segmentation model and/or generating a radiation therapy plan.
- the I/O 230 may input and/or output signals, data, information, etc. In some embodiments, the I/O 230 may enable a user interaction with the processing device 140. In some embodiments, the I/O 230 may include an input device and an output device. Examples of the input device may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof. Examples of the output device may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof.
- Examples of the display device may include a liquid crystal display (LCD) , a light-emitting diode (LED) -based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT) , a touch screen, or the like, or a combination thereof.
- the communication port 240 may be connected to a network (e.g., the network 120) to facilitate data communications.
- the communication port 240 may establish connections between the processing device 140 and the radiation device 110, the terminal 130, and/or the storage device 150.
- the connection may be a wired connection, a wireless connection, any other communication connection that can enable data transmission and/or reception, and/or any combination of these connections.
- the wired connection may include, for example, an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof.
- the wireless connection may include, for example, a Bluetooth™ link, a Wi-Fi™ link, a WiMax™ link, a WLAN link, a ZigBee link, a mobile network link (e.g., 3G, 4G, 5G, etc.), or the like, or a combination thereof.
- the communication port 240 may be and/or include a standardized communication port, such as RS232, RS485, etc.
- the communication port 240 may be a specially designed communication port.
- the communication port 240 may be designed in accordance with digital imaging and communications in medicine (DICOM) protocol.
- FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device 300 on which the terminal 130 may be implemented according to some embodiments of the present disclosure.
- the mobile device 300 may include a communication platform 310, a display 320, a graphics processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O 350, a memory 360, and a storage 390.
- any other suitable component including but not limited to a system bus or a controller (not shown) , may also be included in the mobile device 300.
- In some embodiments, a mobile operating system 370 (e.g., iOS™, Android™, Windows Phone™, etc.) and one or more applications 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the CPU 340.
- the applications 380 may include a browser or any other suitable mobile apps for receiving and rendering information relating to image processing or other information from the processing device 140. User interactions with the information stream may be achieved via the I/O 350 and provided to the processing device 140 and/or other components of the radiation therapy system 100 via the network 120.
- computer hardware platforms may be used as the hardware platform (s) for one or more of the elements described herein.
- a computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device.
- a computer may also act as a server if appropriately programmed.
- FIG. 4 is a schematic block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure.
- the processing device 140 may include an obtaining module 410, a model training module 420, a processing module 430, a determination module 440, and a plan determination module 450.
- the obtaining module 410 may be configured to obtain data or information from other modules or units inside or outside the processing device 140.
- the obtaining module 410 may obtain a medical image of an object from an imaging device in the radiation device 110 or an external imaging device.
- the obtaining module 410 may obtain an image segmentation model from a storage device (e.g., the storage device 150, an external storage device) .
- the image segmentation model may be a trained image segmentation model or an untrained (or preliminary) image segmentation model.
- the obtaining module 410 may further obtain a plurality of training data sets and the processing module 430 and/or the model training module 420 may train the image segmentation model based on the training data sets.
- the model training module 420 may be configured to train a preliminary model to generate a trained model. For example, the model training module 420 may execute a preliminary image segmentation model based on a sample medical image to generate a preliminary segmentation result. The model training module 420 may determine a value of an objective function based on the preliminary segmentation result and the sample segmentation result. The model training module 420 may iteratively update or train the preliminary image segmentation model based on the preliminary segmentation result, the sample segmentation result, and/or the value of the objective function until a termination condition is met. When the termination condition is met, the training of the preliminary image segmentation model may terminate and a trained image segmentation model may be generated.
- the termination condition may include the value of the objective function being less than a threshold, the difference between values of the objective function in two successive iterations being less than a threshold, all the training data sets being traversed, the number of iterations reaching a threshold, etc.
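- A sketch of such a termination check is shown below; the threshold values are placeholders, and the condition that all training data sets have been traversed is assumed to be tracked by the caller:

```python
def should_terminate(objective_history, iteration, objective_tol=1e-3,
                     delta_tol=1e-5, max_iterations=10_000):
    """Return True when any listed termination condition is met: objective
    below a threshold, change between two successive iterations below a
    threshold, or the iteration count reaching a threshold."""
    if objective_history and objective_history[-1] < objective_tol:
        return True
    if (len(objective_history) >= 2
            and abs(objective_history[-1] - objective_history[-2]) < delta_tol):
        return True
    return iteration >= max_iterations
```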
- the processing module 430 may be configured to process data and/or information in the radiation therapy system 100. For example, the processing module 430 may execute an image segmentation model based on the medical image to generate a segmentation result. As another example, the processing module 430 may determine distances from preliminary boundary points on the preliminary boundary to a sample boundary. As a further example, the processing module 430 may determine a value of an objective function based on a total length of error edges and the length of the sample boundary. As yet another example, the processing module 430 may determine one or more error edges from a plurality of edges of the preliminary boundary, and determine the value of the objective function based on the error edge (s) .
- the determination module 440 may be configured to make determinations. For example, the determination module 440 may determine whether a termination condition is satisfied during the training of an image segmentation model. Specifically, the determination module 440 may compare a value of an objective function with a threshold. As another example, the determination module 440 may determine error points in a preliminary segmentation result. Specifically, the determination module 440 may determine whether a distance from a preliminary boundary point to a sample boundary exceeds a distance threshold. In response to determining that the distance from the preliminary boundary point to the sample boundary exceeds the distance threshold, the determination module 440 may determine that the preliminary boundary point is an error point; otherwise, the determination module 440 may determine that the preliminary boundary point is not an error point.
- the plan determination module 450 may be configured to plan a radiation therapy treatment according to a segmentation result. Specifically, the plan determination module 450 may generate a radiation therapy plan based on the segmentation result and the radiation device 110 may deliver a radiation therapy treatment according to the radiation therapy plan.
- the radiation therapy plan may include the same or different protocols, doses, radiation durations, etc., for different regions in the segmentation result.
- the modules in the processing device 140 may be connected to or communicate with each other via a wired connection or a wireless connection.
- the wired connection may include a metal cable, an optical cable, a hybrid cable, or the like, or any combination thereof.
- the wireless connection may include a Local Area Network (LAN) , a Wide Area Network (WAN) , a Bluetooth, a ZigBee, a Near Field Communication (NFC) , or the like, or any combination thereof.
- Two or more of the modules may be combined as a single module, and any one of the modules may be divided into two or more units.
- the obtaining module 410 may be divided into two units. One of the two units may be configured to obtain the medical image and the trained image segmentation model, and the other one of the two units may be configured to obtain the preliminary image segmentation model and the training data sets.
- the processing device 140 may further include a storage module (not shown in FIG. 4) .
- the storage module may be configured to store data generated during any process performed by any component of the processing device 140.
- each of the components of the processing device 140 may include a storage device. Additionally, or alternatively, the components of the processing device 140 may share a common storage device.
- the model training module 420 may be a module outside the processing device 140.
- the model training module 420 may train the image segmentation model outside the processing device 140 and transmit the trained image segmentation model to the processing device 140 via a network (e.g., the network 120) .
- FIG. 5 is a flowchart illustrating an exemplary process for determining a target radiation therapy plan according to some embodiments of the present disclosure.
- the process 500 may be performed by the processing device 140 (implemented in, for example, the computing device 200 shown in FIG. 2) .
- the process 500 may be stored in a storage device (e.g., the storage device 150 and/or the storage 220) in the form of instructions (e.g., an application), and invoked and/or executed by the processing device 140 (e.g., the processor 210 illustrated in FIG. 2 and/or one or more modules in the processing device 140 illustrated in FIG. 4).
- the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 500 are illustrated in FIG. 5 and described below is not intended to be limiting.
- the processing device 140 may obtain a medical image of an object.
- the medical image of the object may be obtained from an imaging device in the radiation device 110 or an external imaging device.
- the medical image may include an X-ray image, an MRI image, a PET image, an ultrasonic image, a CT image, etc.
- the medical image may be a 2-D image or a 3-D image.
- the medical image may be an image that includes information about the internal structure of the object.
- the medical image may include multiple regions including different tissues and/or organs.
- the multiple regions may include at least one target region corresponding to a region of interest (ROI) (e.g., a tumor) of the object and at least one region corresponding to healthy tissues of the object.
- the multiple regions may also include at least one fragile region corresponding to tissues that may be easily damaged by radiation.
- the processing device 140 may obtain an image segmentation model.
- the image segmentation model may be a trained image segmentation model.
- the image segmentation model may be trained based on a plurality of training data sets and/or an objective function.
- the plurality of training data sets may each include a sample image and a sample segmentation result of the sample image.
- For each sample image, the corresponding sample segmentation result may be referred to as a “gold standard” (i.e., a desirable or acceptable (or referred to as correct) segmentation result of the sample image).
- the sample segmentation result may be generated by experienced doctors or experts.
- a radiation therapy treatment planned using the sample segmentation result may have led to a good result (e.g., show efficacy in tumor treatment with little or no harm to healthy tissue) .
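- As a concrete (and purely illustrative) representation, each training data set could be stored as a pair of a sample image and its gold-standard boundary; the field names below are assumptions:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TrainingDataSet:
    """One training data set: a sample medical image together with its
    gold-standard segmentation result."""
    sample_medical_image: np.ndarray  # e.g., a 2-D CT slice
    sample_boundary: np.ndarray       # (N, 2) points tracing the sample boundary
```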
- the objective function may be used to evaluate how well the image segmentation model is trained (or the training progress of the image segmentation model) .
- the objective function may be configured to reflect an amount of modification by a user on the output of the image segmentation model.
- the objective function may relate to boundary information of the output of the image segmentation model and the boundary information of the sample segmentation result. More descriptions regarding the determination of the objective function may be found elsewhere in the present disclosure, e.g., FIG. 7 and the descriptions thereof.
- the boundary information may include error edge information of the output of the image segmentation model.
- one or more error edges may be determined based on the output of the image segmentation model, and information relating to the error edge(s) may be referred to as the error edge information. More descriptions regarding the determination of the error edge(s) may be found elsewhere in the present disclosure, e.g., operation 750 and the descriptions thereof.
- the image segmentation model may be any type of machine learning model; the examples provided herein are not intended to be limiting.
- the image segmentation model may include an artificial neural network (ANN) , a random forest model, a support vector machine, a decision tree, a convolutional neural network (CNN) , a Recurrent Neural Network (RNN) , a deep learning model, a Bayesian network, a K-nearest neighbor (KNN) model, a generative adversarial network (GAN) model, etc.
- the processing device 140 may execute the image segmentation model based on the medical image to generate a segmentation result.
- the medical image of the object may be inputted into the image segmentation model.
- the image segmentation model may generate a segmentation result that includes labels of locations and/or boundaries of regions on the inputted medical image as an output.
- in some embodiments, only target regions (e.g., tumors) may be labelled in the segmentation result.
- in some embodiments, both target regions and regions corresponding to healthy tissues may be labelled in the segmentation result.
- in some embodiments, target regions, regions corresponding to healthy tissues, and regions corresponding to fragile tissues (e.g., tissues that may be easily damaged by radiations) may all be labelled in the segmentation result.
- the processing device 140 may plan a radiation therapy treatment according to the segmentation result.
- the processing device 140 may generate a radiation therapy plan based on the segmentation result and the radiation device 110 may deliver a radiation therapy treatment according to the radiation therapy plan.
- the radiation therapy plan may include the same or different protocols, the same or different doses, the same or different radiation durations, etc., for different regions in the segmentation result.
- for example, a high dose and/or a long radiation duration may be planned for a target region, while a low dose and/or a short radiation duration may be planned for a fragile or healthy region (e.g., an organ at risk (OAR)).
- the image segmentation model may be a generative model of a generative adversarial network (GAN) model.
- the GAN model may also include a discriminative model configured to evaluate the generative model.
- the discriminative model may receive an input from a user to change the boundary of the target region of the segmentation result to a target boundary. The user’s input may then be treated as the gold standard, and the discriminative model may be updated based on the boundary of the target region and the target boundary. The image segmentation model may then be updated based on the updated discriminative model.
- the discriminative model may be updated based on a treatment result that uses the radiation therapy plan associated with the segmentation result.
- a user or a machine may rate a quality of the treatment result of the radiation therapy.
- the rating of the treatment result (together with the segmentation result) may be used to train the discriminative model.
- the discriminative model may be locally updated by the model training module 420.
- the updated discriminative model may be transmitted to a storage device (e.g., the storage device 150) via a network (e.g., the network 120).
- the generative model and the discriminative model may be updated and/or stored locally or remotely; the foregoing examples are not intended to be limiting.
- the generative model may be stored in a remote server and the discriminative model may be stored in a local storage device.
- the generative model may be a universal model that is used for a plurality of users while the discriminative model may be a specialized model that is used for a certain group of users, or vice versa.
- both the generative model and the discriminative model may be stored together in a remote server or the local storage device.
- FIG. 6 is a flowchart illustrating an exemplary process for training an image segmentation model according to some embodiments of the present disclosure.
- the processing device 140 may be implemented in, for example, the computing device 200 shown in FIG. 2.
- the process 600 may be stored in a storage device (e.g., the storage device 150 and/or the storage 220) in the form of instructions (e.g., an application), and invoked and/or executed by the processing device 140 (e.g., the processor 210 illustrated in FIG. 2 and/or one or more modules in the processing device 140 illustrated in FIG. 4).
- the operations of the illustrated process presented below are intended to be illustrative.
- the process 600 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 600 as illustrated in FIG. 6 and described below is not intended to be limiting. In some embodiments, the image segmentation model trained by the process 600 may be obtained and used in operations 520 and 530.
- the processing device 140 may obtain a preliminary image segmentation model.
- the preliminary image segmentation model may be obtained from the storage device 150 or an external device via the network 120.
- the preliminary model may include a plurality of classifiers and/or neurons, each of which has one or more preliminary parameters or weights.
- the preliminary parameters may be default values such as zeros, ones, or other numerals.
- a user or the processing device 140 may set at least some of the preliminary parameters to increase the convergence speed (e.g., the speed at which the model becomes fully trained).
- the image segmentation model may include an artificial neural network (ANN) , a random forest model, a support vector machine, a decision tree, a convolutional neural network (CNN) , a Recurrent Neural Network (RNN) , a deep learning model, a Bayesian network, a K-nearest neighbor (KNN) model, a generative adversarial network (GAN) model, etc.
- the preliminary model may be predefined.
- the inner structure or the preliminary parameters of the preliminary model may be predefined according to one or more characteristics (e.g., size, thickness, complexity, gender, body shape, type of cancer) of a specific object (e.g., the chest, the head) that the preliminary model is associated with.
- for example, for a preliminary model associated with the chest, the preliminary parameters may be predefined according to the characteristics of the lung.
- the processing device 140 may obtain a plurality of training data sets.
- the obtaining module 410 may obtain the plurality of training data sets from a storage device (e.g., the storage device 150, an external storage device) .
- Each training data set may include a sample image and a sample segmentation result of the sample image. Descriptions regarding exemplary sample images and sample segmentation results may be found elsewhere in the present disclosure. See, e.g., sample image 1110, sample segmentation result 1120, and the descriptions thereof.
- the sample image and the sample segmentation result in a same training data set may correspond to a same object or a same region of an object.
- the plurality of training data sets may be associated with same or different objects or same or different regions of one or more objects.
- the sample segmentation result (also referred to as gold standard) may include a sample boundary of a target region (or referred to as a sample target region) in the sample image of an object.
- the target region may correspond to an ROI (e.g., tumors) of the object.
- the processing device 140 may execute the preliminary image segmentation model based on the sample medical image to generate a preliminary segmentation result.
- Descriptions regarding exemplary preliminary segmentation result may be found elsewhere in the present disclosure. See, e.g., preliminary segmentation result 1130 in FIG. 11 and the descriptions thereof.
- the sample image may be inputted into the preliminary image segmentation model.
- the preliminary image segmentation model may generate a preliminary segmentation result that includes labels of locations and/or boundaries of regions on the inputted sample image as an output.
- the processing device 140 may determine a value of an objective function based on the preliminary segmentation result (or the updated segmentation result) and the sample segmentation result. In the first iteration, the processing device 140 may determine the value of the objective function based on the preliminary segmentation result and the sample segmentation result. In subsequent iterations, the processing device 140 may determine the value of the objective function based on the updated segmentation result in a current iteration and the sample segmentation result. In some embodiments, the objective function is used to evaluate how well the image segmentation model is trained (or the training progress of the image segmentation model) .
- the objective function may be configured to reflect an amount of modification by the doctor (or another user) on the output of the image segmentation model in each iteration.
- the objective function may relate to boundary information (e.g., error edge information) of the output of the image segmentation model (e.g., the segmentation result in a current iteration) and the sample segmentation result. More descriptions regarding the determination of the objective function may be found elsewhere in the present disclosure, e.g., FIG. 7 and the descriptions thereof.
- the processing device 140 may determine whether a termination condition is satisfied.
- the termination condition may be an indicator of whether the image segmentation model is trained sufficiently.
- the termination condition may include the value of the objective function being less than a threshold, the difference between values of the objective function in two successive iterations being less than a threshold, all the training data sets being traversed, the number or count of iterations reaching a threshold, etc.
- in response to determining that the termination condition is satisfied, process 600 may proceed to 680; otherwise, process 600 may proceed to 660.
- the processing device 140 may update the image segmentation model.
- the processing device 140 may update the image segmentation model based on the sample image, the sample segmentation result of the sample image, the preliminary segmentation result or updated segmentation result generated in current iteration, and/or the objective function.
- the updating of the image segmentation model may include updating or adjusting at least one weight or parameter of neurons or classifiers in the image segmentation model, changing the ways of connection between the neurons or classifiers, updating or adjusting weight of each layers in the image segmentation model, etc.
- the processing device 140 may execute the updated image segmentation model based on the sample medical image to generate an updated segmentation result. Operation 670 is similar to operation 630 and is not repeated herein. After the updated segmentation result is generated in 670, the process may proceed back to 640. In 640, a new value of the objective function may be determined based on the updated segmentation result and the sample segmentation result.
- the processing device 140 may designate the updated image segmentation model as a trained image segmentation model.
- the processing device 140 may store the updated image segmentation model in the current iteration as the trained image segmentation model in a storage device (e.g., storage device 150, an external storage device) .
- the trained image segmentation model may be obtained and used in operations 520 and 530 to generate the radiation therapy plan.
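- for illustration only, the following minimal sketch shows how the training loop of process 600 (operations 610 through 680) might be organized; the callables segment, objective, and update are hypothetical placeholders supplied by the caller, not components named in the present disclosure.

```python
from typing import Any, Callable, Iterable, Tuple

# A minimal, hypothetical sketch of the training loop of process 600.
# `segment`, `objective`, and `update` are assumed placeholders; they are not
# actual components named in the present disclosure.
def train_segmentation_model(
    params: Any,                                  # preliminary model parameters (610)
    data: Iterable[Tuple[Any, Any]],              # (sample image, sample result) pairs (620)
    segment: Callable[[Any, Any], Any],           # (params, image) -> segmentation result
    objective: Callable[[Any, Any], float],       # (output, gold standard) -> value
    update: Callable[[Any, Any, Any, Any, float], Any],  # one update step (660)
    threshold: float,
    max_iters: int = 1000,
) -> Any:
    prev_value = None
    for _ in range(max_iters):                    # 650: iteration-count condition
        for image, gold in data:
            output = segment(params, image)       # 630/670: run the model
            value = objective(output, gold)       # 640: evaluate the objective
            if value < threshold:                 # 650: objective-value condition
                return params                     # 680: designate as trained
            if prev_value is not None and abs(prev_value - value) < 1e-6:
                return params                     # 650: successive-difference condition
            prev_value = value
            params = update(params, image, gold, output, value)  # 660: update model
    return params
```

- in this sketch, exhausting the data and the iteration budget corresponds to the conditions of operation 650 in which all training data sets have been traversed or the count of iterations reaches a threshold.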
- FIG. 7 is a flowchart illustrating an exemplary process for determining an objective function according to some embodiments of the present disclosure.
- the processing device 140 may be implemented in, for example, the computing device 200 shown in FIG. 2.
- the process 700 may be stored in a storage device (e.g., the storage device 150 and/or the storage 220) in the form of instructions (e.g., an application), and invoked and/or executed by the processing device 140 (e.g., the processor 210 illustrated in FIG. 2 and/or one or more modules in the processing device 140 illustrated in FIG. 4).
- the operations of the illustrated process presented below are intended to be illustrative.
- process 700 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 700 as illustrated in FIG. 7 and described below is not intended to be limiting. In some embodiments, process 700 may correspond to operation 640 in FIG. 6. For brevity, process 700 is described by taking a first iteration of training as an example. Values of the objective function in subsequent iterations may be calculated in a similar way.
- the processing device 140 may obtain a sample segmentation result and a preliminary segmentation result (or updated segmentation result in subsequent iterations) of a sample image.
- the sample segmentation result may include a sample boundary and the preliminary segmentation result may include a preliminary boundary.
- the sample boundary and the preliminary boundary may each delineate a target region corresponding to an ROI of an object.
- the processing device 140 may determine preliminary boundary points on the preliminary boundary.
- in some embodiments, the preliminary boundary points may refer to vertices of the preliminary boundary only.
- alternatively, the preliminary boundary points may refer to any points on the preliminary boundary.
- the processing device 140 may generate a smoothed preliminary boundary by performing a smoothing operation on the preliminary boundary.
- the processing device 140 may further determine the preliminary boundary points on the smoothed preliminary boundary.
- the sample image may include a plurality of slice images.
- the processing device 140 may determine preliminary boundary points on the preliminary boundary in each of the slice images.
- the processing device 140 may generate a middle slice by performing a shape-based interpolation, and determine interpolated boundary points in the middle slice.
- the interpolated boundary points may be designated as preliminary boundary points corresponding to the middle slice.
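- as a hedged illustration of one common way such a shape-based interpolation may be realized (the present disclosure does not fix the exact scheme), adjacent binary slice masks can be converted to signed distance maps, averaged, and thresholded at zero:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# One common form of shape-based interpolation between two binary slice masks:
# average the signed distance maps of the two slices and threshold at zero.
# This is an illustrative assumption, not the method claimed in the disclosure.
def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Signed distance map: positive inside the region, negative outside."""
    mask = mask.astype(bool)
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_middle_slice(mask_a: np.ndarray, mask_b: np.ndarray) -> np.ndarray:
    """Binary mask of an interpolated middle slice between mask_a and mask_b."""
    sdf_mid = 0.5 * (signed_distance(mask_a) + signed_distance(mask_b))
    return sdf_mid > 0  # the interpolated boundary is the zero level set
```

- the interpolated boundary points may then be read off the boundary of the interpolated mask.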
- the processing device 140 may determine distances from preliminary boundary points on the preliminary boundary to the sample boundary.
- the distance from a preliminary boundary point to the sample boundary may be calculated as a minimum value of distances from the preliminary boundary point to every sample boundary point on the sample boundary.
- similarly, the sample boundary points may refer to vertices only or to any points on the sample boundary.
- for example, as illustrated in FIG. 9, the distance from the preliminary boundary point E to the sample boundary may be calculated as L2, i.e., the shortest distance from E to the edge D’E’.
- the processing device 140 may determine an objective function based on the distances from preliminary boundary points on the preliminary boundary to the sample boundary.
- the objective function may relate to an average value, a minimum value, or a maximum value of the distances from the preliminary boundary points on the preliminary boundary to the sample boundary.
- for example, the objective function may relate to a minimum value (e.g., 0), an average value (e.g., (L1+L2)/9), or a maximum value (e.g., L1) of the distances from the preliminary boundary points (e.g., A, B, C, D, E, F, G, H, I) to the sample boundary.
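- a minimal sketch of this distance computation is given below, assuming each boundary is represented as an (N, 2) array of point coordinates; the point-to-boundary distance is approximated here as the minimum distance to the sampled boundary points, and the names are illustrative only.

```python
import numpy as np

# Distance from each preliminary boundary point to the sample boundary,
# approximated as the minimum distance to the sampled sample-boundary points.
def point_to_boundary_distances(prelim_pts: np.ndarray, sample_pts: np.ndarray) -> np.ndarray:
    diffs = prelim_pts[:, None, :] - sample_pts[None, :, :]  # shape (N_A, N_R, 2)
    return np.linalg.norm(diffs, axis=-1).min(axis=1)        # shape (N_A,)

# The objective may then relate to, e.g., the average, minimum, or maximum distance:
# d = point_to_boundary_distances(prelim_pts, sample_pts)
# value = d.mean()  # or d.min(), or d.max()
```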
- the processing device 140 may determine one or more error edges from a plurality of edges of the preliminary boundary.
- the processing device 140 may determine one or more error points in the preliminary boundary points of the preliminary boundary. In some embodiments, for each of the preliminary boundary points on the preliminary boundary, the processing device 140 may determine whether the distance from the preliminary boundary point to the sample boundary exceeds a distance threshold. In response to determining that the distance from the preliminary boundary point to the sample boundary exceeds the distance threshold, the processing device 140 may determine that the preliminary boundary point is an error point; otherwise, the processing device 140 may determine that the preliminary boundary point is not an error point. As exemplified in FIG. 9, as L1 and L2 exceed the distance threshold, D and E may be deemed as error points. In some embodiments, the distance threshold may be one pixel. Alternatively, the distance threshold may be any value, including but not limited to, 2 pixels, 3 pixels, 5 pixels, 10 pixels, 20 pixels, 0.1 mm, 0.2 mm, 0.5 mm, 1 mm, 2 mm, etc.
- the processing device 140 may determine the error edge (s) based on the error point (s) .
- an error edge may be defined as an edge on the preliminary boundary that traverses at least one of the error points.
- points D and E may be determined as error points and edges CD, DE, and EF may be determined as error edges.
- an error edge may be defined as an edge on the preliminary boundary that traverses two or more error points (e.g., two adjacent error points) .
- the processing device 140 may determine whether a preliminary edge is an error edge based on whether an angle formed between the preliminary edge and the sample boundary exceeds an angle threshold. As exemplified in FIG. 9, if an angle θ formed between edge EF and the sample boundary exceeds an angle threshold, the processing device 140 may determine that edge EF is an error edge.
- the processing device 140 may determine a value of an objective function based on the error edge (s) .
- the value of the objective function may be determined based on lengths of the error edge (s) and a length of the sample boundary.
- the value of the objective function may be determined based on a total length of error edges and the length of the sample boundary.
- the objective function may be expressed as:

$$F=\frac{\sum_{i=1}^{N_A}\max\left(a_i,\,a_{i-1}\right)L_{i,i-1}}{\sum_{j=1}^{N_R}L_{j,j-1}}\tag{1}$$

- where $L_{i,i-1}$ denotes the distance between the $i$-th boundary point and the $(i-1)$-th boundary point, the indices being taken cyclically along the closed boundary (so that an edge is counted when either of its endpoints is an error point), $N_A$ denotes the number of boundary points in the preliminary boundary, and $N_R$ denotes the number of boundary points in the sample boundary
- $a_i$ may be expressed as:

$$a_i=\begin{cases}1, & d\left(p_i,R\right)>\lambda\\0, & d\left(p_i,R\right)\leq\lambda\end{cases}\tag{2}$$

- where $d\left(p_i,R\right)$ denotes the distance between the $i$-th preliminary boundary point and the sample boundary, and $\lambda$ denotes the distance threshold
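- for illustration, a sketch implementing Equations (1) and (2) for closed polygonal boundaries is given below; the vectorized formulation and the function name are assumptions, not the claimed method.

```python
import numpy as np

# Sketch of Equations (1) and (2) for closed polygonal boundaries.
# `lam` is the distance threshold (lambda in Equation (2)).
def error_edge_ratio(prelim_pts: np.ndarray, sample_pts: np.ndarray, lam: float) -> float:
    d = np.linalg.norm(
        prelim_pts[:, None, :] - sample_pts[None, :, :], axis=-1
    ).min(axis=1)                                     # d(p_i, R) for each point
    a = d > lam                                       # error-point indicator a_i
    edge_len = np.linalg.norm(
        np.roll(prelim_pts, -1, axis=0) - prelim_pts, axis=1
    )                                                 # length of edge (i, i+1)
    is_error_edge = a | np.roll(a, -1)                # edge touches an error point
    sample_len = np.linalg.norm(
        np.roll(sample_pts, -1, axis=0) - sample_pts, axis=1
    ).sum()                                           # length of sample boundary
    return float(edge_len[is_error_edge].sum() / sample_len)
```

- applied to the boundaries of FIG. 9, this sketch returns the total length of edges CD, DE, and EF divided by the perimeter of A’B’C’D’E’F’G’H’I’, consistent with Equation (5) below.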
- alternatively, the objective function may be directly related to the error points; for example, it may relate to the count of the error points, the locations of the error points, the density of the error points, or the like, or any combination thereof. One such form is the fraction of preliminary boundary points that are error points:

$$F=\frac{1}{N_A}\sum_{i=1}^{N_A}a_i\tag{3}$$
- the processing device 140 may determine areas of regions bounded by the sample boundary and the preliminary boundary.
- the areas of regions bounded by the sample boundary of segmentation result 1010 and the preliminary boundary of segmentation result 1020 may be S2+S3.
- the areas of regions bounded by the sample boundary of segmentation result 1030 and the preliminary boundary of segmentation result 1040 may be S2.
- the processing device 140 may determine the objective function based on the areas of regions and the length of the sample boundary.
- the objective function may be expressed as:

$$F=\frac{S}{L_R}\tag{4}$$

- where $S$ denotes the total area of the regions bounded by the sample boundary and the preliminary boundary (e.g., S2+S3 in FIG. 10A, or S2 in FIG. 10B), and $L_R$ denotes the length of the sample boundary
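- under the assumption that the regions bounded by the two boundaries correspond to the symmetric difference of the enclosed target regions (e.g., S2+S3 in FIG. 10A), Equation (4) may be sketched with polygon operations; shapely is used here purely for illustration.

```python
from shapely.geometry import Polygon

# Illustrative sketch of Equation (4), assuming the bounded regions are the
# symmetric difference of the two enclosed target regions.
def area_over_boundary_length(prelim_pts, sample_pts) -> float:
    prelim = Polygon(prelim_pts)                               # preliminary target region
    sample = Polygon(sample_pts)                               # sample target region
    mismatch_area = prelim.symmetric_difference(sample).area   # e.g., S2 + S3
    return mismatch_area / sample.exterior.length              # S / L_R
```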
- an objective function may relate to the count of boundaries in the preliminary segmentation result.
- in some embodiments, the sample target region may include at least one sub-sample target region, the sample boundary may include a sub-sample boundary of each sub-sample target region, and the preliminary boundary may include at least one sub-preliminary boundary.
- the value of the objective function may be determined further based on a difference between the count of the at least one sub-sample boundary and the count of the at least one sub-preliminary boundary.
- for example, if the sample target region includes a region corresponding to the left lung and a region corresponding to the right lung, the sample boundary may include a first sub-sample boundary corresponding to the left lung and a second sub-sample boundary corresponding to the right lung. If the preliminary boundary includes only one sub-preliminary boundary or more than two sub-preliminary boundaries, the preliminary segmentation result may be erroneous or inaccurate.
- the objective function may be determined based on the difference between the count of the at least one sub-sample boundary and the count of the at least one sub-preliminary boundary so as to evaluate the recall level of the image segmentation model, as sketched below.
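- a hedged sketch of this count comparison is given below, assuming the segmentation results are available as binary masks so that sub-regions (and hence sub-boundaries) can be counted as connected components; the helper name is hypothetical.

```python
import numpy as np
from skimage.measure import label

# Count term comparing the number of connected target regions (and hence
# sub-boundaries) in the sample and preliminary results. Binary masks assumed.
def sub_boundary_count_difference(prelim_mask: np.ndarray, sample_mask: np.ndarray) -> int:
    _, n_prelim = label(prelim_mask, return_num=True)
    _, n_sample = label(sample_mask, return_num=True)
    return abs(n_sample - n_prelim)  # e.g., left lung + right lung -> n_sample == 2
```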
- FIG. 8A and FIG. 8B are schematic diagrams illustrating exemplary sample segmentation results and preliminary segmentation results.
- in FIG. 8A and FIG. 8B, the segmentation results in solid lines (e.g., the segmentation results 810 and 830) may be sample segmentation results.
- the sample segmentation results 810 and 830 may be referred to as “gold standard” (e.g., correct segmentation results) .
- the sample segmentation results 810 and 830 may be generated by experienced doctors or experts.
- radiation therapy treatments planned using the sample segmentation results 810 and 830 may have led to a desirable or acceptable result (e.g., showing efficacy in treating a tumor with little or no harm to healthy tissue) .
- the sample segmentation results 810 and 830 may be obtained from two training data sets, respectively.
- the segmentation results in broken lines may be segmentation results generated by the image segmentation model during the training of the image segmentation model.
- the segmentation results 820 and 840 may be referred to as preliminary segmentation results or updated segmentation results depending on the iteration of the training of the image segmentation model.
- the segmentation results 810 and 820 may correspond to a same target region of an object.
- the segmentation results 830 and 840 may correspond to a same target region of an object.
- assuming a conventional objective function that considers only the overlapping area of the segmentation results, a model trained using such an objective function may determine that the segmentation result 820 and the segmentation result 840 are equally close to their respective “gold standards.”
- however, the segmentation result 820 needs a much larger amount of modification by a user to reach the gold standard than the segmentation result 840.
- hence, the conventional objective function is suboptimal.
- the objective function described in the present disclosure (e.g., operations 740, 760, and 780 in FIG. 7) takes the boundary information into consideration, and the image segmentation model trained with the objective function described in the present disclosure may determine that segmentation result 840 is much better than segmentation result 820.
- the image segmentation model relating to the segmentation result 820 may be trained further by performing several more iterations until the segmentation result 820 is as good as or better than the segmentation result 840, or a termination condition is satisfied.
- FIG. 9 is a schematic diagram illustrating exemplary segmentation results.
- in FIG. 9, the polygon A’B’C’D’E’F’G’H’I’ may be a sample boundary of a sample segmentation result and the polygon ABCDEFGHI may be a preliminary boundary of a preliminary segmentation result or an updated boundary of an updated segmentation result.
- distances from preliminary boundary points on the preliminary boundary to the sample boundary may be calculated.
- the distance from a preliminary boundary point to the sample boundary may be calculated as a minimum value of distances from the preliminary boundary point to every point or every vertex on the sample boundary. For example, the distance from the preliminary boundary point A to the sample boundary may be zero because it coincides with the sample boundary point A’.
- the distances from the preliminary boundary points B, C, F, G, H, I to the sample boundary are zero.
- the distance from the preliminary boundary point E to the sample boundary may be calculated as L2 and the distance from the preliminary boundary point D to the sample boundary may be calculated as L1.
- if L1 and L2 both are greater than a distance threshold, points D and E may be determined as error points. If L1 is greater than the distance threshold while L2 is less than or equal to the distance threshold, only point D may be determined as an error point. Assuming that L1 and L2 both are greater than the distance threshold, points D and E may be determined as error points. As edges CD, DE, and EF each traverse at least one of the error points D and E, edges CD, DE, and EF may be determined as error edges.
- the objective function may be calculated as a ratio of a total length of error edges to the length of the sample boundary:

$$F=\frac{\overline{CD}+\overline{DE}+\overline{EF}}{\overline{A'B'}+\overline{B'C'}+\overline{C'D'}+\overline{D'E'}+\overline{E'F'}+\overline{F'G'}+\overline{G'H'}+\overline{H'I'}+\overline{I'A'}}\tag{5}$$
- FIG. 10A and FIG. 10B are schematic diagrams illustrating exemplary segmentation results.
- in FIG. 10A and FIG. 10B, the segmentation results in solid lines (e.g., segmentation results 1010 and 1030) may be sample segmentation results, and the segmentation results in broken lines (e.g., segmentation results 1020 and 1040) may be segmentation results generated by the image segmentation model during the training of the image segmentation model.
- the segmentation results 1020 and 1040 may be referred to as preliminary segmentation results or updated segmentation results depending on the iteration of the training of the image segmentation model.
- the segmentation results 1010 and 1020 may correspond to a same target region of an object.
- the segmentation results 1030 and 1040 may correspond to a same target region of an object.
- a conventional objective function may consider only the overlapping area of the segmentation results; for example, such an objective function may be determined based on a Dice function.
- the Dice function may be expressed as:

$$\operatorname{Dice}\left(A,R\right)=\frac{2\left|A\cap R\right|}{\left|A\right|+\left|R\right|}\tag{6}$$

- where $A$ denotes the target region in the preliminary (or updated) segmentation result, $R$ denotes the target region in the sample segmentation result, $\cap$ is an overlapping operator, and $\left|X\right|$ denotes an area of region $X$
- for instance, S1 may be 100 (units are omitted), S3 may be 15, and S2 may be 20. Under the Dice function, the segmentation result 1040 is judged to be just slightly better than the segmentation result 1020.
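- for concreteness, assuming S1 denotes the overlapping area in both FIG. 10A and FIG. 10B (an assumption about the figures, which are not reproduced here), the Dice values per Equation (6) work out as:

$$\operatorname{Dice}_{1020}=\frac{2S_1}{2S_1+S_2+S_3}=\frac{200}{235}\approx 0.85,\qquad\operatorname{Dice}_{1040}=\frac{2S_1}{2S_1+S_2}=\frac{200}{220}\approx 0.91$$

- a difference of only about 0.06, which understates the improvement described below.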
- however, it may be noted from FIGs. 10A and 10B that about 40% of the error edges have been corrected, and the amount of subsequent modification by the doctor should be reduced by about 40% accordingly. Therefore, a conventional objective function that considers only the overlapping area is suboptimal and often misleading.
- in contrast, the present application provides a method (e.g., operation 780) in which the objective function is determined based on the length of the sample boundary and the areas of the regions bounded by the sample boundary and the preliminary boundary.
- the objective function for the segmentation result 1020 may be calculated as 35/L1, where L1 is the length of the sample boundary of the segmentation result 1010; the objective function for the segmentation result 1040 may be calculated as 20/L2, where L2 is the length of the sample boundary of the segmentation result 1030. It may be noticed from FIGs. 10A and 10B that L1 is very close to L2. Therefore, the objective function for the segmentation result 1040 is reduced by about 43% ((35-20)/35), or slightly less, relative to the objective function for the segmentation result 1020, reflecting the expected amount of subsequent modification by the doctor.
- it should be noted that the term “objective function” may refer to a mathematical expression (e.g., Equation (4)) or a specific value with respect to a specific segmentation result.
- FIG. 11 is a schematic diagram illustrating an exemplary process of training an image segmentation model.
- the segmentation model 1160 may include an input layer, a hidden layer and an output layer.
- the hidden layer may include a plurality of convolutional layers, a plurality of pooling layers, and/or a plurality of fully connected layers (not shown in FIG. 11) .
- a training data set including a sample image 1110 and a sample segmentation result 1120 of the sample image 1110 may be inputted into an input layer of a preliminary segmentation model 1160.
- the output layer may generate a preliminary segmentation result 1130.
- the processing device 140 may determine the value of an objective function based on the sample segmentation result 1120 and the preliminary segmentation result 1130.
- the processing device 140 may determine whether a termination condition is satisfied (e.g., whether the value of the objective function is less than a termination threshold).
- the sample segmentation result 1120 and the preliminary segmentation result 1130 may have different boundaries.
- because the objective function configured according to some embodiments of the present disclosure relates to boundary information of the segmentation results, the value of the objective function corresponding to the preliminary segmentation result 1130 may be greater than the threshold.
- the image segmentation model may be further updated based on the sample image 1110, the sample segmentation result 1120, the preliminary segmentation result 1130 and the value of the objective function.
- after one or more iterations, the termination condition may be satisfied (e.g., the value of the objective function may become less than the threshold, or the boundaries of the segmentation results 1140, 1150, etc., may become the same as or sufficiently close to the sample boundary), and the updated image segmentation model in the current iteration may be designated as a trained image segmentation model.
- aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of software and hardware implementations that may all generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
- a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof.
- a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Radiation-Therapy Devices (AREA)
Abstract
A treatment planning method may include obtaining a medical image of an object, the object including an ROI to which a radiation therapy treatment is directed. The method may also include obtaining an image segmentation model having been trained based on an objective function, the objective function relating to error edge information of an output of the image segmentation model. The method may also include generating a segmentation result by executing the image segmentation model based on the medical image, the segmentation result including a boundary of a target region in the medical image corresponding to the ROI of the object. The method may also include planning the radiation therapy treatment directed to the ROI of the object according to the segmentation result.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/118205 WO2022061840A1 (fr) | 2020-09-27 | 2020-09-27 | Systèmes et procédés de génération de plan de radiothérapie |
CN202080105608.4A CN116261743A (zh) | 2020-09-27 | 2020-09-27 | 用于生成放射治疗计划的系统和方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/118205 WO2022061840A1 (fr) | 2020-09-27 | 2020-09-27 | Systèmes et procédés de génération de plan de radiothérapie |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022061840A1 true WO2022061840A1 (fr) | 2022-03-31 |
Family
ID=80844873
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/118205 WO2022061840A1 (fr) | 2020-09-27 | 2020-09-27 | Systèmes et procédés de génération de plan de radiothérapie |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116261743A (fr) |
WO (1) | WO2022061840A1 (fr) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170091574A1 (en) * | 2014-05-16 | 2017-03-30 | The Trustees Of The University Of Pennsylvania | Applications of automatic anatomy recognition in medical tomographic imagery based on fuzzy anatomy models |
WO2018039380A1 (fr) * | 2016-08-26 | 2018-03-01 | Elekta, Inc. | Systèmes et procédés de segmentation d'images à l'aide de réseaux neuronaux convolutionnels |
WO2018140596A2 (fr) * | 2017-01-27 | 2018-08-02 | Arterys Inc. | Segmentation automatisée utilisant des réseaux entièrement convolutifs |
CN111105424A (zh) * | 2019-12-19 | 2020-05-05 | 广州柏视医疗科技有限公司 | 淋巴结自动勾画方法及装置 |
CN111128340A (zh) * | 2019-12-25 | 2020-05-08 | 上海联影医疗科技有限公司 | 放射治疗计划生成设备、装置和存储介质 |
CN111311592A (zh) * | 2020-03-13 | 2020-06-19 | 中南大学 | 一种基于深度学习的三维医学图像自动分割方法 |
Also Published As
Publication number | Publication date |
---|---|
CN116261743A (zh) | 2023-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11369805B2 (en) | System and method for pretreatement imaging in adaptive radiation therapy | |
CN109060849B (zh) | 一种确定辐射剂量调制线的方法、系统和装置 | |
US20230330436A1 (en) | System and method for adaptive radiation therapy | |
US11813103B2 (en) | Methods and systems for modulating radiation dose | |
US11972575B2 (en) | Systems and methods for generating augmented segmented image set | |
US20230009625A1 (en) | Systems and methods for generating adaptive radiation therapy plan | |
WO2021190276A1 (fr) | Systèmes et procédés de simulation de données de projection | |
US20230290480A1 (en) | Systems and methods for clinical target contouring in radiotherapy | |
CN109077746B (zh) | 一种确定辐射剂量调制线的方法、系统和装置 | |
US12011612B2 (en) | Systems and methods for robust radiation treatment planning | |
US20210290979A1 (en) | Systems and methods for adjusting beam-limiting devices | |
US11244446B2 (en) | Systems and methods for imaging | |
US20220387822A1 (en) | Systems and methods for radiotherapy planning | |
US20230169668A1 (en) | Systems and methods for image registration | |
WO2022061840A1 (fr) | Systèmes et procédés de génération de plan de radiothérapie | |
WO2019091087A1 (fr) | Systèmes et procédés de correction d'images de projection dans une reconstruction d'image de tomodensitométrie | |
US20230191158A1 (en) | Systems and methods for radiotherapy | |
US20240335677A1 (en) | Systems and methods for robust radiation treatment planning |
Legal Events
- Date | Code | Title | Description |
- ---|---|---|---|
- | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20954691; Country of ref document: EP; Kind code of ref document: A1 |
- | NENP | Non-entry into the national phase | Ref country code: DE |
- | 122 | Ep: pct application non-entry in european phase | Ref document number: 20954691; Country of ref document: EP; Kind code of ref document: A1 |