CN113041516B - Method, system, processing device and storage medium for three-dimensional image-guided positioning


Info

Publication number: CN113041516B
Authority: CN (China)
Prior art keywords: patient, image, virtual, target area, tissue organ
Legal status: Active
Application number: CN202110330501.6A
Other languages: Chinese (zh)
Other versions: CN113041516A
Inventors: 申国盛, 李强, 刘新国, 戴中颖, 金晓东, 贺鹏博
Current Assignee: Institute of Modern Physics of CAS
Original Assignee: Institute of Modern Physics of CAS
Application filed by Institute of Modern Physics of CAS; priority to CN202110330501.6A; published as CN113041516A and, upon grant, CN113041516B

Classifications

    • A61N 5/1049 — Radiation therapy; X-ray, gamma-ray and particle-irradiation therapy; monitoring, verifying and controlling systems for verifying the position of the patient with respect to the radiation beam
    • A61N 5/107 — Target adjustment, e.g. moving the patient support, in real time, i.e. during treatment
    • A61N 2005/1062 — Verifying the patient position using an X-ray imaging system having a separate imaging source, using virtual X-ray images, e.g. digitally reconstructed radiographs [DRR]
    • G06N 3/045 — Neural networks; architectures; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06T 7/12 — Image analysis; segmentation; edge-based segmentation
    • G06T 7/13 — Image analysis; segmentation; edge detection
    • G06T 2207/10081 — Image acquisition modality: computed X-ray tomography [CT]
    • G06T 2207/20081 — Algorithmic details: training; learning
    • G06T 2207/30096 — Subject of image: tumor; lesion


Abstract

The invention relates to a method, a system, and a storage medium for three-dimensional image-guided positioning. The method comprises the following steps: segmenting the tissue organs and tumor target area of the patient's 3D-CT image set with an automatic segmentation algorithm, and reconstructing the contour data of the tissue organs and tumor target area of the patient's treatment plan with a tissue-organ model reconstruction algorithm; generating a virtual 3D-CT image set of the patient from real-time DR images of the patient with an artificial-intelligence network algorithm; segmenting the patient's virtual tissue organs and tumor target area in the virtual 3D-CT image set with the automatic segmentation algorithm, and reconstructing their contour data; and registering the contour data of the treatment plan's tissue organs and tumor target area against that of the virtual tissue organs and tumor target area, outputting the patient's positioning offset parameters, and judging whether the radiotherapy conditions are met: if not, the patient is guided to reposition; if so, positioning is complete.

Description

Method, system, processing device and storage medium for three-dimensional image-guided positioning
Technical Field
The invention relates to a method, a system, and a storage medium for three-dimensional image-guided positioning in radiotherapy based on artificial-intelligence technology and a DR system, and belongs to the field of image-guided patient positioning in radiotherapy.
Background
The speed and precision of patient setup verification in radiation therapy are important factors affecting treatment efficiency and therapeutic outcome. In particle-beam precision radiotherapy in particular, patient setup occupies a large share of the treatment time, which greatly reduces radiotherapy efficiency, increases treatment cost, and compromises the therapeutic effect. How to guide the patient through setup and verification quickly and effectively is therefore one of the keys of image-guided radiotherapy. Current image-guidance systems are usually stand-alone medical devices; most acquire the patient's setup-position information with a digital radiography (DR) system, a cone-beam CT (CBCT) system, or a CT-on-rail system, and use it to guide and verify patient setup.
In a conventional DR-image-based guidance system, either two DR imaging devices intersecting at a large angle (close or equal to 90° orthogonality) or a single rotating DR device is required to produce two widely separated DR images. These are registered against digitally reconstructed radiographs (DRRs) generated from the patient's treatment-plan CT to obtain the setup-position deviation and guide patient positioning; this is not three-dimensional (3D) positioning guidance in the true sense. CBCT and CT-on-rail imaging systems, moreover, deliver additional radiation dose to the patient, increasing the risk of complications, and are expensive; CBCT images also have low density resolution, which limits the accuracy and speed of registration with the patient's planning CT.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a three-dimensional image-guided positioning method, system, and storage medium based on artificial intelligence and a DR system, which realize accurate 3D image guidance and obtain the patient's positioning information.
To achieve this object, the invention adopts the following technical scheme:
In a first aspect, the present invention provides a method for three-dimensional image-guided positioning, comprising:
segmenting the tissue organs and tumor target area of the patient's 3D-CT image set with an automatic segmentation algorithm, and reconstructing the contour data of the tissue organs and tumor target area of the patient's treatment plan with a tissue-organ model reconstruction algorithm;
generating a virtual 3D-CT image set of the patient from real-time DR images of the patient with an artificial-intelligence network algorithm;
segmenting the virtual tissue organs and tumor target area of the patient's virtual 3D-CT image set with the automatic segmentation algorithm, and reconstructing the contour data of the patient's virtual tissue organs and tumor target area with the tissue-organ model reconstruction algorithm;
registering the contour data of the treatment plan's tissue organs and tumor target area against the contour data of the virtual tissue organs and tumor target area, outputting the patient's positioning offset parameters, and judging whether they meet the radiotherapy conditions: if not, the patient is guided to reposition; if so, positioning is complete.
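The four steps can be wired together as a toy end-to-end pipeline. This is a minimal sketch, not the patented implementation: the CNN segmenter is replaced by a fixed threshold, the AI DR-to-CT network by a trivial volume fill, registration by a centroid difference, and the function names and 2 mm tolerance are all illustrative assumptions.

```python
import numpy as np

def segment_and_reconstruct(ct_volume):
    # Stand-in for CNN segmentation + 3D reconstruction: the "contours" are
    # simply the coordinates of voxels above a fixed threshold.
    return np.argwhere(ct_volume > 0.5).astype(float)

def generate_virtual_ct(dr_images, plan_shape):
    # Stand-in for the AI network that maps DR projections to a 3D-CT:
    # here we just fill a volume with the mean DR intensity.
    return np.full(plan_shape, float(np.mean(dr_images)))

def positioning_pipeline(plan_ct, dr_images, tolerance_mm=2.0):
    plan_contours = segment_and_reconstruct(plan_ct)
    virtual_ct = generate_virtual_ct(dr_images, plan_ct.shape)
    virtual_contours = segment_and_reconstruct(virtual_ct)
    # Translation-only "registration": difference of point-cloud centroids.
    offset = virtual_contours.mean(axis=0) - plan_contours.mean(axis=0)
    return offset, bool(np.all(np.abs(offset) <= tolerance_mm))

# Toy run: plan anatomy and DR-derived anatomy coincide, so the offset is zero.
plan_ct = np.zeros((4, 4, 4)); plan_ct[1:3, 1:3, 1:3] = 1.0
offset, ok = positioning_pipeline(plan_ct, dr_images=np.ones((2, 4, 4)))
```

With identical plan and virtual anatomy the offset vanishes and setup passes; a shifted virtual anatomy would instead trigger repositioning.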
Further, a real-time DR image of the patient is acquired by employing a DR imaging apparatus.
Furthermore, the DR imaging equipment comprises one X-ray source and an imaging flat panel corresponding to it;
the X-ray source is mounted at the top of the treatment room and the imaging panel in the floor of the treatment room, each moving along its own small-angle track; or,
the X-ray source and imaging panel are joined into one unit by a C-arm and move through a small angle as a whole.
Further, the artificial-intelligence network algorithm is obtained through training and verification, comprising the following steps:
a DR imaging device captures DR images of a patient while a CT system captures a 3D-CT image of the same region of the same patient; the DR images are placed in one-to-one correspondence with the 3D-CT images to establish a data set of DR images and their corresponding 3D-CT images. Part of the data set serves as the training set and the remainder as the verification set; a neural network model is built, trained, and verified, and the weights and parameters of the artificial-intelligence network are obtained by continuous iteration, yielding the trained network model.
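The train/verify loop can be illustrated with synthetic data. This is a deliberately tiny stand-in: a single linear layer fitted by gradient descent takes the place of the patent's neural network, and the 100 synthetic DR/CT pairs, 80/20 split, iteration count, and learning rate are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired data: 100 samples of flattened DR pixels (16 values)
# and flattened 3D-CT voxels (32 values), linked by a fixed linear map.
true_W = rng.normal(size=(16, 32))
dr = rng.normal(size=(100, 16))
ct = dr @ true_W + 0.01 * rng.normal(size=(100, 32))

# One-to-one DR/CT pairs split into training and verification sets.
dr_train, dr_val = dr[:80], dr[80:]
ct_train, ct_val = ct[:80], ct[80:]

# "Network": one linear layer; weights obtained by continuously iterating
# gradient-descent updates on the mean-squared reconstruction error.
W = np.zeros((16, 32))
for _ in range(2000):
    grad = 2.0 / len(dr_train) * dr_train.T @ (dr_train @ W - ct_train)
    W -= 0.05 * grad

# Verification step: error of the trained mapping on held-out pairs.
val_mse = float(np.mean((dr_val @ W - ct_val) ** 2))
```

After the iterations converge, `val_mse` sits near the injected noise floor, which is the verification check the text describes.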
Furthermore, the automatic segmentation algorithm adopts a deep-learning convolutional neural network model that can automatically segment the tissue organs and tumor target area from an input CT image.
Furthermore, the tissue-organ reconstruction algorithm can reconstruct a 3D model of all tissue organs or of specified ones, and can render different tissue organs in different colors and modes so that the user can easily observe and distinguish them.
Further, the registration employs a tissue organ registration algorithm for manual and/or automatic 3D model registration.
In a second aspect, the present invention also provides a system for three-dimensional image-guided positioning, the system comprising:
an organ reconstruction unit configured to segment the tissue organs and tumor target area of the patient's 3D-CT image set with an automatic segmentation algorithm, and to reconstruct the contour data of the tissue organs and tumor target area of the patient's treatment plan with a tissue-organ model reconstruction algorithm;
a virtual image generation unit configured to generate a virtual 3D-CT image set of the patient from real-time DR images of the patient with an artificial-intelligence network algorithm;
a virtual organ reconstruction unit configured to segment the virtual tissue organs and tumor target area of the patient's virtual 3D-CT image set with the automatic segmentation algorithm, and to reconstruct the contour data of the patient's virtual tissue organs and tumor target area with the tissue-organ model reconstruction algorithm;
a positioning judgment unit configured to register the contour data of the treatment plan's tissue organs and tumor target area against the contour data of the virtual tissue organs and tumor target area, output the patient's positioning offset parameters, and judge whether they meet the radiotherapy conditions: if not, the patient is guided to reposition; if so, positioning is complete.
In a third aspect, the present invention further provides a processing device comprising at least a processor and a memory storing a computer program; when executing the computer program, the processor implements the method for three-dimensional image-guided positioning according to the first aspect of the present invention.
In a fourth aspect, the present invention also provides a computer storage medium having computer readable instructions stored thereon, the computer readable instructions being executable by a processor to implement the method for three-dimensional image-guided positioning according to the first aspect of the present invention.
Due to the adoption of the technical scheme, the invention has the following advantages:
1. The method generates a virtual patient-setup 3D-CT image from a small number of DR images, then performs 3D reconstruction and registration between the patient's treatment-plan 3D-CT and the virtual 3D-CT to obtain the patient's setup information, realizing accurate 3D image-guided radiotherapy and overcoming the deficiencies of conventional DR-image and CBCT image guidance;
2. The invention uses artificial-intelligence technology to convert real-time 2D DR images into a virtual 3D-CT image and performs 3D reconstruction and registration between it and the treatment-plan 3D-CT image, realizing true 3D guidance;
3. The invention requires only a single DR imaging device and is therefore inexpensive; compared with a CBCT system it reduces equipment cost while still achieving 3D positioning guidance, and it delivers a lower additional radiation dose to the patient during imaging.
In summary, the present invention is suitable for patient positioning guidance in any radiation therapy system.
Drawings
Various additional advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Like reference numerals refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart of a 3D image-guided positioning method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of DR apparatus coordinates according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an artificial intelligence network algorithm according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
It is to be understood that the terminology used herein is for the purpose of describing particular example embodiments only, and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," "includes," "including," and "having" are inclusive and therefore specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order described or illustrated, unless specifically identified as an order of performance. It should also be understood that additional or alternative steps may be used.
For convenience of description, spatially relative terms, such as "inner", "outer", "lower", "upper", and the like, may be used herein to describe one element or feature's relationship to another element or feature as illustrated in the figures. Such spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures.
Computer technology, and artificial-intelligence technology in particular, has shown excellent performance in computer vision, in medical-image processing and segmentation, and in multi-modal image generation, and multi-modal image generation and automatic segmentation are increasingly practical. It is therefore both feasible and necessary to develop a method that achieves high-precision 3D patient-setup image guidance and verification based on artificial intelligence while reducing the price of the image-guidance device.
Example 1
As shown in fig. 1, the 3D image-guided positioning method based on artificial-intelligence technology and a DR system provided in this embodiment includes:
S1: set up a single DR imaging system.
Specifically, as shown in the schematic of fig. 2, the DR imaging apparatus of this embodiment comprises one X-ray source 1 and a corresponding imaging panel 2 for acquiring real-time DR images of the patient. The X-ray source 1 may be installed at the top of the treatment room and the imaging panel 2 in the floor, each moving along its own small-angle track under a corresponding control system that guarantees consistency of motion direction and positional accuracy; alternatively, the X-ray source 1 and imaging panel 2 may be joined by a C-arm and moved through a small angle as a whole.
In some implementations, the DR imaging apparatus performs small-angle rotational imaging about the center point of the treatment room to generate DR images at different angles. In the treatment-room coordinate system XYZ, the origin is the beam isocenter; the X axis is parallel to the floor and points in the zero-degree direction of the treatment couch, the Y axis is parallel to the floor and points in the 90° direction of the couch, and the Z axis is perpendicular to the floor and points toward the ceiling.
In other implementations, the small angle of the present embodiment is defined between-15 degrees and +15 degrees.
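In the coordinate frame just described, the source pose during small-angle imaging can be written down directly. A minimal geometric sketch, under assumptions not stated in the text: the source starts directly above the isocenter on the +Z axis and swings about the X axis, and the 1000 mm source-to-isocenter distance is purely illustrative.

```python
import numpy as np

def source_position(angle_deg, sad_mm=1000.0):
    """X-ray source position after rotating `angle_deg` about the X axis
    (isocenter at the origin, start position on the +Z axis)."""
    a = np.deg2rad(angle_deg)
    return np.array([0.0, sad_mm * np.sin(a), sad_mm * np.cos(a)])

# Source positions across the -15° to +15° range used in this embodiment.
positions = {ang: source_position(ang) for ang in (-15, 0, 15)}
```

Each sampled angle gives a distinct projection direction, which is what lets a handful of DR images constrain the virtual 3D-CT.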
S2: construct a data set of patient-radiotherapy DR images and their corresponding 3D-CT images for training and verifying the artificial-intelligence network model.
Specifically, a DR imaging device captures DR images of a patient and a CT system captures a 3D-CT image of the same region of the same patient; the DR and 3D-CT images are placed in one-to-one correspondence to establish the data set. 80% of the data set serves as the training set and 20% as the verification set; the model is first constructed and then trained and verified.
S3: construct the artificial-intelligence network model. Its principle is shown in fig. 3; the algorithm is implemented as a neural network that takes a small number of DR images as input and outputs a virtual 3D-CT data set.
S4: train and verify the model of step S3 with the DR image set of step S2 and its corresponding 3D-CT images to obtain the weights and parameters of the network, including the weight and the parameters of each neuron of the network model.
Specifically, during training and verification the model takes as input N DR images and the M-slice 3D-CT image corresponding to them. N is at least 1, and each DR image is captured at a different angle. Although a larger N is theoretically better, more DR images mean more additional radiation dose to the patient and higher economic cost, so N should not exceed 8. M is chosen with reference to the slice count of the treatment-plan CT and is generally close or equal to it; the slice thickness should likewise match that of the plan CT as closely as possible, so that the virtual 3D-CT can be registered with the plan 3D-CT.
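The stated constraints on N and M are easy to encode as a pre-flight check. The function below is an illustrative assumption, including its 10% slice-count tolerance — the text only requires M to be close to or equal to the plan-CT slice count.

```python
def validate_network_inputs(n_dr_images, m_slices, plan_slices):
    """Enforce the stated constraints: 1 <= N <= 8 DR images, and a
    virtual-CT slice count M close to the treatment-plan CT's."""
    if not 1 <= n_dr_images <= 8:
        raise ValueError("N must be between 1 and 8 DR images")
    if abs(m_slices - plan_slices) > 0.1 * plan_slices:  # assumed 10% tolerance
        raise ValueError("M should be close to the plan-CT slice count")
    return True

# A valid configuration: 4 DR angles, 98 virtual slices against a 100-slice plan CT.
ok = validate_network_inputs(n_dr_images=4, m_slices=98, plan_slices=100)
```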
S5: acquire real-time DR images of the patient and generate the patient's current virtual 3D-CT image with the artificial-intelligence network algorithm and the trained network weights and parameters; the tissue organs are subsequently obtained by segmentation of the patient's 3D-CT.
Specifically, a real-time DR image is one captured before or during the patient's current fractionated treatment, and is used to guide and verify the patient's setup for that treatment.
S6: construct a deep-learning tissue-organ automatic segmentation algorithm. After training and verification with CT images and the corresponding manual tissue-organ segmentations delineated by physicians, it can automatically and accurately segment the tissue organs (e.g. skin, bones) and the tumor target area in a given CT image.
Specifically, the algorithm uses a deep-learning convolutional neural network model that segments tissue organs automatically from an input CT image; its training and verification data come from tissue organs manually delineated by experienced physicians on the corresponding CTs, and the quality of the training data set must be ensured.
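To make the data flow of this step concrete, the classical baseline that the trained CNN replaces can be shown in a few lines: fixed Hounsfield-unit thresholding. This is only a stand-in — the patent's algorithm is a trained convolutional network, and the +300 HU bone threshold is a common rule of thumb, not a value from the text.

```python
import numpy as np

def segment_bone_hu(ct_hu, threshold_hu=300.0):
    """Crude bone mask by Hounsfield-unit thresholding (CNN stand-in)."""
    return ct_hu > threshold_hu

# Toy 2x2 slice in HU: air, soft tissue, trabecular bone, dense cortical bone.
ct_slice = np.array([[-1000.0, 40.0],
                     [400.0, 1200.0]])
bone = segment_bone_hu(ct_slice)
```

A CNN improves on this baseline precisely where fixed thresholds fail, e.g. separating adjacent soft-tissue organs of similar density.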
S7: construct a 3D tissue-organ model reconstruction algorithm. From an input 3D-CT image and the automatically generated tissue-organ contour set — the current patient's treatment-plan CT data and/or contour set — the algorithm reconstructs and outputs 3D models of tissue organs (e.g. skin, skeleton); all organs in the CT, or selected ones, can be reconstructed as needed.
Specifically, the 3D tissue-organ reconstruction algorithm can reconstruct all tissue organs or specified ones, and renders different organs in different colors and modes for easy observation and distinction.
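The core of this step — turning a segmented voxel mask into a renderable 3D surface — can be sketched without a mesh library by extracting boundary voxels. Production systems would more likely use marching cubes; this boundary-voxel version is a simplified assumption.

```python
import numpy as np

def mask_surface(mask):
    """Boundary voxels of a binary mask: voxels that are set but have at
    least one 6-connected neighbour that is not (a crude 'surface' used in
    place of a full mesh reconstruction such as marching cubes)."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = np.ones_like(mask, dtype=bool)
    for axis in range(mask.ndim):
        for shift in (-1, 1):
            # Shifted view gives, for every voxel, its neighbour along `axis`.
            interior &= np.roll(padded, shift, axis=axis)[
                tuple(slice(1, -1) for _ in range(mask.ndim))]
    return mask & ~interior

cube = np.zeros((5, 5, 5), dtype=bool)
cube[1:4, 1:4, 1:4] = True          # a 3x3x3 solid "organ"
surface = mask_surface(cube)
```

For the 3x3x3 block only the single central voxel is interior, so 26 of its 27 voxels lie on the surface.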
S8: using the 3D tissue-organ reconstruction algorithm, perform three-dimensional reconstruction from the patient's virtual CT data and/or contour set to obtain models of the patient's virtual 3D tissue organs such as skin and skeleton.
S9: construct a conventional 3D-model registration algorithm that registers several input 3D models and outputs their mutual offsets.
Specifically, the 3D tissue-organ registration algorithm performs manual and/or automatic registration of the reconstructed 3D tissue-organ models and accurately outputs the offset parameters between the two models.
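Automatic rigid registration of two reconstructed models can be illustrated with the Kabsch algorithm, which recovers the best-fit rotation and translation between point sets with known correspondence. This is one standard choice for the step, not necessarily the algorithm used in the patent.

```python
import numpy as np

def register_rigid(moving, fixed):
    """Kabsch algorithm: rotation R and translation t that best map the
    `moving` point set onto `fixed` (corresponding (N, 3) point clouds)."""
    mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = fc - R @ mc
    return R, t

# Toy check: a point cloud shifted by (2, -1, 0.5) mm with no rotation.
pts = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
shift = np.array([2.0, -1.0, 0.5])
R, t = register_rigid(pts, pts + shift)
```

The recovered translation is exactly the applied couch shift, which is the offset parameter this step must report.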
S10: using the 3D-model registration algorithm, perform automatic and/or manual registration with the patient's virtual 3D tissue organs and the planned and real-time 3D tissue-organ models and contour sets — in part or in full — as input. In the invention, patient-setup accuracy can be verified by computing once after the patient's setup is completed or before treatment, and once every 3 to 10 minutes during treatment, outputting the current patient's setup-offset data.
S11: judge whether the offset data output in step S10 meet the configured radiotherapy requirements. If not, guide the patient to redo the setup according to the offset data, return to step S5 after repositioning, and continue the verification process; if the offset data meet the treatment requirements, setup verification is finished and treatment begins. The criterion for meeting the treatment requirements is determined jointly by physicians and engineering staff in accordance with radiotherapy laws, regulations, and industry standards.
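The decision in step S11 reduces to a threshold test on the offset vector. The 2 mm per-axis tolerance below is purely illustrative — as the text notes, the real criterion is set by physicians and engineers under radiotherapy regulations and standards.

```python
import numpy as np

def check_setup(offset_mm, tolerance_mm=2.0):
    """Return (ok, couch_correction): ok when every axis of the setup
    offset is within tolerance; otherwise the correction that cancels it."""
    offset_mm = np.asarray(offset_mm, dtype=float)
    if np.all(np.abs(offset_mm) <= tolerance_mm):
        return True, None
    return False, -offset_mm

ok_case, corr_none = check_setup([0.5, -1.0, 0.2])    # within tolerance
bad_case, correction = check_setup([3.0, 0.0, -0.4])  # x-axis out of tolerance
```

In the failing case the returned correction is applied, the patient is re-imaged (step S5), and the check repeats until it passes.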
In conclusion, the invention performs 3D reconstruction from simple 2D DR images using artificial-intelligence technology to obtain a real-time virtual 3D-CT image of the patient, then three-dimensionally reconstructs and registers this virtual 3D-CT against the patient's treatment-plan 3D-CT to obtain accurate 3D setup-offset parameters, guiding and verifying patient setup and ensuring the effect of radiotherapy.
Example 2
Based on embodiment 1 above, this embodiment describes a specific application of the 3D image-guided positioning method based on artificial-intelligence technology and a DR system. The process is as follows:
First, a DR imaging system capable of moving through small angles between -15° and +15° is installed in the treatment room; the device moves about the treatment center point as its axis.
Second, the constructed artificial-intelligence neural network is used, which can reconstruct the patient's virtual 3D-CT image from N (1 to 8) DR images. Training and verification with labeled DR images and the corresponding 3D-CT images yield the weight parameters of the neural network model.
Third, the constructed deep-learning convolutional neural network is used, which can automatically and accurately segment a CT image to obtain its tissue organs and tumor target area. Training and verification with CT images manually segmented by experienced physicians yield the weight parameters of this network model.
Fourth, when the patient begins treatment or during treatment, the installed DR imaging system captures N (1 to 8) real-time DR images, which are fed into the artificial-intelligence neural network constructed and trained in the second step to output a virtual 3D-CT image.
Fifth, automatic tissue-organ segmentation is applied to the virtual 3D-CT image output in the fourth step and to the patient's treatment-plan 3D-CT image, giving the contour data of the tissue organs and tumor target area.
Sixth, the tissue organs and tumor target area output in the fifth step are three-dimensionally reconstructed, three-dimensional registration is computed on the reconstructions, and the patient's setup-offset parameters are output. Whether the radiotherapy conditions are met is then judged: if not, the patient is guided to reposition according to the output parameters and the process returns to the fourth step; if so, positioning is finished and treatment can begin.
Example 3
Embodiment 1 provides a 3D image-guided positioning method; correspondingly, this embodiment provides a 3D image-guided positioning system. The system can implement the method of embodiment 1 and may be realized in software, hardware, or a combination of the two. For example, it may comprise integrated or separate functional modules or units that perform the corresponding steps of the method of embodiment 1. Since this system is essentially similar to the method embodiment, its description is kept brief; for details, refer to the relevant parts of embodiment 1.
This embodiment provides a three-dimensional image-guided positioning system, comprising:
an organ reconstruction unit, configured to automatically segment the tissue organs and tumor target region from the patient's 3D-CT image set using an automatic segmentation algorithm, and to reconstruct the contour data of the tissue organs and tumor target region of the patient's treatment plan using a tissue-organ model reconstruction algorithm;
a virtual image generation unit, configured to generate a virtual 3D-CT image set of the patient from real-time DR images of the patient using an artificial-intelligence network algorithm;
a virtual organ reconstruction unit, configured to automatically segment the virtual tissue organs and tumor target region from the patient's virtual 3D-CT image set using an automatic segmentation algorithm, and to reconstruct the contour data of the patient's virtual tissue organs and tumor target region using a tissue-organ model reconstruction algorithm;
a positioning judgment unit, configured to register the contour data of the tissue organs and tumor target region of the patient's treatment plan with the contour data of the virtual tissue organs and tumor target region, to output a patient positioning offset parameter, and to judge whether the offset parameter meets the radiotherapy condition: if not, the patient is guided to reposition; if the condition is met, positioning is complete.
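The interaction of the four units can be sketched as follows. Each unit is injected as a callable so the sketch stays framework-agnostic; all names are illustrative assumptions rather than the patent's implementation:

```python
class ImageGuidedPositioningSystem:
    """Wires the four units of the positioning system together (sketch)."""

    def __init__(self, reconstruct_plan, generate_virtual_ct,
                 reconstruct_virtual, register):
        self.reconstruct_plan = reconstruct_plan        # organ reconstruction unit
        self.generate_virtual_ct = generate_virtual_ct  # virtual image generation unit
        self.reconstruct_virtual = reconstruct_virtual  # virtual organ reconstruction unit
        self.register = register                        # positioning judgment unit

    def check_position(self, plan_ct, dr_images):
        """One pass of the positioning loop: returns the offset parameter
        and whether the radiotherapy condition is met."""
        plan_contours = self.reconstruct_plan(plan_ct)
        virtual_ct = self.generate_virtual_ct(dr_images)
        virtual_contours = self.reconstruct_virtual(virtual_ct)
        offset, condition_met = self.register(plan_contours, virtual_contours)
        return offset, condition_met
```

In use, `check_position` would be called repeatedly until the judgment unit reports that the offset is within tolerance.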
Example 4
This embodiment provides a processing device corresponding to the 3D image-guided positioning method of embodiment 1. The processing device may be a client electronic device, such as a mobile phone, notebook computer, tablet computer, or desktop computer, that executes the method of embodiment 1.
The processing device comprises a processor, a memory, a communication interface, and a bus; the processor, memory, and communication interface are connected through the bus to communicate with one another. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The memory stores a computer program runnable on the processor; when running the program, the processor executes the 3D image-guided positioning method provided in embodiment 1.
In some implementations, the memory may be high-speed Random Access Memory (RAM) and may also include non-volatile memory, such as at least one magnetic disk storage device.
In other implementations, the processor may be any general-purpose processor, such as a Central Processing Unit (CPU) or a Digital Signal Processor (DSP), without limitation.
Example 5
The 3D image-guided positioning method of embodiment 1 may be embodied as a computer program product, which may include a computer-readable storage medium carrying computer-readable program instructions for executing the method of embodiment 1.
The computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. It may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, or semiconductor storage device, or any combination of the foregoing.
It should be noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code comprising one or more executable instructions for implementing the specified logical function(s).

Finally, it should be noted that although the present invention has been described in detail with reference to the above embodiments, those skilled in the art will understand that modifications and equivalents may be made to the embodiments without departing from the spirit and scope of the invention. The above description covers only specific embodiments of the present application; the scope of protection is not limited thereto, and any change or substitution readily conceived by a person skilled in the art within the technical scope of the present application shall fall within that scope. The protection scope of the present application is therefore defined by the claims.

Claims (9)

1. A method of three-dimensional image-guided positioning, comprising:
automatically segmenting the tissue organs and tumor target region from a patient's 3D-CT image set using an automatic segmentation algorithm, and reconstructing the contour data of the tissue organs and tumor target region of the patient's treatment plan using a tissue-organ model reconstruction algorithm;
generating a virtual 3D-CT image set of the patient from real-time DR images of the patient using an artificial-intelligence network algorithm, wherein the artificial-intelligence network algorithm is obtained through training and validation, comprising: using a DR imaging device to capture DR images of a patient while using a CT system to capture 3D-CT images of the same region of the same patient, so that the patient's DR images correspond one-to-one with the 3D-CT images, and establishing a data set of DR images and their corresponding 3D-CT images; using part of the established data set as a training set and the remainder as a validation set, constructing a neural network model for training and validation, and iterating the computation to obtain the weights and parameters of the network, thereby obtaining a trained artificial-intelligence network model; automatically segmenting the virtual tissue organs and tumor target region from the patient's virtual 3D-CT image set using an automatic segmentation algorithm, and reconstructing the contour data of the patient's virtual tissue organs and tumor target region using a tissue-organ model reconstruction algorithm;
registering the contour data of the tissue organs and tumor target region of the patient's treatment plan with the contour data of the virtual tissue organs and tumor target region, outputting a patient positioning offset parameter, and judging whether the offset parameter meets the radiotherapy condition: if not, guiding the patient to reposition; if the condition is met, completing positioning.
2. The method of three-dimensional image-guided positioning of claim 1, wherein the real-time DR images of the patient are acquired using a DR imaging device.
3. The method of claim 2, wherein the DR imaging device comprises a set of X-ray sources and corresponding imaging panels;
the X-ray source is mounted at the top of the treatment room and the imaging panel on the floor of the treatment room, each moving along its own small-angle track; or,
the X-ray source and the imaging panel are connected as one unit through a C-arm and move together through the small angle.
4. The method of claim 1, wherein the automatic segmentation algorithm employs a deep learning-based convolutional neural network model, which can automatically segment tissue organs and tumor target regions from the input CT images.
5. The method of claim 1, wherein the organ reconstruction algorithm reconstructs 3D models of all organs or of designated organs and renders different organs in different colors and display styles for easy viewing and discrimination.
6. The method of three-dimensional image-guided positioning according to claim 1, wherein registration employs a tissue-organ registration algorithm for manual and/or automatic 3D model registration.
7. A three-dimensional image-guided positioning system, comprising:
an organ reconstruction unit, configured to automatically segment the tissue organs and tumor target region from the patient's 3D-CT image set using an automatic segmentation algorithm, and to reconstruct the contour data of the tissue organs and tumor target region of the patient's treatment plan using a tissue-organ model reconstruction algorithm;
a virtual image generation unit, configured to generate a virtual 3D-CT image set of the patient from real-time DR images of the patient using an artificial-intelligence network algorithm, wherein the artificial-intelligence network algorithm is obtained through training and validation, comprising: using a DR imaging device to capture DR images of a patient while using a CT system to capture 3D-CT images of the same region of the same patient, so that the patient's DR images correspond one-to-one with the 3D-CT images, and establishing a data set of DR images and their corresponding 3D-CT images; using part of the established data set as a training set and the remainder as a validation set, constructing a neural network model for training and validation, and iterating the computation to obtain the weights and parameters of the network, thereby obtaining a trained artificial-intelligence network model;
a virtual organ reconstruction unit, configured to automatically segment the virtual tissue organs and tumor target region from the patient's virtual 3D-CT image set using an automatic segmentation algorithm, and to reconstruct the contour data of the patient's virtual tissue organs and tumor target region using a tissue-organ model reconstruction algorithm;
a positioning judgment unit, configured to register the contour data of the tissue organs and tumor target region of the patient's treatment plan with the contour data of the virtual tissue organs and tumor target region, to output a patient positioning offset parameter, and to judge whether the offset parameter meets the radiotherapy condition: if not, guiding the patient to reposition; if the condition is met, completing positioning.
8. A processing device comprising at least a processor and a memory, the memory having a computer program stored thereon, characterized in that the processor, when executing the computer program, implements the method of three-dimensional image-guided positioning according to any of claims 1 to 6.
9. A computer storage medium having computer readable instructions stored thereon which are executable by a processor to perform the method of three-dimensional image guided positioning of any of claims 1 to 6.
CN202110330501.6A 2021-03-25 2021-03-25 Method, system, processing equipment and storage medium for guiding positioning of three-dimensional image Active CN113041516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110330501.6A CN113041516B (en) 2021-03-25 2021-03-25 Method, system, processing equipment and storage medium for guiding positioning of three-dimensional image

Publications (2)

Publication Number Publication Date
CN113041516A CN113041516A (en) 2021-06-29
CN113041516B true CN113041516B (en) 2022-07-19

Family

ID=76516292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110330501.6A Active CN113041516B (en) 2021-03-25 2021-03-25 Method, system, processing equipment and storage medium for guiding positioning of three-dimensional image

Country Status (1)

Country Link
CN (1) CN113041516B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744320B (en) * 2021-09-10 2024-03-29 中国科学院近代物理研究所 Intelligent ion beam self-adaptive radiotherapy system, storage medium and equipment
CN114241074B (en) * 2021-12-20 2023-04-21 四川大学 CBCT image reconstruction method for deep learning and electronic noise simulation
CN114558251A (en) * 2022-01-27 2022-05-31 苏州雷泰医疗科技有限公司 Automatic positioning method and device based on deep learning and radiotherapy equipment
CN117745978B (en) * 2024-02-20 2024-04-30 四川大学华西医院 Simulation quality control method, equipment and medium based on human body three-dimensional reconstruction algorithm

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101574266A (en) * 2008-05-08 2009-11-11 西安一体医疗科技股份有限公司 Radiation therapy positioning method and radiation therapy positioning device
JP2011072457A (en) * 2009-09-30 2011-04-14 Hitachi Ltd Radiotherapy system
CN108460813A (en) * 2018-01-02 2018-08-28 沈阳东软医疗系统有限公司 A kind of Target delineations method and apparatus
CN111870825A (en) * 2020-07-31 2020-11-03 于金明 Radiotherapy precise field-by-field positioning method based on virtual intelligent medical platform
CN112316318A (en) * 2020-11-06 2021-02-05 中国科学院近代物理研究所 Positioning guide system and method for image-guided radiotherapy
CN112348857A (en) * 2020-11-06 2021-02-09 苏州雷泰医疗科技有限公司 Radiotherapy positioning offset calculation method and system based on deep learning
CN112435341A (en) * 2020-11-23 2021-03-02 推想医疗科技股份有限公司 Training method and device for three-dimensional reconstruction network, and three-dimensional reconstruction method and device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10262424B2 (en) * 2015-12-18 2019-04-16 The Johns Hopkins University Method for deformable 3D-2D registration using multiple locally rigid registrations
US11475991B2 (en) * 2018-09-28 2022-10-18 Varian Medical Systems International Ag Methods and systems for adaptive radiotherapy treatment planning using deep learning engines

Non-Patent Citations (1)

Title
Ordinary-camera image-guided radiotherapy technology; Shen Guosheng; Optics and Precision Engineering (《光学精密工程》); 2019-06-30 (No. 6); full text *

Similar Documents

Publication Publication Date Title
CN113041516B (en) Method, system, processing equipment and storage medium for guiding positioning of three-dimensional image
JP6761128B2 (en) Neural network for generating synthetic medical images
US20210046327A1 (en) Real-time patient motion monitoring using a magnetic resonance linear accelerator (mrlinac)
CN111724904A (en) Multi-tasking progressive network for patient modeling for medical scanning
CN107358607A (en) Tumour radiotherapy visual monitoring and visual servo intelligent control method
AU2017378629C1 (en) Online learning enhanced atlas-based auto-segmentation
CN110381840A (en) Use rotation 2Dx ray imager as imaging device to carry out target tracking during radiation disposition delivering
JPWO2019003474A1 (en) Radiotherapy tracking device, position detection device, and moving body tracking method
CN105228527B (en) Use the perspective evaluation of the tumour visibility for IGRT of the template generated from planning CT and profile
WO2006130771A2 (en) Four-dimensional volume of interest
WO2020087257A1 (en) Image guidance method and device, and medical equipment and computer readable storage medium
CN105167788B (en) Slur is as C arm systems
WO2022198553A1 (en) Three-dimensional image-guided positioning method and system, and storage medium
CN112154483A (en) Method and system for synthesizing real-time image by using optical body surface motion signal
CN111214764B (en) Radiotherapy positioning verification method and device based on virtual intelligent medical platform
JP6800462B2 (en) Patient positioning support device
US20220054862A1 (en) Medical image processing device, storage medium, medical device, and treatment system
Zhou et al. Feasibility study of deep learning‐based markerless real‐time lung tumor tracking with orthogonal X‐ray projection images
CN113041515A (en) Three-dimensional image guided moving organ positioning method, system and storage medium
Ferguson et al. Automated MV markerless tumor tracking for VMAT
WO2022198554A1 (en) Method and system for three-dimensional image guided positioning of organs in motion, and storage medium
EP4299114A1 (en) Methods, systems and computer readable mediums for determining a region-of-interest in surface-guided monitoring
Santhanam et al. A multi-GPU real-time dose simulation software framework for lung radiotherapy
CN115006737A (en) Radiotherapy body position monitoring system based on depth camera
AU2019453270B2 (en) Geometry-based real-time adaptive radiotherapy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant