CN116407780A - Target area position monitoring method, system and storage medium - Google Patents

Target area position monitoring method, system and storage medium

Info

Publication number
CN116407780A
CN116407780A
Authority
CN
China
Prior art keywords
imaging
image information
image
target
imaging device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111678024.9A
Other languages
Chinese (zh)
Inventor
蔡波
张志都
张涵祎
廖璨
孙步梁
章卫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN202111678024.9A priority Critical patent/CN116407780A/en
Publication of CN116407780A publication Critical patent/CN116407780A/en
Pending legal-status Critical Current


Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00Radiation therapy
    • A61N5/10X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • A61N5/1048Monitoring, verifying, controlling systems and methods
    • A61N5/1049Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
    • A61N2005/1054Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam using a portal imaging system

Abstract

The embodiments of this specification provide a method, a system, and a storage medium for monitoring the position of a target area. The method comprises: acquiring one or more pieces of first image information and one or more pieces of second image information of the target area from a plurality of different shooting directions; and determining position information of the target area based on the one or more pieces of first image information and the one or more pieces of second image information. The method can monitor or track the three-dimensional position of the target area accurately and in real time, which helps improve the accuracy of radiotherapy.

Description

Target area position monitoring method, system and storage medium
Technical Field
The present disclosure relates to the field of medical imaging, and in particular, to a method, a system, and a storage medium for monitoring a position of a target area.
Background
Radiation therapy eliminates or treats a patient's lesion (e.g., a tumor) with radiation. During radiation therapy, an image guidance system is generally required to image the patient's lesion area so that a corresponding radiotherapy plan can be made and the radiation beam can be delivered precisely to the lesion. Existing image guidance systems typically image with radiation of a single energy (e.g., at the MV level); the resulting image quality is limited, the lesion area cannot be monitored well, and radiotherapy accuracy suffers. In addition, voluntary or involuntary patient motion (e.g., respiratory motion, organ peristalsis) may cause the region planned for irradiation in the radiotherapy plan to deviate from the actual lesion area at any given moment, which also affects radiotherapy accuracy.
Therefore, it is desirable to provide a method for monitoring the position of a target area that enables real-time, accurate monitoring of the lesion area during radiotherapy, so that the radiation beam can be guided to irradiate the lesion precisely and radiotherapy precision can be improved.
Disclosure of Invention
One embodiment of the present disclosure provides a method for monitoring the position of a target area, including: acquiring one or more pieces of first image information and one or more pieces of second image information of the target area from a plurality of different shooting directions; and determining position information of the target area based on the one or more pieces of first image information and the one or more pieces of second image information.
In some embodiments, the first imaging energy corresponding to the one or more first image information differs from the second imaging energy corresponding to the one or more second image information. Determining the position information of the target region based on the one or more first image information and the one or more second image information includes: determining a target image of the target region based on the one or more first image information and the one or more second image information; and determining the position information of the target region based on the target image.
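The patent does not specify how the target image is computed from the two energy images, but a common approach for this kind of dual-energy imaging is weighted log-subtraction. Below is a minimal sketch under that assumption; the function name, the two-material attenuation model, and the weighting scheme are illustrative, not taken from this disclosure:

```python
import numpy as np

def dual_energy_decompose(img_low, img_high, mu_soft, mu_bone):
    """Weighted log-subtraction of a low-energy and a high-energy image.

    mu_soft, mu_bone: (mu_low, mu_high) attenuation coefficients of soft
    tissue and bone at the two imaging energies (assumed known).
    Returns a soft-tissue ("deboned") image and a bone image.
    """
    lo = -np.log(np.clip(img_low, 1e-9, None))   # low-energy line integrals
    hi = -np.log(np.clip(img_high, 1e-9, None))  # high-energy line integrals
    w_soft = mu_bone[1] / mu_bone[0]  # this weight cancels the bone term
    w_bone = mu_soft[1] / mu_soft[0]  # this weight cancels the soft-tissue term
    soft_img = hi - w_soft * lo       # depends only on soft-tissue thickness
    bone_img = hi - w_bone * lo       # depends only on bone thickness
    return soft_img, bone_img
```

Under this simple two-material model, the bone contribution cancels exactly in `soft_img`: two rays crossing the same soft-tissue thickness but different bone thicknesses produce the same soft-tissue value.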
In some embodiments, the acquiring of the one or more first image information and the one or more second image information of the target area based on a plurality of different shooting directions is performed by a plurality of imaging devices; the plurality of imaging devices includes at least a first imaging device and a second imaging device; a line connecting the first imaging device and the target area and a line connecting the second imaging device and the target area form a preset included angle; wherein each of the plurality of imaging devices includes a bulb tube (i.e., an X-ray tube) and a detector module; the bulb tube emits an imaging beam with a preset imaging energy, and the detector module receives the imaging beam to generate image information.
In some embodiments, the acquiring of the one or more first image information and the one or more second image information of the target area based on a plurality of different shooting directions includes: controlling the bulb tube of the first imaging device and the bulb tube of the second imaging device to emit imaging beams with a first imaging energy toward the target area from a first shooting direction and a second shooting direction, respectively, the imaging beams being received by the detector modules of the first and second imaging devices to acquire first image information corresponding to the first shooting direction and first image information corresponding to the second shooting direction; and controlling the bulb tubes of the first and second imaging devices to emit imaging beams with a second imaging energy toward the target area from the first and second shooting directions, the imaging beams being received by the first and second imaging devices to acquire second image information corresponding to the first shooting direction and second image information corresponding to the second shooting direction.
In some embodiments, the detector module of the first imaging device has at least a first energy receiving layer and a second energy receiving layer; the detector module of the second imaging device is provided with at least a third energy receiving layer and a fourth energy receiving layer; the acquiring the one or more first image information and the one or more second image information of the target area based on the plurality of different shooting directions includes: controlling the first imaging device to emit an imaging beam with third imaging energy from a first shooting direction to the target area, wherein the imaging beam with the third imaging energy is received by the first energy receiving layer and the second energy receiving layer so as to acquire first image information and second image information of the target area corresponding to the first shooting direction respectively; controlling the second imaging device to emit an imaging beam with third imaging energy from a second shooting direction to the target area, wherein the imaging beam with the third imaging energy is received by the third energy receiving layer and the fourth energy receiving layer so as to acquire first image information and second image information of the target area corresponding to the second shooting direction respectively; wherein the third imaging energy comprises a first imaging energy and a second imaging energy.
In some embodiments, the target image comprises a first target image and a second target image; the determining a target image of the target region based on the one or more first image information and the one or more second image information includes: determining a first target image corresponding to the first shooting direction based on first image information and second image information corresponding to the first shooting direction; and determining a second target image corresponding to the second shooting direction based on the first image information and the second image information corresponding to the second shooting direction.
In some embodiments, the first imaging device and the second imaging device are rotatable relative to the target region; the acquiring the one or more first image information and the one or more second image information of the target area based on the plurality of different shooting directions includes: controlling the first imaging device and the second imaging device to respectively acquire a plurality of first image information corresponding to a plurality of first shooting directions and a plurality of first image information corresponding to a plurality of second shooting directions of the target area from the plurality of first shooting directions and the plurality of second shooting directions respectively in the rotating process; and controlling the first imaging device and the second imaging device to respectively acquire a plurality of second image information corresponding to the plurality of first shooting directions and a plurality of second image information corresponding to the plurality of second shooting directions of the target area from the plurality of first shooting directions and the plurality of second shooting directions respectively in the rotating process.
In some embodiments, the acquiring one or more first image information and one or more second image information of the target area based on a plurality of different shooting directions includes: controlling the bulb tubes of the first imaging device and the second imaging device to emit imaging beams with first imaging energy to the target area from the first shooting directions and the second shooting directions respectively in the rotating process, wherein the imaging beams with the first imaging energy are received by the detector modules of the first imaging device and the second imaging device so as to acquire a plurality of first image information corresponding to the first shooting directions and a plurality of first image information corresponding to the second shooting directions of the target area respectively; and controlling the bulb tubes of the first imaging device and the second imaging device to emit imaging beams with second imaging energy to the target area from the first shooting directions and the second shooting directions respectively in the rotating process, wherein the imaging beams with the second imaging energy are received by the detector modules of the first imaging device and the second imaging device so as to acquire a plurality of second image information corresponding to the first shooting directions and a plurality of second image information corresponding to the second shooting directions of the target area respectively.
In some embodiments, the flat panel detector module of the first imaging device has at least a first energy receiving layer and a second energy receiving layer; the detector module of the second imaging device is provided with at least a third energy receiving layer and a fourth energy receiving layer; the acquiring the one or more first image information and the one or more second image information of the target area based on the plurality of different shooting directions includes: controlling a bulb tube of the first imaging device to respectively emit imaging beams with third imaging energy from the plurality of first shooting directions to the target area in a rotating process, wherein the imaging beams with the third imaging energy are received by the first energy receiving layer and the second energy receiving layer so as to respectively acquire a plurality of first image information and a plurality of second image information of the target area corresponding to the plurality of first shooting directions; controlling a bulb tube of the second imaging device to respectively emit imaging beams with third imaging energy from a plurality of second shooting directions to the target area in a rotating process, wherein the imaging beams with the third imaging energy are received by the third energy receiving layer and the fourth energy receiving layer so as to respectively acquire a plurality of first image information and a plurality of second image information of the target area corresponding to the plurality of second shooting directions; wherein the third imaging energy comprises a first imaging energy and a second imaging energy.
In some embodiments, the first imaging device and the second imaging device are rotated by a first angle relative to the target area; the target image comprises a first target image and a second target image; the determining a target image of the target region based on the one or more first image information and the one or more second image information includes: determining a plurality of first target sub-images corresponding to the plurality of first photographing directions based on a plurality of first image information and a plurality of second image information corresponding to the plurality of first photographing directions; and determining a plurality of second target sub-images corresponding to the plurality of second photographing directions based on the plurality of first image information and the plurality of second image information corresponding to the plurality of second photographing directions; the first target image is determined based on the plurality of first target sub-images corresponding to the plurality of first photographing directions, and the second target image is determined based on the plurality of second target sub-images corresponding to the plurality of second photographing directions.
In some embodiments, the first imaging device and the second imaging device are rotated by a second angle relative to the target area; the determining a target image of the target region based on the one or more first image information and the one or more second image information includes: determining a plurality of first target sub-images corresponding to the plurality of first photographing directions based on a plurality of first image information and a plurality of second image information corresponding to the plurality of first photographing directions, and determining a plurality of second target sub-images corresponding to the plurality of second photographing directions based on a plurality of first image information and a plurality of second image information corresponding to the plurality of second photographing directions; the target image is determined based on the plurality of first target sub-images corresponding to the plurality of first photographing directions and the plurality of second target sub-images corresponding to the plurality of second photographing directions.
In some embodiments, the first imaging device and the second imaging device are rotatable relative to the target region; the acquiring the one or more first image information and the one or more second image information of the target area based on the plurality of different shooting directions includes: controlling the first imaging device to respectively emit imaging beams with first imaging energy from the plurality of first shooting directions to the target area in the rotating process so as to acquire a plurality of first image information of the target area corresponding to the plurality of first shooting directions; and controlling the second imaging device to respectively emit imaging beams with second imaging energy from the plurality of second shooting directions to the target area in the rotating process so as to acquire a plurality of second image information of the target area corresponding to the plurality of second shooting directions.
In some embodiments, the determining the target image of the target region based on the one or more first image information and the one or more second image information comprises: determining a first reconstructed image based on a plurality of first image information corresponding to the plurality of first photographing directions; and determining a second reconstructed image based on a plurality of second image information corresponding to the plurality of second photographing directions; the target image is determined based on the first reconstructed image and the second reconstructed image.
In some embodiments, the target image includes a deboned (soft-tissue) image and a bone image. Determining the position information of the target area based on the target image further includes: acquiring a treatment plan image; registering the bone image with the treatment plan image; and verifying the position information of the target region based on the deboned image and the treatment plan image.
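The registration step is not detailed in the text. As one hedged illustration, a 2D bone image could be aligned to a treatment plan image by a brute-force translation search; real image-guidance systems typically use more sophisticated rigid or deformable registration, and the function name and error metric here are assumptions:

```python
import numpy as np

def best_shift(bone_img, plan_img, max_shift=5):
    """Brute-force 2D translation registration: find the (dy, dx) shift of
    bone_img that best matches plan_img by sum of squared differences."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(bone_img, dy, axis=0), dx, axis=1)
            err = np.sum((shifted - plan_img) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

The recovered shift gives the offset between the current bone anatomy and the planned position, which could then feed the verification of the target region's position.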
In some embodiments, the deboned image and the bone image are two-dimensional images or three-dimensional images.
In some embodiments, the acquiring one or more first image information and one or more second image information of the target area based on a plurality of different shooting directions is performed by one imaging device having a plurality of beam-out focal points, each of the beam-out focal points corresponding to one of the plurality of different shooting directions.
In some embodiments, the imaging device has a first beam out focal point and a second beam out focal point; the acquiring the one or more first image information and the one or more second image information of the target area based on the plurality of different shooting directions includes: controlling a bulb tube of the imaging device to emit an imaging beam from the first beam outlet focus, wherein the imaging beam is received by a detector module of the imaging device so as to acquire first image information; and controlling a bulb tube of the imaging device to emit an imaging beam from the second beam outlet focus, wherein the imaging beam is received by a detector module of the imaging device so as to acquire second image information.
In some embodiments, determining the position information of the target region based on the first image information and the second image information includes: determining a first coordinate and a second coordinate of the target region in a first coordinate system based on the first image information and the second image information, respectively; determining a first projection coordinate and a second projection coordinate in a second coordinate system corresponding to the first coordinate and the second coordinate, respectively; determining a third coordinate of the target region in the second coordinate system based on the first projection coordinate and the second projection coordinate; and determining a fourth coordinate of the target region in a third coordinate system based on the third coordinate.
In some embodiments, the determining the first projection coordinate and the second projection coordinate within the second coordinate system corresponding to the first coordinate and the second coordinate, respectively, includes: and determining a first projection coordinate and a second projection coordinate of the first coordinate and the second coordinate in a second coordinate system respectively based on the geometric relation of the first coordinate system and the second coordinate system.
In some embodiments, determining a third coordinate of the target region within a second coordinate system based on the first projection coordinate and the second projection coordinate comprises: determining a first linear equation based on first beam-out focal point projection coordinates of the first beam-out focal point in the second coordinate system and the first projection coordinates; determining a second linear equation based on second beam-out focal point projection coordinates of the second beam-out focal point in the second coordinate system and the second projection coordinates; the third coordinate of the target region within the second coordinate system is determined based on the first linear equation and the second linear equation.
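The two linear equations describe rays from the two beam-out focal points through the corresponding projection points; the target's third coordinate lies where the rays (approximately) intersect. A minimal sketch of that intersection using the standard closest-point-between-two-lines solution (the function name and the least-squares midpoint choice are assumptions, not from the patent):

```python
import numpy as np

def closest_point_of_two_rays(s1, p1, s2, p2):
    """Point closest to both line(s1, p1) and line(s2, p2) in 3D.

    s1, s2: beam-out focal point coordinates; p1, p2: projection
    coordinates on the detector. Returns the midpoint of the shortest
    segment joining the two lines (the exact intersection if they meet).
    """
    d1 = (p1 - s1) / np.linalg.norm(p1 - s1)   # direction of ray 1
    d2 = (p2 - s2) / np.linalg.norm(p2 - s2)   # direction of ray 2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w0 = s1 - s2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                      # zero only for parallel rays
    t1 = (b * e - c * d) / denom               # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom               # parameter of closest point on ray 2
    return (s1 + t1 * d1 + s2 + t2 * d2) / 2.0
```

Because measured projections carry noise, the two rays rarely intersect exactly; the midpoint of the closest-approach segment is a common least-squares estimate of the target position.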
In some embodiments, the determining fourth coordinates of the target region within a third coordinate system based on the third coordinates comprises: and converting the third coordinate into the fourth coordinate based on the mapping relation between the second coordinate system and the third coordinate system.
In some embodiments, the mapping of the second coordinate system and the third coordinate system is related to an angle of rotation of the imaging device relative to the target area.
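For example, if the second coordinate system rotates with the imaging device about the couch axis, the mapping into the fixed third (room) coordinate system could simply be a rotation by the gantry angle. A sketch under that assumed convention (the rotation axis and function name are illustrative, not specified by the patent):

```python
import numpy as np

def detector_to_room(coord, gantry_angle_deg):
    """Rotate a point from the rotating (imaging-device) frame into the
    fixed room frame, about the z (couch) axis by the gantry angle."""
    t = np.deg2rad(gantry_angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                    [np.sin(t),  np.cos(t), 0.0],
                    [0.0,        0.0,       1.0]])
    return rot @ np.asarray(coord, dtype=float)
```

In practice the mapping would be calibrated for the specific machine geometry (isocenter offset, detector tilt), but the dependence on the rotation angle takes this general form.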
One of the embodiments of the present specification provides a computer-readable storage medium storing computer instructions; when a computer reads the instructions in the storage medium, the computer performs the method for monitoring the position of a target area described above.
One of the embodiments of the present disclosure provides a system for monitoring the position of a target area, wherein the system includes at least one processor, and the at least one processor is configured to perform the method for monitoring the position of the target area.
Drawings
The present application will be further illustrated by way of example embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in the drawings, like numerals represent like structures.
FIG. 1 is a schematic diagram of a position monitoring system for a target area according to some embodiments of the present disclosure;
FIG. 2 is a block diagram of a position monitoring system according to some embodiments of the present description;
FIG. 3A is a schematic diagram of an imaging principle of the position monitoring system shown in accordance with some embodiments of the present description;
FIG. 3B is a schematic diagram of a first or second detector module according to some embodiments of the present disclosure;
FIG. 4 is a schematic diagram of an imaging principle of the position monitoring system shown in accordance with some embodiments of the present description;
FIG. 5 is an exemplary flow chart of a location monitoring method shown in accordance with some embodiments of the present description;
FIG. 6 is an exemplary flow chart of a method of determining location information of a target area shown in accordance with some embodiments of the present disclosure;
FIG. 7 is an exemplary flowchart of a method of determining a target image according to some embodiments of the present description;
FIG. 8 is an exemplary flow chart of a method of determining a target image according to some embodiments of the present description;
FIG. 9 is an exemplary flowchart of a method of determining a target image according to some embodiments of the present description;
FIG. 10 is an exemplary flow chart of a method of verifying location information according to some embodiments of the present description;
FIG. 11 is a schematic diagram of an imaging principle of an imaging apparatus according to some embodiments of the present description;
fig. 12 is an exemplary flowchart of a method of determining location information of a target area according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and it is obvious to those skilled in the art that the present application may be applied to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It will be appreciated that "system," "apparatus," "sub-module," and/or "module" as used herein is one method for distinguishing between different components, elements, parts, portions, or assemblies at different levels. However, if other words can achieve the same purpose, the words can be replaced by other expressions.
As used in this application and in the claims, the terms "a," "an," and/or "the" are not specific to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that explicitly identified steps and elements are present; they do not constitute an exclusive list, and a method or apparatus may include other steps or elements.
Flowcharts are used in this application to describe the operations performed by systems according to embodiments of the present application. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
Image Guided Radiation Therapy (IGRT) is an advanced radiation therapy technique. During treatment, the patient's focal region (e.g., a tumor) and normal organs can be monitored through image guidance, enabling accurate daily treatment positioning and improving radiotherapy precision. In actual clinical radiotherapy, image guidance is performed mainly with an Electronic Portal Imaging Device (EPID) or a kV-level image guidance system. An EPID images the focal region with MV-level rays; the acquired MV-level images are not clear enough, and the contrast and resolution of the soft tissue in the focal region are low. A kV-level image guidance system images the focal region with kV-level rays; in the acquired kV-level images, the focal region or a marker on it may be blocked by high-density bone tissue. Neither approach monitors the focal region well enough to achieve precise radiotherapy. In addition, during radiation therapy, patient motion (e.g., respiratory motion or organ motion) affects the image guidance results (e.g., causing the area the radiation beam is intended to irradiate to deviate from the actual focal region), which degrades radiotherapy accuracy.
The embodiments of this specification provide a target area monitoring method, a system, and a storage medium. By performing dual-energy or multi-energy imaging of the focal region from multiple directions, the method, system, and storage medium can acquire two or more energy images of the focal region; processing these images yields clear images of different tissues (e.g., a bone image and a deboned image), so that the position of the focal region can be determined accurately and in real time. This enables better monitoring of the focal region, accurate positioning, and accurate delivery of the radiation dose, improving radiotherapy precision. In addition, the imaging device (e.g., its bulb tube) can emit rays from different focal spot positions to image the focal region, acquire a plurality of images corresponding to the different focal spot positions, and determine the position of the focal region accurately and in real time from those images, realizing real-time three-dimensional tracking of the focal region during radiotherapy so that the radiation beam can be guided to irradiate more precisely.
The method, system and storage medium for monitoring a target area according to the embodiments of the present disclosure will be described in detail with reference to the accompanying drawings and embodiments.
FIG. 1 is a schematic diagram of a position monitoring system for a target area according to some embodiments of the present description. As shown in fig. 1, the position monitoring system 100 may include a radiation therapy device 110, a network 120, one or more terminals 130, a processing device 140, a storage device 150, and one or more imaging devices 160.
The radiation therapy device 110 can deliver radiation, such as X-rays, alpha rays, beta rays, gamma rays, or heavy ions, to a target region (i.e., a focal region of a patient in need of treatment, e.g., a tumor). In some embodiments, the radiation therapy device 110 can include a gantry 111, a treatment couch 112, and a first radiation source 113. In some embodiments, the first radiation source 113 may be an accelerator (e.g., a linear accelerator, a cyclotron, etc.) for generating and emitting a therapeutic beam (e.g., an X-ray beam) onto the target region. The first radiation source 113 is disposed on the gantry 111, and the treatment couch 112 supports the patient before or during treatment so that the target region is exposed to the irradiation field of the first radiation source 113. The first radiation source 113 can rotate with the gantry 111 relative to the couch 112 to irradiate the target region over 360°.
Network 120 may include any network capable of facilitating the exchange of information and/or data in the location monitoring system 100. In some embodiments, one or more components of the location monitoring system 100 (e.g., the radiation therapy device 110, the terminal 130, the processing device 140, the storage device 150, or the imaging device 160) may exchange information and/or data with one or more other components of the system over the network 120. For example, the processing device 140 may acquire data corresponding to radiation signals (e.g., emitted treatment beams) of the radiation therapy device via the network 120. As another example, the processing device 140 may obtain user instructions from the terminal 130 via the network 120.
The terminal 130 may include a mobile device 131, a tablet 132, a laptop 133, and the like, or a combination thereof. In some embodiments, the mobile device 131 may include one or a combination of a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, and the like. In some embodiments, the terminal 130 may be part of the processing device 140.
The processing device 140 may process data and/or information acquired from the radiation therapy device 110, the terminal 130, the storage device 150, and/or the imaging device 160. For example, the processing device 140 may process image information of the target region acquired from the imaging device 160 and determine location information of the target region based on the image information. As another example, the processing device 140 may obtain a treatment plan from the terminal 130 and/or the storage device 150 via the network 120, which may correspond to a particular arrangement of the radiation treatment device 110 and/or the imaging device 160 or components thereof. The processing device 140 may further adjust the treatment plan based on data and/or information received from the radiation treatment device 110 and/or the imaging device 160, and may control the radiation treatment device 110 and/or the imaging device 160 based on the adjusted treatment plan. In some embodiments, the processing device 140 may be a single server or a group of servers. The server group may be centralized or distributed. In some embodiments, the processing device 140 may be local or remote. For example, the processing device 140 may access information and/or data stored in the radiation therapy device 110, the terminal 130, the storage device 150, and/or the imaging device 160 via the network 120. As another example, the processing device 140 may be directly connected to the radiation therapy device 110, the terminal 130, the storage device 150, and/or the imaging device 160 to access information and/or data stored therein. As another example, the processing device 140 may be integrated into the radiation therapy device 110. In some embodiments, the processing device 140 may be implemented on a cloud platform.
The storage device 150 may store data and/or instructions. In some embodiments, the storage device 150 may store data acquired from the terminal 130, the processing device 140, and/or the imaging device 160. For example, the storage device 150 may store a treatment plan and/or an adjusted treatment plan. In some embodiments, the storage device 150 may store data and/or instructions that the processing device 140 may execute or use to perform the exemplary methods described in this specification. In some embodiments, the storage device 150 may be implemented on a cloud platform.
In some embodiments, the storage device 150 may be connected to the network 120 to communicate with one or more components of the location monitoring system 100 (e.g., the processing device 140, the terminal 130, the imaging device 160, etc.). One or more components in the location monitoring system 100 may access data or instructions stored in the storage device 150 via the network 120. In some embodiments, the storage device 150 may be directly connected to or in communication with one or more components of the location monitoring system 100 (e.g., the processing device 140, the terminal 130, etc.). In some embodiments, the storage device 150 may be part of the processing device 140.
The imaging device 160 may be used to acquire image information of a target region (or an imaging region coincident with the target region), which may be used to determine positional information of the target region and/or to track movement of the target region during a radiation treatment operation performed by the radiation therapy device 110. In some embodiments, the location of the target region may change over time due to various movements including, for example, cardiac motion (and its effects on other organs), respiratory motion (of the lungs and/or diaphragm, and its effects on other organs), blood flow and motion caused by vascular pulsation, muscle contraction and relaxation, secretory activity of the pancreas, and the like, or combinations thereof. The position monitoring system 100 may generate an image of the target region based on the imaging information acquired by the imaging device 160 before, during, and/or after the radiation therapy session, and monitor the position of the target region based on this image.
In some embodiments, the imaging device 160 may include a second radiation source 161 and a detector module 162. The second radiation source 161 may emit an imaging beam (e.g., an X-ray beam, alpha-ray beam, beta-ray beam, etc.) having a preset imaging energy onto the target region; the imaging beam passes through (or is partially absorbed by) the target region and is attenuated in doing so. The detector module 162 may detect and/or receive radiation associated with at least a portion of the attenuated or scattered imaging beam to generate corresponding image information (e.g., projection data), based on which the position monitoring system 100 (e.g., the processing device 140) may generate or reconstruct an image (e.g., a radiographic image or a CT image) of the target region. In some embodiments, the second radiation source 161 may be a bulb (e.g., an X-ray bulb, i.e., an X-ray tube). Further, the position monitoring system 100 (e.g., the processing device 140) may determine location information of the target region based on the image of the target region.
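The relationship between the attenuated imaging beam and the resulting projection data can be illustrated with a minimal sketch of the Beer-Lambert law. This is a generic illustration, not the patented method; the attenuation map, step size, and intensities below are invented for demonstration.

```python
import numpy as np

# Minimal illustration of how projection data arise: the detected
# intensity decays exponentially with the line integral of the
# attenuation coefficient along each ray through the target region.
def project(mu_map, i0=1.0, step=1.0):
    """Parallel-ray line integrals along axis 0 of a 2-D attenuation map."""
    line_integrals = mu_map.sum(axis=0) * step  # integral of mu along each ray
    return i0 * np.exp(-line_integrals)         # Beer-Lambert attenuation

# Toy 4x4 attenuation map with a denser "lesion" in the middle columns.
mu = np.zeros((4, 4))
mu[1:3, 1:3] = 0.5
intensity = project(mu)
# Rays through the dense columns emerge with lower intensity than rays
# through empty columns, which is what the detector module records.
```

Reconstruction algorithms (e.g., for the CT image mentioned above) invert many such projections taken from different directions back into the attenuation map.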
In some embodiments, the imaging device 160 may be fixed to the radiation therapy device 110 or mounted to other devices or equipment surrounding the radiation therapy device 110. For example, the imaging device 160 may be fixed to the gantry 111, with the second radiation source 161 and the detector module 162 disposed opposite each other on the gantry 111 and able to rotate with the gantry 111 relative to the couch 112, ensuring that the imaging beam emitted by the second radiation source 161 is detected or received by the corresponding detector module 162 after passing through the target region. In some embodiments, the second radiation source 161 can also serve as the first radiation source 113, i.e., the second radiation source 161 can produce both the treatment beam and the imaging beam. In some embodiments, the imaging device 160 may be part of the radiation therapy device 110, i.e., the radiation therapy device 110 can have both therapeutic and imaging capabilities; in this case the treatment couch 112 may also be referred to as a scanning couch, which may be configured to carry and/or transport a subject (e.g., a patient) to the gantry 111 for imaging and/or radiation therapy.
In some embodiments, the position monitoring system 100 may acquire image information of the target area from a plurality of different shooting directions through the imaging device 160. The shooting direction refers to the direction in which the imaging beam is directed to the target region. In some embodiments, the shooting direction may be related to the direction and/or position of the imaging device 160 relative to the target area, e.g., acquiring image information of the target area from a plurality of different shooting directions may refer to one or more imaging devices respectively acquiring image information of the target area from different directions relative to the target area or at different positions relative to the target area. In some embodiments, the shooting direction may also be related to the beam-out focal position of the bulb of the imaging device 160, for example, acquiring image information of the target area from a plurality of different shooting directions may refer to the imaging beam generated by the bulb of the imaging device 160 being emitted from a plurality of different beam-out focal points toward the target area to acquire a plurality of different image information of the target area.
FIG. 2 is a block diagram of a position monitoring system according to some embodiments of the present description. As shown in fig. 2, the location monitoring system 200 may include an acquisition module 210 and a determination module 220.
The acquisition module 210 may acquire one or more first image information and one or more second image information of the target area based on a plurality of different shooting directions. In some embodiments, the shooting direction may refer to the position or orientation of the imaging device relative to the target area, and the plurality of different shooting directions may refer to one or more imaging devices emitting imaging beams toward the target area from a plurality of positions or orientations relative to the target area. The target region may refer to a region of a subject (e.g., a patient) to be treated or imaged (scanned), such as a focal region (e.g., a tumor). In some embodiments, the shooting direction may instead refer to the beam-out focal position of the bulb of the imaging device, in which case the plurality of different shooting directions refer to the bulb emitting imaging beams toward the target area from a plurality of different beam-out focal positions.
In some embodiments, the first imaging energy corresponding to the one or more first image information differs from the second imaging energy corresponding to the one or more second image information. In some embodiments, the acquisition module 210 acquires, through a plurality of imaging devices (e.g., the imaging device 160), the one or more first image information and the one or more second image information of the target area based on the plurality of different shooting directions, wherein each of the plurality of imaging devices includes a bulb and a detector module: the bulb emits an imaging beam having a preset imaging energy, and the detector module receives the imaging beam to generate image information. In some embodiments, the plurality of imaging devices may include at least a first imaging device and a second imaging device, and the line between the first imaging device and the target area and the line between the second imaging device and the target area form a preset included angle.
In some embodiments, the acquiring module 210 may control the bulbs of the first imaging device and the second imaging device to emit imaging beams having the first imaging energy from the first shooting direction and the second shooting direction, respectively, toward the target area; these beams are received by the first imaging device and the second imaging device to acquire the first image information of the target area corresponding to the first shooting direction and to the second shooting direction, respectively. The acquiring module 210 may then control the two bulbs to emit imaging beams having the second imaging energy from the first shooting direction and the second shooting direction, respectively, toward the target area; these beams are received by the first imaging device and the second imaging device to acquire the second image information of the target area corresponding to the first shooting direction and to the second shooting direction, respectively.
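The energy-switching sequence above can be sketched as follows. The device class, method names, and kV values are hypothetical stand-ins for illustration, not an actual control API of the described system.

```python
# Hypothetical sketch of the dual-energy, dual-direction acquisition:
# each imaging device is exposed first at the low (first) imaging energy
# and then at the high (second) imaging energy, yielding one first-image
# and one second-image per shooting direction.
class ImagingDevice:
    def __init__(self, direction):
        self.direction = direction  # shooting direction of this device

    def expose(self, energy_kv):
        # Stand-in for: bulb emits a beam at energy_kv, detector receives it.
        return {"direction": self.direction, "energy": energy_kv}

def acquire_dual_energy(devices, low_kv=80, high_kv=140):
    first_images = [d.expose(low_kv) for d in devices]    # first imaging energy
    second_images = [d.expose(high_kv) for d in devices]  # second imaging energy
    return first_images, second_images

devices = [ImagingDevice("first"), ImagingDevice("second")]
first, second = acquire_dual_energy(devices)
```

Note that the two energies are acquired sequentially here, which is exactly the timing gap the multi-layer detector variant described next is designed to eliminate.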
In some embodiments, the detector module of the first imaging device has at least a first energy receiving layer and a second energy receiving layer; the detector module of the second imaging device has at least a third energy receiving layer and a fourth energy receiving layer.
In some embodiments, the acquiring module 210 may control the bulb of the first imaging device to emit an imaging beam having a third imaging energy from the first shooting direction toward the target region; this beam is received by the first energy receiving layer and the second energy receiving layer to acquire the first image information and the second image information of the target region corresponding to the first shooting direction, respectively. The acquiring module 210 may likewise control the bulb of the second imaging device to emit an imaging beam having the third imaging energy from the second shooting direction toward the target region; this beam is received by the third energy receiving layer and the fourth energy receiving layer to acquire the first image information and the second image information of the target region corresponding to the second shooting direction, respectively. Here, the third imaging energy comprises the first imaging energy and the second imaging energy.
In some embodiments, the first imaging device and the second imaging device are rotatable relative to the target region. The first imaging device has a plurality of first shooting directions in the rotating process, and the second imaging device has a plurality of second shooting directions in the rotating process.
In some embodiments, the obtaining module 210 may control the first imaging device and the second imaging device, during rotation, to obtain a plurality of first image information of the target area corresponding to the plurality of first shooting directions and a plurality of first image information corresponding to the plurality of second shooting directions, respectively. It may likewise control the two imaging devices, during rotation, to obtain a plurality of second image information of the target area corresponding to the plurality of first shooting directions and a plurality of second image information corresponding to the plurality of second shooting directions, respectively.
In some embodiments, the acquiring module 210 may control the bulbs of the first imaging device and the second imaging device to emit, during rotation, imaging beams having the first imaging energy from the plurality of first shooting directions and the plurality of second shooting directions, respectively, toward the target area; these beams are received by the detector modules of the first and second imaging devices to acquire a plurality of first image information of the target area corresponding to the plurality of first shooting directions and a plurality of first image information corresponding to the plurality of second shooting directions, respectively. The acquiring module 210 may likewise control the two bulbs to emit, during rotation, imaging beams having the second imaging energy from the plurality of first shooting directions and the plurality of second shooting directions, respectively, toward the target area; these beams are received by the detector modules to acquire a plurality of second image information of the target area corresponding to the plurality of first shooting directions and a plurality of second image information corresponding to the plurality of second shooting directions, respectively.
In some embodiments, the acquiring module 210 may control the bulb of the first imaging device to emit, during rotation, imaging beams having the third imaging energy from the plurality of first shooting directions toward the target area; these beams are received by the first energy receiving layer and the second energy receiving layer to acquire the plurality of first image information and the plurality of second image information of the target area corresponding to the plurality of first shooting directions, respectively. It may likewise control the bulb of the second imaging device to emit, during rotation, imaging beams having the third imaging energy from the plurality of second shooting directions toward the target area; these beams are received by the third energy receiving layer and the fourth energy receiving layer to acquire the plurality of first image information and the plurality of second image information of the target area corresponding to the plurality of second shooting directions, respectively. Here, the third imaging energy comprises the first imaging energy and the second imaging energy.
In some embodiments, the acquiring module 210 may control the first imaging device to emit imaging beams with first imaging energies from the plurality of first shooting directions to the target area during rotation to acquire a plurality of first image information of the target area corresponding to the plurality of first shooting directions, and control the second imaging device to emit imaging beams with second imaging energies from the plurality of second shooting directions to the target area during rotation to acquire a plurality of second image information of the target area corresponding to the plurality of second shooting directions.
In some embodiments, the acquisition of the one or more first image information and the one or more second image information of the target area based on a plurality of different shooting directions may be performed by an imaging device having a plurality of beam-out focal points, each beam-out focal point corresponding to one of the plurality of different shooting directions. The beam-out focal points corresponding to the one or more first image information and the one or more second image information are different.
In some embodiments, the acquisition module 210 may control the bulb of the imaging device to emit an imaging beam from a first beam exit focus, the imaging beam being received by the detector module of the imaging device to acquire the first image information, and control the bulb of the imaging device to emit an imaging beam from a second beam exit focus, the imaging beam being received by the detector module of the imaging device to acquire the second image information.
The determining module 220 may be configured to determine the location information of the target area based on the one or more first image information and the one or more second image information.
In some embodiments, the determination module 220 may include a target image determination sub-module and/or a location information determination sub-module.
In some embodiments, the target image may include a first target image and a second target image. In some embodiments, the target image determination sub-module may determine the first target image corresponding to the first photographing direction based on the first image information and the second image information corresponding to the first photographing direction, and determine the second target image corresponding to the second photographing direction based on the first image information and the second image information corresponding to the second photographing direction.
In some embodiments, the first and second imaging devices are rotated by a first angle relative to the target area, and the target image may include first and second target images. In some embodiments, the target image determining sub-module may determine a plurality of first target images corresponding to the plurality of first photographing directions based on the plurality of first image information corresponding to the plurality of first photographing directions and the plurality of second image information corresponding to the plurality of first photographing directions, and determine a plurality of second target images corresponding to the plurality of second photographing directions based on the plurality of first image information corresponding to the plurality of second photographing directions and the plurality of second image information corresponding to the plurality of second photographing directions. In some embodiments, the target image determination sub-module may determine the first reconstructed image and the second reconstructed image based on the plurality of first image information and the plurality of second image information corresponding to the plurality of first photographing directions, respectively, and determine the third reconstructed image and the fourth reconstructed image based on the plurality of first image information and the plurality of second image information corresponding to the plurality of second photographing directions, respectively. Further, the target image determination sub-module may determine the first target image based on the first reconstructed image and the second reconstructed image, and determine the second target image based on the third reconstructed image and the fourth reconstructed image.
In some embodiments, the first imaging device and the second imaging device are rotated by a second angle relative to the target area. The target image determining sub-module may determine a plurality of first target sub-images corresponding to the plurality of first photographing directions based on the plurality of first image information and the plurality of second image information corresponding to the plurality of first photographing directions, and determine a plurality of second target sub-images corresponding to the plurality of second photographing directions based on the plurality of first image information and the plurality of second image information corresponding to the plurality of second photographing directions. Further, the target image determination sub-module may determine the target image based on the plurality of first target sub-images corresponding to the plurality of first photographing directions and the plurality of second target sub-images corresponding to the plurality of second photographing directions. In some embodiments, the target image determination sub-module may determine the first reconstructed image based on the plurality of first image information corresponding to the plurality of first photographing directions and the plurality of first image information corresponding to the plurality of second photographing directions, and determine the second reconstructed image based on the plurality of second image information corresponding to the plurality of first photographing directions and the plurality of second image information corresponding to the plurality of second photographing directions. Further, the target image determination sub-module may determine the target image based on the first reconstructed image and the second reconstructed image.
In some embodiments, the target image determination sub-module may determine the first reconstructed image based on a plurality of first image information corresponding to the plurality of first photographing directions and determine the second reconstructed image based on a plurality of second image information corresponding to the plurality of second photographing directions. Further, a target image determination sub-module may determine the target image based on the first reconstructed image and the second reconstructed image.
In some embodiments, the target image determination sub-module may determine a plurality of target sub-images based on the plurality of first image information corresponding to the plurality of first photographing directions and the plurality of second image information corresponding to the plurality of second photographing directions, and then determine a target image based on the plurality of target sub-images.
In some embodiments, the target image may include a bone image and a deboned image.
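One common way to obtain a bone image and a deboned (soft-tissue) image from low-energy (first) and high-energy (second) image information is dual-energy weighted logarithmic subtraction. The sketch below is a generic illustration of that technique, not the patent's specific algorithm; the weight value is illustrative rather than a calibrated constant.

```python
import numpy as np

# Generic dual-energy weighted log subtraction (an illustration, assuming
# intensities normalized into (0, 1]). Bone attenuates the low-energy beam
# more strongly than soft tissue does, so weighted differences of the log
# images can suppress one material or the other.
def decompose(low_img, high_img, w=0.5):
    """low_img/high_img: detected intensities; w: hypothetical tissue-cancel weight."""
    log_low = -np.log(np.clip(low_img, 1e-6, None))    # low-energy line integrals
    log_high = -np.log(np.clip(high_img, 1e-6, None))  # high-energy line integrals
    bone = log_low - log_high        # soft-tissue contribution largely cancels
    soft = log_high - w * log_low    # bone contribution largely cancels ("deboned")
    return bone, soft

low = np.array([[0.5]])   # stronger attenuation at low energy (bone-like pixel)
high = np.array([[0.8]])
bone, soft = decompose(low, high)
```

In practice the weight is obtained by calibration so that the chosen material cancels exactly; the point here is only that both target images derive from the same pair of first and second image information.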
In some embodiments, the location information determination sub-module may include an acquisition unit, a registration unit, and a verification unit. The acquisition unit may be used to acquire a treatment planning image. The registration unit may be used to register the bone image with the treatment planning image. The verification unit may determine the location information of the target area based on the deboned image and the treatment planning image.
In some embodiments, the location information determination sub-module may include a first determination unit, a second determination unit, a third determination unit, and a fourth determination unit. The first determination unit may determine a first coordinate and a second coordinate of the target region in the first coordinate system based on the first image information and the second image information, respectively.
The second determination unit may determine a first projection coordinate and a second projection coordinate corresponding to the first coordinate and the second coordinate, respectively, in the second coordinate system. In some embodiments, the second determining unit may determine the first projection coordinates and the second projection coordinates of the first coordinate and the second coordinate within the second coordinate system, respectively, based on a geometric relationship of the first coordinate system and the second coordinate system.
The third determination unit may determine a third coordinate of the target region within the second coordinate system based on the first projection coordinate and the second projection coordinate. In some embodiments, the third determining unit may determine a first straight line based on the projection coordinates of the first beam-out focal point in the second coordinate system and the first projection coordinates, determine a second straight line based on the projection coordinates of the second beam-out focal point in the second coordinate system and the second projection coordinates, and then determine the third coordinates of the target area in the second coordinate system based on the first straight line and the second straight line.
The fourth determination unit may determine fourth coordinates within the third coordinate system of the target area based on the third coordinates. In some embodiments, the fourth determining unit may determine a fourth coordinate corresponding to the third coordinate within the third coordinate system based on a mapping relationship of the second coordinate system and the third coordinate system. Wherein the mapping relation of the second coordinate system and the third coordinate system is related to the rotation angle of the imaging device relative to the target area.
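The line-intersection step performed by the third determination unit can be sketched geometrically: each beam-out focal point and its corresponding projection coordinate define a straight line, and the target coordinate is the point nearest both lines. The construction below is a standard triangulation, assumed rather than quoted from the patent, with coordinates invented for illustration.

```python
import numpy as np

# Illustrative triangulation: the point closest to two 3-D lines, each
# defined by a beam-out focal point and the corresponding projection
# coordinate (lines must not be parallel).
def closest_point_between_lines(p1, d1, p2, d2):
    """p: a point on each line; d: its direction vector."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Least-squares parameters t, s minimizing |(p1 + t*d1) - (p2 + s*d2)|.
    A = np.stack([d1, -d2], axis=1)
    t, s = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
    # Midpoint of the common perpendicular segment between the two lines.
    return (p1 + t * d1 + p2 + s * d2) / 2.0

# Two rays that intersect exactly at (1, 1, 0):
focal_a, proj_a = np.array([0.0, 0.0, 0.0]), np.array([2.0, 2.0, 0.0])
focal_b, proj_b = np.array([2.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])
target = closest_point_between_lines(focal_a, proj_a - focal_a,
                                     focal_b, proj_b - focal_b)
# target ≈ [1. 1. 0.]
```

With noisy projection data the two lines rarely intersect exactly, which is why taking the midpoint of the common perpendicular (rather than demanding an exact intersection) is the robust choice.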
It should be noted that the above description of the position monitoring system and its modules is provided for convenience of description only and is not intended to limit the application to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the system, various modules may be combined arbitrarily, or a subsystem may be constructed in connection with other modules, without departing from such principles. In some embodiments, the acquisition module and the determination module disclosed in fig. 2, or the sub-modules of the determination module, may be implemented as different modules in one system, or the functions of two or more of them may be implemented by a single module. For example, the modules may share one memory module, or each module may have its own memory module. Such variations are within the scope of the present application.
The imaging principle of the position monitoring system provided in the embodiments of the present specification will be described below with reference to the accompanying drawings.
Fig. 3A is a schematic diagram of an imaging principle of a position monitoring system according to some embodiments of the present description.
As shown in fig. 3A, the position monitoring system 300 may include a first imaging device 310 and a second imaging device 320. The first imaging device 310 may include a first bulb 311 and a first detector module 312, and the second imaging device 320 may include a second bulb 321 and a second detector module 322. The line between the first imaging device 310 (i.e., the first bulb 311) and the target area 330 and the line between the second imaging device 320 (i.e., the second bulb 321) and the target area 330 form a preset included angle. The first imaging device 310 is positioned in a first shooting direction relative to the target area 330, and the second imaging device 320 is positioned in a second shooting direction relative to the target area 330.
In some embodiments, the first bulb 311 and the second bulb 321 are capable of switching between emitting an imaging beam having the first imaging energy and emitting an imaging beam having the second imaging energy, so that beams of both energies can be emitted toward the target area from the first shooting direction and the second shooting direction, respectively. The first detector module 312 and the second detector module 322 can detect or receive these beams to generate the first image information and the second image information corresponding to the first shooting direction, and the first image information and the second image information corresponding to the second shooting direction. In some embodiments, the first image information refers to image information generated based on an imaging beam having the first imaging energy, and the second image information refers to image information generated based on an imaging beam having the second imaging energy, the first and second imaging energies being different. In some embodiments, the first imaging energy is less than the second imaging energy.
In some embodiments, the detector module may be configured with a plurality of different energy receiving layers, which may respectively detect or receive, from the imaging beam emitted by the bulb, components having different imaging energies, so as to acquire image information corresponding to those components. With this arrangement, a plurality of image information (e.g., the first image information and the second image information) of the target area corresponding to a plurality of different imaging energies can be acquired from a single exposure: the bulb emits one imaging beam whose spectrum contains the plurality of imaging energies (e.g., the first imaging energy and the second imaging energy), so the bulb does not need to switch between emitting beams of different imaging energies. This avoids the situation in which the target area moves between successive acquisitions at different imaging energies, which would change the position information of the target area.
Specifically, fig. 3B is a schematic structural diagram of the first detector module or the second detector module according to some embodiments of the present disclosure. Since the first detector module 312 and the second detector module 322 may have the same structure, both are represented by the single schematic shown in fig. 3B. As shown in fig. 3B, the first detector module 312 may include a first energy receiving layer 3121 and a second energy receiving layer 3122, and the second detector module 322 may include a third energy receiving layer 3221 and a fourth energy receiving layer 3222. The first bulb 311 and the second bulb 321 can emit, from the first shooting direction and the second shooting direction respectively, imaging beams having a third imaging energy that comprises the first imaging energy and the second imaging energy; that is, the imaging beam having the third imaging energy contains both an imaging beam having the first imaging energy and an imaging beam having the second imaging energy. The first-imaging-energy component of this beam can be detected or received by the first energy receiving layer and the third energy receiving layer to generate the first image information corresponding to the first shooting direction and to the second shooting direction, respectively, while the second-imaging-energy component can be detected or received by the second energy receiving layer and the fourth energy receiving layer to generate the second image information corresponding to the first shooting direction and to the second shooting direction, respectively.
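The single-exposure, dual-layer readout can be caricatured as a simple energy split. The threshold and photon energies below are invented for illustration and do not model real detector physics.

```python
# A deliberately simplified caricature (not a physical detector model) of
# the dual-layer readout: one polychromatic exposure carries photons at
# many energies; the top layer records those below a threshold (the first
# imaging energy band), the bottom layer those at or above it (the second).
def split_by_layer(photon_energies_kev, threshold_kev=70):
    top_layer = [e for e in photon_energies_kev if e < threshold_kev]
    bottom_layer = [e for e in photon_energies_kev if e >= threshold_kev]
    return top_layer, bottom_layer

beam = [40, 55, 65, 80, 95, 120]          # one "third imaging energy" beam
low_info, high_info = split_by_layer(beam)
# low_info feeds the first image information and high_info the second,
# from a single exposure: no tube switching, hence no inter-exposure
# motion of the target area between the two energy acquisitions.
```

In a real detector the split is done by layer materials and the filter layer described below, not by an explicit threshold, but the one-exposure/two-outputs structure is the same.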
In some embodiments, the first detector module 312 and/or the second detector module 322 may further include additional energy receiving layers, which may be used to detect or receive, within the imaging beam having the third imaging energy, imaging beams having a plurality of different imaging energies other than the first imaging energy and the second imaging energy, so as to obtain a plurality of image information corresponding to those imaging energies. In some embodiments, the plurality of different energy receiving layers in the detector module may be arranged from top to bottom in order of their corresponding imaging energies, from low to high. For example, a low energy receiving layer (the first energy receiving layer or the third energy receiving layer) may be disposed above a high energy receiving layer (the second energy receiving layer or the fourth energy receiving layer), and a filter layer may be disposed between them. The filter layer may be used to filter out low-energy imaging beams (e.g., the imaging beam having the first imaging energy) while allowing as much of the high-energy imaging beam (e.g., the imaging beam having the second imaging energy) as possible to pass through and reach the high energy receiving layer to be detected or received.
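The energy-separation behavior of such a layered detector can be sketched numerically. The spectrum, per-layer absorption probabilities, and filter model below are illustrative assumptions, not values from this disclosure; the sketch only shows why the top (low-energy) layer's signal is dominated by low-energy photons while the bottom (high-energy) layer's signal is dominated by high-energy photons:

```python
import numpy as np

# Hypothetical polychromatic beam: photon counts per energy bin (kV).
energies_kv = np.array([40, 60, 80, 100, 120, 140])
counts = np.array([500, 800, 900, 700, 400, 200.0])

# Assumed absorption probabilities that fall with photon energy: the top
# layer absorbs low-energy photons preferentially, and the filter layer
# between the layers further hardens the remaining beam.
p_top = np.clip(1.2 - energies_kv / 140.0, 0.05, 0.95)
p_filter = 0.5 * p_top

low_layer_signal = counts * p_top                        # top (low-E) layer
high_layer_signal = counts * (1 - p_top) * (1 - p_filter)  # bottom (high-E) layer

# Mean detected energy per layer: the top layer sees a softer spectrum
# than the bottom layer, so one exposure yields two energy-resolved images.
mean_e_low = np.average(energies_kv, weights=low_layer_signal)
mean_e_high = np.average(energies_kv, weights=high_layer_signal)
```

Under these assumed numbers the top layer's mean detected energy comes out well below the bottom layer's, which is the property the layered arrangement relies on.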
In some embodiments, the first imaging device 310 and the second imaging device 320 are rotatable relative to the target area 330, such that the first imaging device 310 and the second imaging device 320 have a plurality of first photographing directions and a plurality of second photographing directions, respectively, during rotation. In some embodiments, the first imaging device 310 can acquire first image information and/or second image information corresponding to the plurality of first photographing directions during rotation, and the second imaging device 320 can acquire first image information and/or second image information corresponding to the plurality of second photographing directions during rotation. In some embodiments, the first imaging device 310 and the second imaging device 320 may be rotated by a first angle, a second angle, or another angle relative to the target area. In some embodiments, the first angle may be the rotation angle required for the first and second imaging devices to perform digital tomosynthesis imaging of the target region, and the second angle may be the rotation angle required for the first and second imaging devices to perform cone-beam CT imaging of the target region. Further description of the first angle and the second angle can be found in the related description of fig. 8.
Fig. 4 is a schematic diagram of an imaging principle of a position monitoring system according to some embodiments of the present description.
As shown in fig. 4, the position monitoring system 400 may include an imaging device 410. The imaging device 410 may include a bulb 411 and a detector module 412, wherein the bulb 411 may include a cathode 4111 and an anode target 4112. The imaging device 410 (anode target 4112) has at least a first beam-out focus A and a second beam-out focus B, which may correspond to different positions on the anode target 4112 bombarded by electrons emitted from the cathode 4111 after acceleration. The imaging beam generated by electrons from the cathode 4111 striking the first beam-out focus A, after passing through the target area 430, can be detected or received by the detector module 412 to generate the first image information; the imaging beam generated by electrons from the cathode 4111 striking the second beam-out focus B, after passing through the target area 430, can be detected or received by the detector module 412 to generate the second image information. That is, the first image information is image information generated based on the imaging beam emitted from the first beam-out focus A, and the second image information is image information generated based on the imaging beam emitted from the second beam-out focus B.
In some embodiments, a magnetic deflection coil may be used to generate a deflection magnetic field that changes the position at which the electrons bombard the anode target, thereby producing the first beam-out focus A and the second beam-out focus B.
FIG. 5 is an exemplary flow chart of a location monitoring method according to some embodiments of the present description. The method 500 may be implemented by the position monitoring system 300 shown in fig. 3A or the position monitoring system 400 shown in fig. 4.
As shown in fig. 5, the method 500 may include the steps of:
step 510, acquiring one or more first image information and one or more second image information of a target area based on a plurality of different shooting directions. Specifically, step 510 may be performed by the acquisition module 210.
In some embodiments, the different shooting directions may be positions or directions of different imaging devices relative to the target area. For example, the first imaging device 310 in fig. 3A is positioned in a first photographing direction with respect to the target area 330, and the second imaging device 320 is positioned in a second photographing direction with respect to the target area 330. In some embodiments, different shooting directions may correspond to different beam-out foci of the imaging device, e.g., the first beam-out focus a and the second beam-out focus B in fig. 4 may correspond to different shooting directions.
In some embodiments, the first image information and the second image information may be image information corresponding to different imaging energies. For example, the first image information may correspond to a first imaging energy and the second image information may correspond to a second imaging energy. In some embodiments, the first imaging energy may be less than the second imaging energy. In some embodiments, the first imaging energy and/or the second imaging energy may be on the kV level; for example, the first imaging energy may be 80 kV and the second imaging energy may be 140 kV. In some embodiments, the first imaging energy and/or the second imaging energy may be on the MV level. It should be noted that the imaging energy (such as the first imaging energy, the second imaging energy, or the third imaging energy) referred to in the embodiments of the present disclosure refers to the upper energy limit of the imaging beam; through processing by the detector during imaging, energy data not greater than this upper limit can also be obtained from the imaging beam. For example, when the first imaging energy is 80 kV, the upper energy limit of the first imaging energy is 80 kV, and image information corresponding to any imaging energy of 80 kV or less can be obtained from the imaging beam through processing by the detector during imaging.
In some embodiments, acquiring one or more first image information and one or more second image information of the target region may be accomplished by the bulb of the imaging device emitting an imaging beam having the first imaging energy and an imaging beam having the second imaging energy, respectively, toward the target region. Specifically, the bulb may emit an imaging beam having the corresponding imaging energy by setting the tube voltage of the bulb; for example, when the tube voltage of the bulb is 80 kV, the imaging energy of the emitted imaging beam is 80 kV.
In some embodiments, acquiring one or more first image information and one or more second image information of the target region may be accomplished by the bulb of the imaging device emitting an imaging beam having a third imaging energy toward the target region, with detection or reception by the different energy receiving layers of the detector module of the imaging device. The third imaging energy may include the first imaging energy and the second imaging energy, and the imaging beam having the third imaging energy may include an imaging beam having the first imaging energy and an imaging beam having the second imaging energy. For example, if the first imaging energy is 80 kV and the second imaging energy is 140 kV, the third imaging energy may be an imaging energy greater than or equal to 140 kV, i.e., the energy value range of the third imaging energy includes the energy value ranges of the first imaging energy and the second imaging energy.
In particular, the different energy receiving layers are capable of detecting or receiving imaging beams having different imaging energies within the imaging beam having the third imaging energy, to generate image information corresponding to the different imaging energies. For example, the first energy receiving layer and the third energy receiving layer described in fig. 3B can detect or receive the imaging beam having the first imaging energy to produce first image information, and the second energy receiving layer and the fourth energy receiving layer can detect or receive the imaging beam having the second imaging energy to produce second image information. In this specification, an imaging beam having a certain energy is understood to include rays of multiple energies not exceeding that energy; for example, an 80 kV imaging beam may be composed of rays of multiple energies at or below 80 kV (e.g., 70 kV, 60 kV, 50 kV, etc.). The image information referred to in this specification may include at least a portion of the imaging beam detected or received by the detector module, and an image generated by the detector module based on that detected or received portion.
In some embodiments, the first image information and the second image information may be image information corresponding to different beam focus points of the imaging device (bulb). In some embodiments, acquiring one or more first image information and one or more second image information of the target region may be implemented by an imaging device (bulb) emitting the imaging beam from different beam-out foci. In some embodiments, the imaging device may include at least a first beam exit focus and a second beam exit focus, the image information acquired based on the imaging beam emitted from the first beam exit focus may be first image information, and the image information acquired based on the imaging beam emitted from the second beam exit focus may be second image information.
Step 520, determining location information of the target area based on the one or more first image information and the one or more second image information. Specifically, step 520 may be performed by the determination module 220.
In some embodiments, when the acquired first image information and second image information are image information corresponding to different energies, in step 520 a target image of the target region may be determined based on the one or more first image information and the one or more second image information, and the position information of the target region may then be determined based on the target image. In some embodiments, the target image may include a first target image and a second target image, and the location information of the target region may be determined based on the first target image and the second target image. In some embodiments, the target image may include images of different tissues of the target region, e.g., a deboned (soft-tissue) image and a bone image. In some embodiments, the target image may be a two-dimensional image or a three-dimensional image. For example, when the first and second imaging devices are fixed in position relative to the target area, each in its own shooting direction, the finally determined target image may be a two-dimensional image; when the first and second imaging devices are rotatable relative to the target area, so that each has a plurality of different shooting directions during rotation, the finally determined target image may be a three-dimensional image. Details of how to determine the target image of the target area, and how to determine the position information of the target area based on the target image, can be found elsewhere in this specification, e.g., figs. 6-10 and their associated descriptions.
In some embodiments, when the acquired first image information and second image information are image information corresponding to different beam-out foci, in step 520 the position information of the target area may be determined directly based on the first image information and second image information, the coordinates of the beam-out foci, and the like. Details of how to determine the position information of the target area based on the first and second image information, the coordinates of the beam-out foci, and the like can be found elsewhere in this specification, e.g., fig. 12 and its associated description.
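As a hedged illustration of how two beam-out foci can yield position information directly, the sketch below assumes an idealized pinhole geometry (both foci on a source plane, a flat detector at distance L); the function names and numbers are hypothetical and do not reproduce the exact algorithm of fig. 12:

```python
# Idealized two-focus triangulation sketch (hypothetical geometry, not the
# exact algorithm of this disclosure). Focus A sits at x = 0 and focus B at
# x = d on the source plane; the flat detector lies at distance L. A point
# at (x, z) projects to different detector coordinates from the two foci,
# and the parallax between the projections encodes the point's depth.

def project(focus_x, point_x, point_z, L):
    """Detector x-coordinate of a point imaged from a focus at (focus_x, 0)."""
    # Similar triangles along the ray from the focus through the point.
    return focus_x + (point_x - focus_x) * L / point_z

def locate(u_a, u_b, d, L):
    """Recover (x, z) from projections u_a (focus A) and u_b (focus B)."""
    z = L * d / (d + (u_a - u_b))  # solves u_a - u_b = d * (L / z - 1)
    x = u_a * z / L                # back-project along the ray from focus A
    return x, z

# Example: foci 100 mm apart, detector 1500 mm from the source plane,
# marker at x = 50 mm, depth z = 1000 mm.
u_a = project(0.0, 50.0, 1000.0, 1500.0)    # 75.0
u_b = project(100.0, 50.0, 1000.0, 1500.0)  # 25.0
x, z = locate(u_a, u_b, 100.0, 1500.0)      # recovers (50.0, 1000.0)
```

The shift between the two projections of the same marker is what makes depth recoverable from a single detector, which is why image information from two beam-out foci suffices for position determination.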
In some embodiments, the first imaging energy corresponding to the one or more first image information differs from the second imaging energy corresponding to the one or more second image information. From first and second image information corresponding to different imaging energies, clearer images of the various tissues (such as bones and soft tissues) in the target area can be determined as target images of the target area. Such clearer tissue images can be used to obtain better registration results (e.g., a bone image), or to overcome difficulty in identifying the target area (or markers) or occlusion thereof (e.g., a deboned image), so that the position information of the target area can be better determined from the target image.
Fig. 6 is an exemplary flowchart of a method of determining location information of a target area according to some embodiments of the present description. As shown in fig. 6, the method 600 may include the steps of:
In step 610, a target image of the target region is determined based on the one or more first image information and the one or more second image information. Step 610 may be performed by the target image determination submodule.
In some embodiments, the target image may include a first target image and a second target image. The target image determination sub-module may determine the first target image based on the first image information and the second image information corresponding to the first photographing direction, and determine the second target image based on the first image information and the second image information corresponding to the second photographing direction. In some embodiments, the first target image and/or the second target image may be two-dimensional images. For more description of this section, reference may be made to fig. 7 and its associated description.
In some embodiments, the target image may include a first target image and a second target image. When the first imaging device and the second imaging device are rotated by the first angle relative to the target region, the target image determination sub-module may determine the first target image based on the plurality of first image information and the plurality of second image information corresponding to the plurality of first photographing directions, and determine the second target image based on the plurality of first image information and the plurality of second image information corresponding to the plurality of second photographing directions. In some embodiments, the first target image and/or the second target image may be three-dimensional images. For more description of this portion, reference may be made to the description in fig. 8 relating to the first and second imaging devices being rotated by a first angle relative to the target area.
In some embodiments, the target image determining sub-module may determine the target image of the target region based on the plurality of first and second image information corresponding to the plurality of first photographing directions and the plurality of first and second image information corresponding to the plurality of second photographing directions when the first and second imaging devices are rotated by the second angle with respect to the target region. In some embodiments, the target image may be a three-dimensional image. For more description of this portion, reference may be made to fig. 8 for a description of the first and second imaging devices rotated by a second angle relative to the target area.
In some embodiments, the target image determination sub-module may determine the target image of the target region based on the first image information corresponding to the plurality of first photographing directions and the second image information corresponding to the plurality of second photographing directions. In some embodiments, the target image may be a three-dimensional image. For more description of this section, reference may be made to fig. 9 and its associated description.
In some embodiments, the plurality of first photographing directions and the plurality of second photographing directions may be respectively implemented by rotating the first imaging device and the second imaging device by a first angle or a second angle with respect to the target area.
Step 620, determining location information of the target area based on the target image. In particular, step 620 may be performed by the location information determination submodule.
In some embodiments, the location information determination submodule may determine location information of the target region from the target image through an algorithm such as image recognition. In some embodiments, when the target image includes a first target image and a second target image, the location information determining sub-module may determine the location information of the target region from the first target image and the second target image through an algorithm such as image recognition.
In some embodiments, the location information determination submodule may calculate an equivalent atomic number from the target image, from which the location information of the target region is determined.
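As a rough illustration of the equivalent-atomic-number approach, the sketch below uses the common power-law mixture rule; the exponent m ≈ 2.94 and the electron fractions for water are textbook-style assumptions, not values from this disclosure:

```python
def z_eff(fractions_and_z, m=2.94):
    """Effective atomic number from (electron fraction, atomic number) pairs,
    using the power-law mixture rule Z_eff = (sum_i f_i * Z_i**m) ** (1/m)."""
    return sum(f * z ** m for f, z in fractions_and_z) ** (1.0 / m)

# Water: of its 10 electrons per molecule, 2 belong to H (Z=1) and 8 to O
# (Z=8), giving electron fractions of 0.2 and 0.8.
zw = z_eff([(0.2, 1), (0.8, 8)])
# The result lands near the commonly quoted ~7.4 for water.
```

In a dual-energy setting, an equivalent atomic number computed per pixel distinguishes high-Z material such as bone (or implanted markers) from soft tissue, which is one route from the target image to the position of the target region.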
A detailed description will be given below of how to determine a target image of a target area with reference to the drawings.
Fig. 7 is an exemplary flow chart of a method of determining a target image according to some embodiments of the present description.
As shown in fig. 7, method 700 may include the steps of:
step 710, acquiring first image information and second image information corresponding to a first shooting direction and first image information and second image information corresponding to a second shooting direction. In particular, step 710 may be an implementation of step 510, i.e., step 710 may be performed by acquisition module 210.
In some embodiments, the acquisition module 210 may acquire the first image information and the second image information in the same shooting direction by having the bulb of the imaging device emit the imaging beam with the first imaging energy and the imaging beam with the second imaging energy toward the target area, respectively. Further, in step 710, the acquisition module 210 may control the bulbs of the first imaging device and the second imaging device to emit an imaging beam with the first imaging energy toward the target area from the first shooting direction and the second shooting direction, respectively; this imaging beam may be received by the detector modules of the first imaging device and the second imaging device to acquire the first image information of the target area corresponding to the first shooting direction and the first image information corresponding to the second shooting direction, respectively. The acquisition module 210 may further control the bulbs of the first imaging device and the second imaging device to emit an imaging beam with the second imaging energy toward the target area from the first shooting direction and the second shooting direction, respectively; this imaging beam may be received by the detector modules of the first imaging device and the second imaging device to acquire the second image information of the target area corresponding to the first shooting direction and the second image information corresponding to the second shooting direction, respectively. In some embodiments, the bulbs of the first imaging device and the second imaging device may switch between emitting the imaging beam having the first imaging energy and the imaging beam having the second imaging energy.
By way of example only, after the bulbs of the first imaging device and the second imaging device have acquired the first image information of the target region corresponding to the first photographing direction and the first image information corresponding to the second photographing direction, the tube voltages of the bulbs may be switched to the tube voltage corresponding to the second imaging energy, so as to emit the imaging beam having the second imaging energy toward the target region from the first photographing direction and the second photographing direction, respectively, and thereby acquire the second image information corresponding to the first photographing direction and the second image information corresponding to the second photographing direction, respectively. In some embodiments, the time interval for switching between emitting the imaging beam with the first imaging energy and the imaging beam with the second imaging energy is 30 μs to 300 μs; any movement of the target area within this interval is negligible, ensuring that the difference between the position information contained in the first image information and that contained in the second image information for a given photographing direction is small.
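The claim that the 30–300 μs switching window makes target motion negligible can be sanity-checked with simple arithmetic; the assumed target speed of 20 mm/s (fast physiological motion) is an illustrative figure, not a value from this disclosure:

```python
# Worst-case target displacement during the kV-switching interval.
speed_mm_per_s = 20.0  # assumed peak target speed (illustrative)
for interval_us in (30, 300):  # switching interval bounds from the text
    displacement_um = speed_mm_per_s * 1e3 * (interval_us * 1e-6)
    print(f"{interval_us} us -> {displacement_um:.2f} um of target motion")
# Even at the 300 us upper bound the target moves only ~6 um, far smaller
# than a typical detector pixel, so the first and second image information
# describe essentially the same target position.
```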
In some embodiments, acquiring the first image information and the second image information of the same shooting direction may be achieved by the bulb of the imaging device transmitting the imaging beam with the third imaging energy to the target area and the different energy receiving layers of the detector module of the imaging device detecting or receiving. In some embodiments, the detector module of the first imaging device may comprise at least a first energy receiving layer and a second energy receiving layer, and the detector module of the second imaging device may comprise at least a third energy receiving layer and a fourth energy receiving layer.
Further, in step 710, the acquisition module 210 may control the bulb of the first imaging device to emit an imaging beam with the third imaging energy from the first shooting direction toward the target area; this imaging beam is received by the first energy receiving layer and the second energy receiving layer to acquire the first image information and the second image information of the target area corresponding to the first shooting direction, respectively. The acquisition module 210 may further control the bulb of the second imaging device to emit an imaging beam with the third imaging energy from the second shooting direction toward the target region; this imaging beam is received by the third energy receiving layer and the fourth energy receiving layer to acquire the first image information and the second image information of the target area corresponding to the second shooting direction, respectively. In some embodiments, the third imaging energy comprises the first imaging energy and the second imaging energy, i.e., the imaging beam having the third imaging energy comprises an imaging beam having the first imaging energy and an imaging beam having the second imaging energy.
Further, within the imaging beams having the third imaging energy emitted by the bulbs of the first imaging device and the second imaging device, the imaging beam having the first imaging energy may be detected or received by the first energy receiving layer and the third energy receiving layer, respectively, to acquire first image information corresponding to the first shooting direction and the second shooting direction, respectively; the imaging beam having the second imaging energy may be detected or received by the second energy receiving layer and the fourth energy receiving layer, respectively, to acquire second image information corresponding to the first shooting direction and the second shooting direction, respectively.
In some embodiments, the first energy receiving layer and the third energy receiving layer may have the same structure, and the second energy receiving layer and the fourth energy receiving layer may have the same structure. For example, the first energy receiving layer and the third energy receiving layer may comprise a scintillator material with higher sensitivity to imaging beams (photons) of the first imaging energy, and the second energy receiving layer and the fourth energy receiving layer may comprise a scintillator material with higher sensitivity to imaging beams (photons) of the second imaging energy. In some embodiments, each energy receiving layer in the embodiments of the present description includes transistors in addition to the scintillator material.
Step 720, determining a first target image corresponding to the first shooting direction based on the first image information and the second image information corresponding to the first shooting direction; and determining a second target image corresponding to the second photographing direction based on the first image information and the second image information corresponding to the second photographing direction. Step 720 may be performed by the target image determination submodule.
In some embodiments, the target image determination sub-module may obtain the first target image by performing dual-energy processing on the first image information and the second image information corresponding to the first photographing direction, and may obtain the second target image by performing dual-energy processing on the first image information and the second image information corresponding to the second photographing direction. Specifically, because different tissues of the target area attenuate the imaging beam with the first imaging energy and the imaging beam with the second imaging energy differently, the first image information and the second image information corresponding to the first shooting direction and/or the second shooting direction can be processed by a dual-energy image algorithm to obtain the attenuation of the different tissues at the first imaging energy and at the second imaging energy, so as to distinguish those tissues and thereby obtain clearer images of them; for example, the resulting first target image and second target image may each be a bone image or a deboned image corresponding to the first shooting direction and the second shooting direction, respectively. In some embodiments, the dual-energy image algorithm may include a basis material decomposition algorithm, an equivalent atomic number algorithm, an electron density algorithm, a K-absorption-edge imaging algorithm, or the like, or a combination thereof.
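The dual-energy separation described above can be illustrated with a minimal weighted log-subtraction sketch. The attenuation coefficients, path lengths, and weight below are illustrative assumptions, not the calibrated values a real system would use:

```python
# Minimal dual-energy (weighted log-subtraction) sketch. Bone attenuates
# low-energy beams much more strongly than soft tissue does, so a suitable
# weighted difference of the low- and high-energy log images cancels the
# soft-tissue contribution and leaves a "bone image".
mu_bone = {"low": 0.60, "high": 0.30}    # assumed attenuation (1/cm)
mu_tissue = {"low": 0.25, "high": 0.20}

t_bone, t_tissue = 1.0, 8.0  # path lengths (cm) through bone / soft tissue

# Log-domain signals of one pixel at the two imaging energies
# (Beer-Lambert: -ln(I/I0) = sum of mu * thickness).
log_low = mu_bone["low"] * t_bone + mu_tissue["low"] * t_tissue
log_high = mu_bone["high"] * t_bone + mu_tissue["high"] * t_tissue

# Choose the weight so the soft-tissue term cancels exactly.
w = mu_tissue["low"] / mu_tissue["high"]
bone_signal = log_low - w * log_high
# bone_signal now depends only on the bone path length:
# bone_signal == t_bone * (mu_bone_low - w * mu_bone_high)
```

With the weight chosen from the low/high attenuation ratio of soft tissue, `bone_signal` is independent of `t_tissue`, which is the mechanism behind the deboned and bone images mentioned above.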
In some embodiments, to further meet the requirement for target area location monitoring, the location monitoring system may acquire a first target image and a second target image in three dimensions to achieve the purpose of monitoring the three-dimensional location of the target area in real time.
In some embodiments, the position monitoring system (e.g., position monitoring system 300) in embodiments of the present description may acquire a first target image and a second target image in three dimensions by performing method 800. Wherein the first imaging device and the second imaging device have a plurality of first photographing directions and second photographing directions, respectively, during rotation.
In some embodiments, the method 800 may be implemented by rotating the first imaging device and the second imaging device by a first angle or a second angle relative to the target area. Fig. 8 is an exemplary flow chart of a method of determining a target image according to some embodiments of the present description. As shown in fig. 8, the method 800 may include the steps of:
step 810, acquiring a plurality of first image information and a plurality of second image information corresponding to a plurality of first shooting directions and a plurality of first image information and a plurality of second image information corresponding to a plurality of second shooting directions of a target area. In particular, step 810 may be an implementation of step 510, i.e., step 810 may be performed by acquisition module 210.
In step 810, the acquisition module 210 may control the first imaging device to acquire a plurality of first image information of the target area corresponding to the plurality of first shooting directions during rotation, and control the second imaging device to acquire a plurality of first image information corresponding to the plurality of second shooting directions during rotation. The acquisition module 210 may further control the first imaging device to acquire a plurality of second image information of the target area corresponding to the plurality of first shooting directions during rotation, and control the second imaging device to acquire a plurality of second image information of the target area corresponding to the plurality of second shooting directions during rotation.
In some embodiments, the first imaging device and the second imaging device may be disposed on a gantry (e.g., gantry 111) of the radiation therapy device, and the acquisition module 210 may control the gantry to rotate so that the first imaging device and the second imaging device rotate by the first angle or the second angle relative to the target region. During rotation, the first imaging device and the second imaging device may image the target region from a plurality of first shooting directions and a plurality of second shooting directions, respectively, to acquire first image information and second image information of the target region corresponding to the plurality of shooting directions.
In some embodiments, the acquisition module 210 may control the bulbs of the first imaging device and the second imaging device to emit, during rotation, an imaging beam having the first imaging energy toward the target area from the plurality of first shooting directions and the plurality of second shooting directions, respectively; this imaging beam is received by the detector modules of the first imaging device and the second imaging device to acquire a plurality of first image information of the target area corresponding to the plurality of first shooting directions and a plurality of first image information corresponding to the plurality of second shooting directions, respectively. The acquisition module 210 may further control the bulbs of the first imaging device and the second imaging device to emit, during rotation, an imaging beam having the second imaging energy toward the target area from the plurality of first shooting directions and the plurality of second shooting directions, respectively; this imaging beam is received by the detector modules of the first imaging device and the second imaging device to acquire a plurality of second image information of the target area corresponding to the plurality of first shooting directions and a plurality of second image information corresponding to the plurality of second shooting directions, respectively.
In some embodiments, the acquiring module 210 may control the bulb of the first imaging device to respectively emit imaging beams with third imaging energy from the plurality of first shooting directions to the target area during rotation, where the imaging beams with the third imaging energy are received by the first energy receiving layer and the second energy receiving layer to respectively acquire a plurality of first image information and a plurality of second image information of the target area corresponding to the plurality of first shooting directions. Specifically, the first energy receiving layer may receive an imaging beam having the first imaging energy among the imaging beams having the third imaging energy to acquire a plurality of first image information corresponding to the plurality of first photographing directions. The second energy receiving layer is used for receiving imaging beams with the second imaging energy in the imaging beams with the third imaging energy so as to acquire a plurality of second image information corresponding to the plurality of first shooting directions. The acquiring module 210 may further control the bulb of the second imaging device to respectively emit imaging beams with third imaging energy from the plurality of second shooting directions to the target area during rotation, where the imaging beams with the third imaging energy are received by the third energy receiving layer and the fourth energy receiving layer to respectively acquire a plurality of first image information and a plurality of second image information of the target area corresponding to the plurality of second shooting directions. Specifically, the third energy receiving layer may receive an imaging beam having the first imaging energy among the imaging beams having the third imaging energy to acquire a plurality of first image information corresponding to the plurality of second photographing directions.
The fourth energy receiving layer is used for receiving imaging beams with second imaging energy in the imaging beams with third imaging energy to acquire a plurality of second image information corresponding to a plurality of second shooting directions.
Step 820, determining a target image of the target area based on the plurality of first image information and the plurality of second image information corresponding to the plurality of first shooting directions and the plurality of first image information and the plurality of second image information corresponding to the plurality of second shooting directions. Specifically, step 820 may be performed by the target image determination submodule.
In some embodiments, the target image may include a first target image and a second target image when the first imaging device and the second imaging device are rotated a first angle relative to the target region. Step 820 may include: determining a plurality of first target sub-images corresponding to the plurality of first photographing directions based on the plurality of first image information and the plurality of second image information corresponding to the plurality of first photographing directions; and determining a plurality of second target sub-images corresponding to the plurality of second photographing directions based on the plurality of first image information and the plurality of second image information corresponding to the plurality of second photographing directions; the first target image is determined based on a plurality of first target sub-images corresponding to a plurality of first photographing directions, and the second target image is determined based on a plurality of second target sub-images corresponding to a plurality of second photographing directions.
In some embodiments, the target image determining sub-module may perform dual-energy processing on the first image information and the second image information corresponding to the first shooting directions through a dual-energy image algorithm to obtain the first target sub-images corresponding to the first shooting directions. The first target sub-image may be an image obtained by performing dual-energy processing on the first image information and the second image information corresponding to each first shooting direction. The target image determining sub-module can also perform dual-energy processing on the first image information and the second image information corresponding to the second shooting directions through a dual-energy image algorithm to obtain second target sub-images corresponding to the second shooting directions. The second target sub-image may be an image obtained by performing dual-energy processing on the first image information and the second image information corresponding to each second shooting direction. Further, the target image determination sub-module may, through a digital tomosynthesis (Digital Tomosynthesis, DTS) technique, reconstruct the plurality of first target sub-images corresponding to the plurality of first photographing directions to obtain the first target image, and reconstruct the plurality of second target sub-images corresponding to the plurality of second photographing directions to obtain the second target image. In some embodiments, in order to meet the image reconstruction requirements of the digital tomosynthesis technique, the first angle may be 30° to 50°. In some embodiments, the first angle may be selected based on the rotational speed of the first imaging device and the second imaging device (i.e., the gantry) and/or the image acquisition frequency of the first imaging device and the second imaging device.
For example, when the image acquisition frequency (e.g., the number of times the target area is photographed per unit time) of the first and second imaging devices is fixed, the higher the rotational speed of the first and second imaging devices (i.e., the gantry), the larger the first angle may be (e.g., greater than 50°). For another example, when the rotational speed of the first and second imaging devices is fixed, the higher the image acquisition frequency of the first and second imaging devices, the smaller the first angle may be (e.g., less than 30°).
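The trade-off above can be made concrete: for a given gantry speed and acquisition frequency, the number of projections collected while sweeping the first angle is fixed, so a faster gantry needs a larger arc to keep the same number of views. A minimal sketch with illustrative numbers (none taken from the specification):

```python
def projections_over_arc(arc_deg: float, gantry_speed_deg_s: float,
                         frames_per_s: float) -> int:
    """Number of projections acquired while sweeping an arc.

    The time to sweep the arc is arc / speed; multiplying by the
    frame rate gives the projection count available for DTS
    reconstruction.
    """
    sweep_time_s = arc_deg / gantry_speed_deg_s
    return int(sweep_time_s * frames_per_s)

# Illustrative numbers: a 40-degree DTS arc at 6 deg/s with 15 frames/s
# yields 100 projections; doubling the gantry speed halves the count,
# so a larger arc would be needed to keep the same number of views.
print(projections_over_arc(40, 6, 15))   # 100
print(projections_over_arc(40, 12, 15))  # 50
```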
In some embodiments, the target image determining sub-module may also perform image reconstruction on the first image information and the second image information corresponding to the plurality of first shooting directions by using a digital tomosynthesis technique to obtain a first reconstructed image and a second reconstructed image, and perform image reconstruction on the first image information and the second image information corresponding to the plurality of second shooting directions to obtain a third reconstructed image and a fourth reconstructed image. Further, the target image determining sub-module may perform dual-energy processing on the first reconstructed image and the second reconstructed image to obtain the first target image, and perform dual-energy processing on the third reconstructed image and the fourth reconstructed image to obtain the second target image.
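The dual-energy processing referred to throughout this section is commonly implemented as weighted log subtraction of the low- and high-energy images, which separates the result into a bone image and a de-boned (soft-tissue) image. The specification does not fix a particular algorithm, so the following is only a minimal sketch under a two-material linear attenuation assumption, with illustrative attenuation coefficients:

```python
import numpy as np

def dual_energy_subtraction(low, high, w_bone, w_soft):
    """Weighted log subtraction of a low/high-energy image pair.

    ln(low) - w * ln(high) cancels the material whose low/high
    attenuation ratio equals w: using the soft-tissue ratio leaves a
    bone image, and using the bone ratio leaves a de-boned image.
    """
    log_low, log_high = np.log(low), np.log(high)
    bone_image = log_low - w_soft * log_high     # soft tissue cancelled
    deboned_image = log_low - w_bone * log_high  # bone cancelled
    return bone_image, deboned_image

# Toy example: two materials with assumed attenuation at each energy.
mu_bone = {"low": 0.6, "high": 0.3}
mu_soft = {"low": 0.2, "high": 0.15}
t_bone = np.array([[0.0, 1.0], [0.0, 0.0]])  # bone thickness map
t_soft = np.array([[1.0, 1.0], [1.0, 0.0]])  # soft-tissue thickness map

# Beer-Lambert intensities seen by the detector at each energy.
low = np.exp(-(mu_bone["low"] * t_bone + mu_soft["low"] * t_soft))
high = np.exp(-(mu_bone["high"] * t_bone + mu_soft["high"] * t_soft))

bone_img, deboned_img = dual_energy_subtraction(
    low, high,
    w_bone=mu_bone["low"] / mu_bone["high"],
    w_soft=mu_soft["low"] / mu_soft["high"])
# bone_img now varies only with bone thickness, deboned_img only with
# soft-tissue thickness.
```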
In some embodiments, the first and second imaging devices perform Cone Beam CT (CBCT) imaging of the target region when the first and second imaging devices are rotated a second angle relative to the target region. Step 820 may include: determining a plurality of first target sub-images corresponding to the plurality of first photographing directions based on the plurality of first image information and the plurality of second image information corresponding to the plurality of first photographing directions, and determining a plurality of second target sub-images corresponding to the plurality of second photographing directions based on the plurality of first image information and the plurality of second image information corresponding to the plurality of second photographing directions; the target image is determined based on a plurality of first target sub-images corresponding to the plurality of first photographing directions and a plurality of second target sub-images corresponding to the plurality of second photographing directions.
In some embodiments, the target image determining sub-module may perform dual-energy processing on the first image information and the second image information corresponding to the first shooting directions through a dual-energy image algorithm to obtain the first target sub-images corresponding to the first shooting directions. The target image determining sub-module can also perform dual-energy processing on the first image information and the second image information corresponding to the second shooting directions through a dual-energy image algorithm to obtain the second target sub-images corresponding to the second shooting directions. Further, the target image determining sub-module may perform image reconstruction on the plurality of first target sub-images corresponding to the plurality of first shooting directions and the plurality of second target sub-images corresponding to the plurality of second shooting directions through a CBCT image reconstruction algorithm to obtain a target image of the target region. In some embodiments, to meet the image reconstruction requirements of CBCT, the second angle may be no less than 360°. In some embodiments, the second angle may be at least 180° plus the fan angle of the imaging beam. In some embodiments, the second angle may be 360° minus a preset included angle between the line connecting the first imaging device and the target area and the line connecting the second imaging device and the target area. In some embodiments, the second angle may be 360° or more. In some embodiments, like the first angle, the second angle may also be selected based on the rotational speed of the first and second imaging devices (i.e., the gantry) and/or the image acquisition frequency of the first and second imaging devices: the higher the rotational speed, the larger the second angle may be; the higher the image acquisition frequency, the smaller the second angle may be.
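The alternative minimum-arc rules above reduce to simple arithmetic; a sketch of the two closed-form candidates (the 20° fan angle and 90° device separation are illustrative values, not from the specification):

```python
def short_scan_minimum(fan_angle_deg: float) -> float:
    """Minimum single-source CBCT arc: 180 degrees plus the fan angle."""
    return 180.0 + fan_angle_deg

def dual_source_full_scan(preset_included_angle_deg: float) -> float:
    """With two imaging devices separated by a known included angle,
    the second device covers the final wedge of views, so the gantry
    itself only needs 360 degrees minus that angle."""
    return 360.0 - preset_included_angle_deg

print(short_scan_minimum(20.0))     # 200.0
print(dual_source_full_scan(90.0))  # 270.0
```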
In some embodiments, the target determination submodule may also perform image reconstruction on the first image information corresponding to the plurality of first shooting directions and the first image information corresponding to the plurality of second shooting directions through a CBCT image reconstruction algorithm to obtain a first reconstructed image, perform image reconstruction on the plurality of second image information corresponding to the plurality of first shooting directions and the plurality of second image information corresponding to the plurality of second shooting directions to obtain a second reconstructed image, and then perform dual-energy processing on the first reconstructed image and the second reconstructed image to obtain the target image of the target area.
In some embodiments, a position monitoring system (e.g., position monitoring system 300) in embodiments of the present description may acquire a three-dimensional target image of a target area by performing method 900. The first imaging device and the second imaging device have a plurality of first photographing directions and a plurality of second photographing directions, respectively, during rotation, and the plurality of first photographing directions may be the same as the plurality of second photographing directions. Wherein the first imaging device and the second imaging device are rotatable relative to the target area by an angle of rotation of at least 360 degrees.
Fig. 9 is an exemplary flow chart of a method of determining a target image according to some embodiments of the present description.
As shown in fig. 9, method 900 may include the steps of:
step 910, a plurality of first image information corresponding to a plurality of first shooting directions and a plurality of second image information corresponding to a plurality of second shooting directions of the target area are obtained. In particular, step 910 may be an implementation of step 510, i.e., step 910 may be performed by acquisition module 210.
In step 910, the acquiring module 210 may control the first imaging device to emit imaging beams with the first imaging energy from the plurality of first capturing directions to the target area during rotation, so as to acquire a plurality of first image information of the target area corresponding to the plurality of first capturing directions. The acquiring module 210 may further control the second imaging device to respectively emit imaging beams with the second imaging energy from the plurality of second capturing directions to the target area during rotation, so as to acquire a plurality of second image information of the target area corresponding to the plurality of second capturing directions.
In step 920, a target image of the target area is determined based on the plurality of first image information corresponding to the plurality of first photographing directions and the plurality of second image information corresponding to the plurality of second photographing directions. In particular, step 920 may be performed by the target image determination submodule. In some embodiments, step 920 may include: determining a first reconstructed image based on a plurality of first image information corresponding to a plurality of first photographing directions; and determining a second reconstructed image based on the plurality of second image information corresponding to the plurality of second photographing directions; a target image is determined based on the first reconstructed image and the second reconstructed image.
In some embodiments, the target image determination sub-module may perform image reconstruction on the plurality of first image information corresponding to the plurality of first shooting directions by a CBCT image reconstruction algorithm to obtain a first reconstructed image. The target image determining sub-module can also reconstruct a plurality of second image information corresponding to a plurality of second shooting directions through a CBCT image reconstruction algorithm to obtain a second reconstructed image. Wherein the first reconstructed image corresponds to a first imaging energy and the second reconstructed image corresponds to a second imaging energy. Further, the target determination submodule may perform dual-energy processing on the first reconstructed image and the second reconstructed image to obtain a target image of the target region.
In some embodiments, in the method 900, the target image determining sub-module may also perform dual-energy processing on the first image information corresponding to each first shooting direction and the second image information corresponding to the identical second shooting direction to obtain a plurality of target sub-images, and then perform image reconstruction on the plurality of target sub-images to obtain the target image of the target area. Specifically, the target image determining sub-module may perform dual-energy processing on the first image information and the second image information corresponding to each pair of identical first and second shooting directions to obtain the target sub-image corresponding to that shooting direction, and then perform image reconstruction on the target sub-images corresponding to all of the shooting directions through a CBCT image reconstruction algorithm to obtain the target image.
In some embodiments, in methods 700, 800, and 900, the images obtained by the dual-energy processing (e.g., the first and second target images determined by method 700, the first and second target images determined when the first and second imaging devices in method 800 are rotated by the first angle relative to the target region, the target images determined when the first and second imaging devices in method 800 are rotated by the second angle relative to the target region, and the target images determined in method 900) may each include a bone image and a deboned image, which may be used to verify the accuracy of the positional information of the target region. In some embodiments, the bone image and the deboned image may be two-dimensional images or three-dimensional images.
Fig. 10 is an exemplary flow chart of a method of verifying location information according to some embodiments of the present description.
As shown in fig. 10, the method 1000 may include the steps of:
Step 1010, acquiring a treatment plan image. In particular, step 1010 may be performed by the treatment planning image acquisition unit.
In some embodiments, the treatment planning image may be an image that a physician formulates for a subject (e.g., a patient) before radiation treatment for planning a treatment region or treatment modality. In some embodiments, the treatment planning image may include a CT image. In some embodiments, the treatment planning image may be pre-acquired and stored in a storage device (e.g., storage device 150) of the position monitoring system, and the treatment planning image acquisition unit may acquire the treatment planning image directly from the storage device.
Step 1020, registering the bone image with the treatment plan image. In particular, step 1020 may be performed by the registration unit.
In some embodiments, when the bone image is a two-dimensional image, the treatment planning image may be projected by a digitally reconstructed radiograph (Digitally Reconstructed Radiograph, DRR) technique to obtain a DRR image for registration with the bone image. In some embodiments, when the bone image is a three-dimensional image, the treatment plan image may be registered directly with the bone image. In some embodiments, registration of the bone image with the treatment planning image may be accomplished in a two-dimensional domain or a three-dimensional domain.
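A DRR is a simulated radiograph obtained by integrating the CT attenuation values of the treatment planning image along the imaging direction. A deliberately simplified parallel-projection sketch (actual DRR generation would use the cone-beam geometry of the imaging device):

```python
import numpy as np

def simple_drr(ct_volume: np.ndarray, axis: int) -> np.ndarray:
    """Parallel-beam DRR: integrate attenuation along one axis and
    convert the line integrals to a radiograph-like intensity image
    via Beer-Lambert attenuation."""
    line_integrals = ct_volume.sum(axis=axis)
    return np.exp(-line_integrals)

# Toy CT volume: a dense (bone-like) cube inside air.
vol = np.zeros((16, 16, 16))
vol[6:10, 6:10, 6:10] = 0.5

drr_ap = simple_drr(vol, axis=0)   # anterior-posterior view
drr_lat = simple_drr(vol, axis=1)  # lateral view
# The projection of the cube is darker (more attenuated) than the
# surrounding background.
assert drr_ap[8, 8] < drr_ap[0, 0]
```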
In some embodiments, when the target image includes the first target image and the second target image determined by method 700, registration of the bone image with the treatment plan image may be accomplished in a two-dimensional domain. In some embodiments, the acquiring unit may perform projection processing on the treatment plan image from the first shooting direction and the second shooting direction, respectively, through the digitally reconstructed radiograph technique to acquire a first DRR image and a second DRR image corresponding to the first target image and the second target image, respectively. The registration unit may register the bone image in the first target image with the first DRR image through an image registration algorithm, and may likewise register the bone image in the second target image with the second DRR image. In some embodiments, the image registration algorithm may include a gray-level-based image registration algorithm, a feature-based image registration algorithm, a model-based image registration algorithm, and the like.
In some embodiments, registration of the bone image with the treatment planning image may also be accomplished in a three-dimensional domain when the target image includes the first target image and the second target image determined by method 700. In some embodiments, the registration unit (or acquisition unit) may first image-process the first target image and the second target image to obtain an image including three-dimensional positional information of the target region, and then register the bone image in the image with the treatment plan image.
In some embodiments, when the target image includes the first target image and the second target image determined by the method 800, the acquiring unit may, through the digitally reconstructed radiograph technique, perform projection processing on the treatment plan image from the plurality of first shooting directions and the plurality of second shooting directions to acquire a plurality of first DRR images and a plurality of second DRR images, respectively, and then reconstruct the plurality of first DRR images and the plurality of second DRR images to obtain a first reconstructed DRR image and a second reconstructed DRR image, respectively. The registration unit may register the bone image in the first target image with the first reconstructed DRR image, and the bone image in the second target image with the second reconstructed DRR image. In some embodiments, both of these registrations may be accomplished in a three-dimensional domain.
In some embodiments, when the target image is the target image obtained by the CBCT image reconstruction algorithm in method 800 or method 900, the bone image in the target image may be directly registered with the treatment planning image.
Step 1030, verifying the location information of the target region based on the deboned image and the treatment plan image. In particular, step 1030 may be performed by the verification unit.
In step 1030, the verification unit may compare the deboned image and the treatment plan image to determine location information of the target region. Specifically, whether the current position information of the target region in the deboned image is consistent with the planned position information of the target region in the treatment plan image may be determined by comparing the deboned image and the treatment plan image to verify the accuracy of the position information of the target region.
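The comparison in step 1030 can be reduced to estimating the translation between the deboned image and the corresponding plan (or DRR) image. A minimal sketch using FFT-based phase correlation, one common way to estimate such a shift (the specification does not prescribe this particular algorithm):

```python
import numpy as np

def translation_offset(reference: np.ndarray, moved: np.ndarray):
    """Estimate the integer-pixel shift of `moved` relative to
    `reference` via FFT phase correlation."""
    cross_power = np.conj(np.fft.fft2(reference)) * np.fft.fft2(moved)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    h, w = reference.shape
    # Map wrap-around peak indices to signed shifts.
    return (int(dy - h if dy > h // 2 else dy),
            int(dx - w if dx > w // 2 else dx))

# Toy "plan" and "deboned" images: a bright block shifted by (3, -2).
ref = np.zeros((32, 32))
ref[10:14, 10:14] = 1.0
mov = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
print(translation_offset(ref, mov))  # (3, -2)
```

A deviation of (0, 0) would indicate that the current position information is consistent with the planned position information in this simplified model.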
In some embodiments, when the target image includes a first target image and a second target image determined by the method 700, and registration of the bone image with the treatment plan image is accomplished in a two-dimensional domain, the verification unit may determine whether the current location information of the target region in the deboned image is consistent with the planned location information of the target region in the treatment plan image by comparing the deboned image in the first target image with the first DRR image and comparing the deboned image in the second target image with the second DRR image. Further, a position deviation corresponding to the first shooting direction between the current position information and the planned position information can be obtained by comparing the deboned image in the first target image with the first DRR image, and a position deviation corresponding to the second shooting direction can be obtained by comparing the deboned image in the second target image with the second DRR image; the current position or the motion state of the target area can then be calculated based on these position deviations.
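When the two shooting directions are, for example, orthogonal, the two planar deviations described above can be combined into a three-dimensional offset: each view resolves the two axes perpendicular to its own beam, and the axis the views share can be averaged. A sketch under that 90°-separation assumption (the axis assignments are illustrative, not from the specification):

```python
def combine_planar_deviations(dev_view1, dev_view2):
    """Combine 2D deviations from two orthogonal views into 3D.

    View 1 is assumed to look along y and measure (dx, dz); view 2
    looks along x and measures (dy, dz). The shared z component is
    averaged between the two views.
    """
    dx, dz1 = dev_view1
    dy, dz2 = dev_view2
    return (dx, dy, (dz1 + dz2) / 2.0)

print(combine_planar_deviations((1.0, 0.4), (-0.5, 0.6)))
# (1.0, -0.5, 0.5)
```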
When the target image includes the first target image and the second target image determined by the method 700 and registration of the bone image with the treatment plan image is completed in the three-dimensional domain, the registration unit may compare the deboned image in the image obtained by processing the first target image and the second target image (which includes the three-dimensional position information of the target area) with the treatment plan image to obtain a three-dimensional position deviation between the current position information and the planned position information, and the verification unit may calculate the current position of the target area or determine its movement condition based on the three-dimensional position deviation.
In some embodiments, when the target image includes a first target image and a second target image determined by the method 800, the verification unit may compare the deboned image in the first target image with the first reconstructed DRR image and compare the deboned image in the second target image with the second reconstructed DRR image to determine whether the current location information of the target region in the deboned image is consistent with the planned location information of the target region in the treatment plan image, so as to verify the accuracy of the location information of the target region. Further, by comparing the deboned image in the first target image with the first reconstructed DRR image, and comparing the deboned image in the second target image with the second reconstructed DRR image, a three-dimensional position deviation between the current position information and the planned position information of the target area can be obtained, and the verification unit can calculate the current position of the target area or judge its movement condition based on the three-dimensional position deviation.
In some embodiments, when the target image is the target image obtained by the CBCT image reconstruction algorithm in the method 800 or the method 900, the deboned image in the target image may be directly compared with the treatment plan image to determine whether the current position information of the target region in the deboned image is consistent with the planned position information of the target region in the treatment plan image, so as to verify the accuracy of the position information of the target region. The three-dimensional position deviation between the current position information and the planned position information can be obtained by comparing the deboned image with the treatment plan image, and the verification unit can calculate the current position of the target area or judge its movement condition based on the three-dimensional position deviation.
In some embodiments, the verification unit may acquire a Beam's Eye View (BEV) image of the target region through BEV imaging, and then further determine the accuracy of the position information of the target region through the BEV image, so as to monitor the position of the target region in real time more accurately during the radiotherapy process. In some embodiments, BEV imaging may be performed by an Electronic Portal Imaging Device (EPID) composed of an imaging radiation source and a corresponding detector module in a radiation therapy apparatus (e.g., radiation therapy apparatus 110) in the position monitoring system, wherein the imaging radiation source may emit an MV-grade imaging beam that is received by the detector module to acquire the BEV image.
In some embodiments, during radiation treatment, the verification unit's determination of whether the current position information of the target region in the deboned image is consistent with the planned position information of the target region in the treatment planning image may be used to guide the radiation treatment. Specifically, when the current position information of the target area in the deboned image does not coincide with the planned position information of the target area in the treatment plan image, the radiotherapy apparatus may suspend the radiotherapy, adjust the current position information of the target area by repositioning the patient, and again determine whether the adjusted current position information of the target area coincides with the planned position information; if so, the radiotherapy may be restarted.
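The pause / reposition / re-verify workflow described above can be sketched as a simple control loop; all names and the tolerance below are illustrative stand-ins for the verification and couch-adjustment steps, not part of the specification:

```python
def guided_treatment_loop(measure_deviation, reposition_patient,
                          deliver_fraction, tolerance_mm=2.0,
                          max_attempts=3):
    """Deliver treatment only once the measured target deviation is
    within tolerance, pausing to reposition otherwise."""
    for _ in range(max_attempts):
        if measure_deviation() <= tolerance_mm:
            deliver_fraction()
            return True
        reposition_patient()  # couch correction, then re-verify
    return False  # could not bring the target within tolerance

# Toy run: the deviation shrinks after each repositioning.
deviations = [5.0, 3.1, 1.2]
state = {"i": 0, "delivered": False}
ok = guided_treatment_loop(
    measure_deviation=lambda: deviations[state["i"]],
    reposition_patient=lambda: state.__setitem__("i", state["i"] + 1),
    deliver_fraction=lambda: state.__setitem__("delivered", True))
print(ok, state["delivered"])  # True True
```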
In some embodiments, acquiring one or more first image information and one or more second image information of the target region may be implemented by an imaging device (bulb) emitting the imaging beam from different beam-out foci. In particular, the imaging device may have a first beam-out focal point and a second beam-out focal point. The acquisition module 210 may control a bulb of the imaging device to emit an imaging beam from a first beam exit focus, the imaging beam being received by a detector module of the imaging device to acquire first image information, and control the bulb of the imaging device to emit an imaging beam from a second beam exit focus, the imaging beam being received by the detector module of the imaging device to acquire second image information. Fig. 11 is a schematic diagram of an imaging principle of an imaging apparatus according to some embodiments of the present specification.
FIG. 11 includes a first coordinate system (u, v), a second coordinate system (x_s, y_s, z_s), and a third coordinate system (x, y, z). The first coordinate system is established based on the image coordinate systems of the first image information and the second image information; the origin of the first coordinate system (u, v) may be a vertex (for example, the lower-right vertex) of the detector module, and the directions of the u axis and the v axis run from right to left and from bottom to top along the detector module, respectively. The second coordinate system (x_s, y_s, z_s) takes the bulb of the imaging device as its origin; its x_s axis and y_s axis are consistent with the directions of the u axis and the v axis of the first coordinate system, respectively, and its z_s axis points toward the isocenter of the gantry (e.g., gantry 111). The third coordinate system (x, y, z) is a world coordinate system with its origin at the isocenter of the linear accelerator of the imaging apparatus; when the subject is in the head-first supine position and the couch is not rotated, its x, y, and z axes point in the left, posterior, and head directions of the patient, respectively. The target area T is located in the third coordinate system.
In some embodiments, when acquiring one or more first image information and one or more second image information of the target region is implemented by an imaging device (bulb) emitting imaging beams from different beam-out foci, the position information of the target region may be determined by the method 1200 described below.
Fig. 12 is an exemplary flowchart of a method of determining location information of a target area according to some embodiments of the present description.
As shown in fig. 12, method 1200 may include the steps of:
in step 1210, a first coordinate and a second coordinate of the target region in the first coordinate system are determined based on the first image information and the second image information, respectively. Specifically, step 1210 may be performed by the first determination unit.
In some embodiments, the first coordinate and the second coordinate may refer to the coordinates, in the first coordinate system (u, v), of the positions D and D' of the target region in the first image information and the second image information, respectively.
In step 1210, as shown in fig. 11, the first determining unit may determine, from the first image information and the second image information by an algorithm such as image recognition, the first coordinate and the second coordinate, in the first coordinate system (u, v), of the position D and the position D' of the target area T in the first image information and the second image information, respectively; the first coordinate is (u_T, v_T) and the second coordinate is (u'_T, v'_T).
In step 1220, first projection coordinates and second projection coordinates corresponding to the first coordinates and second coordinates, respectively, in the second coordinate system are determined. Specifically, step 1220 may be performed by the second determination unit.
As shown in connection with fig. 11, the first projection coordinates and the second projection coordinates corresponding to the first coordinate and the second coordinate, respectively, may refer to the coordinates of the positions D and D' in the second coordinate system (x_s, y_s, z_s). In some embodiments, the second determination unit may determine the first projection coordinates and the second projection coordinates of the first coordinate and the second coordinate in the second coordinate system (x_s, y_s, z_s) based on the geometric relationship between the first coordinate system (u, v) and the second coordinate system (x_s, y_s, z_s). Specifically, the geometric relationship includes that the x_s axis and the y_s axis coincide with the directions of the u axis and the v axis of the first coordinate system (u, v), respectively, and that the z_s axis of the second coordinate system (x_s, y_s, z_s) points toward the isocenter of the gantry. Based on this geometric relationship and the first and second coordinates, the first projection coordinates and the second projection coordinates can be calculated as (u_T - l_u/2, v_T - l_v/2, SDD) and (u'_T - l_u/2, v'_T - l_v/2, SDD), where l_u and l_v are respectively the length and the width of the imaging effective area of the detector module, and SDD is the distance from the bulb of the imaging device to the center of the detector module.
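The conversion above from detector pixel coordinates to the second (source) coordinate system is a simple origin shift, since the detector centre projects to (0, 0, SDD); a sketch with illustrative detector dimensions:

```python
def detector_to_source_coords(u, v, l_u, l_v, sdd):
    """Map a detector-plane point (u, v) into the source coordinate
    system: shift the origin from the detector corner to its centre,
    and place the detector plane at distance SDD from the bulb."""
    return (u - l_u / 2.0, v - l_v / 2.0, sdd)

# A point at the exact detector centre maps to (0, 0, SDD).
print(detector_to_source_coords(200.0, 150.0, 400.0, 300.0, 1000.0))
# (0.0, 0.0, 1000.0)
```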
In step 1230, a third coordinate of the target region in the second coordinate system is determined based on the first projection coordinate and the second projection coordinate. Specifically, step 1230 may be performed by the third determination unit.
As shown in connection with fig. 11, the third coordinates are coordinates of the target area T in the second coordinate system.
In step 1230, as shown in fig. 11, since the imaging beam is emitted from the first beam-out focal point S and the second beam-out focal point S' to obtain the first image information and the second image information, respectively, the lines connecting each beam-out focal point to the position of the corresponding target area in the image information pass through the target area T and are not parallel to each other. For example, the line between the first beam-out focal point S and the position D and the line between the second beam-out focal point S' and the position D' both pass through the target area T. Therefore, the third determination unit can solve the simultaneous equations of the two straight lines SD and S'D' in the second coordinate system (x_s, y_s, z_s) to calculate the coordinates of their intersection point, which are the third coordinates of the target area T in the second coordinate system (x_s, y_s, z_s). In some embodiments, the first line equation for SD may be determined based on the first projection coordinates and the projection coordinates of the first beam-out focal point in the second coordinate system, the second line equation for S'D' may be determined based on the second projection coordinates and the projection coordinates of the second beam-out focal point in the second coordinate system (x_s, y_s, z_s), and the third coordinates of the target area in the second coordinate system (x_s, y_s, z_s) may then be determined. The positions of the first beam-out focal point and the second beam-out focal point are predetermined, so the first beam-out focal point projection coordinates and the second beam-out focal point projection coordinates are also known.
In some embodiments, the third coordinates of the target area T in the second coordinate system (x_s, y_s, z_s) may be calculated using a least squares method based on the first and second line equations for SD and S'D'.
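In practice the two rays SD and S'D' rarely intersect exactly because of measurement noise, which is why the least-squares formulation is useful. A sketch of the least-squares intersection (the function name `triangulate` and the midpoint-of-closest-points convention are illustrative choices, not from the source):

```python
import numpy as np

def triangulate(s1, d1, s2, d2):
    """Least-squares 'intersection' of the two rays S->D and S'->D'.
    Solve for parameters (t1, t2) minimising |(s1 + t1*r1) - (s2 + t2*r2)|,
    then return the midpoint of the two closest points."""
    s1, d1, s2, d2 = map(np.asarray, (s1, d1, s2, d2))
    r1, r2 = d1 - s1, d2 - s2                 # ray directions
    A = np.stack([r1, -r2], axis=1)           # 3x2 system A @ [t1, t2] = s2 - s1
    t, *_ = np.linalg.lstsq(A, s2 - s1, rcond=None)
    p1, p2 = s1 + t[0] * r1, s2 + t[1] * r2   # closest point on each ray
    return (p1 + p2) / 2.0

# Two rays that intersect exactly at (0, 0, 5)
T = triangulate([-10, 0, 0], [10, 0, 10], [0, -10, 0], [0, 10, 10])
print(T)   # approximately [0. 0. 5.]
```

Here s1 and s2 play the role of the beam-out focal points S and S', and d1 and d2 the projection coordinates of D and D'.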
In step 1240, the fourth coordinates of the target area in the third coordinate system are determined based on the third coordinates. Specifically, step 1240 may be performed by the fourth position determination unit.
As shown in connection with fig. 11, the fourth coordinate is the coordinate of the target area T in the third coordinate system.
In some embodiments, the fourth position determining unit may convert the third coordinates into the fourth coordinates through the mapping relationship between the second coordinate system (x_s, y_s, z_s) and the third coordinate system (x, y, z). In some embodiments, the mapping relationship between the second coordinate system (x_s, y_s, z_s) and the third coordinate system (x, y, z) may be related to the angle by which the imaging device rotates relative to the target area (i.e., the gantry angle). In some embodiments, the mapping relationship may be represented by a mapping matrix G; that is, the mapping matrix G may vary as the gantry angle varies. Specifically, the third coordinates may be converted into the fourth coordinates by multiplying the mapping matrix G by the third coordinates. The fourth coordinates may be used to represent the position information of the target area.
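The mapping matrix G depends on the machine geometry and is not given in the source. Purely as an illustration, assuming G is a pure rotation about the gantry's rotation axis (taken here, hypothetically, as the y axis):

```python
import numpy as np

def gantry_mapping_matrix(gantry_angle_deg):
    """Illustrative mapping matrix G for a pure rotation about the y axis
    by the gantry angle. A real G would encode the actual machine geometry."""
    a = np.deg2rad(gantry_angle_deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

# Fourth coordinates = G @ third coordinates
third = np.array([10.0, 0.0, 0.0])
fourth = gantry_mapping_matrix(90.0) @ third
print(np.round(fourth, 6))   # [  0.   0. -10.]
```

As the gantry rotates, G is recomputed for the current angle, so the same third coordinates map to different fourth coordinates at different gantry positions.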
According to the position monitoring system provided by the embodiments of this specification, the imaging device emits imaging beams from different beam-out focal points to acquire the first image information and the second image information, and the position information of the target area is then determined based on the first image information and the second image information. Three-dimensional tracking of the target area can thus be realized, reducing the impact on the accuracy of the tracking result. Further, the position information may be used to reflect the motion state of the target region, so that the position monitoring system (e.g., a radiation therapy device) may adjust the relative positional relationship between the therapy beam and the target region according to the motion state, enabling three-dimensional tracking therapy of the target region.
In some embodiments, when there are a plurality of target areas (markers), the imaging device may emit imaging beams from a plurality of beam-out focal points to acquire a plurality of pieces of image information, from which the three-dimensional spatial positions of the plurality of target areas can be calculated, so as to more accurately reflect the motion state of the target areas.
In some embodiments, when the bulb of the imaging device has beam-out focal points in multiple directions, a tomographic image of the target area may be obtained over a limited range of shooting directions by digital tomosynthesis imaging techniques; the acquired tomographic image has a higher tissue contrast.
It should be noted that the above descriptions of methods 500, 600, 700, 800, 900, 1000, and 1200 are for convenience of description only and are not intended to limit the present application to the illustrated embodiments. After understanding the principles of the system, those skilled in the art may make various modifications and changes in the form and details of applying the above methods and systems without departing from these principles. For example, step 710 in method 700, step 810 in method 800, step 910 in method 900, and step 1210 in method 1200 may be different implementations of step 510 in method 500.
Possible beneficial effects of embodiments of the present application include, but are not limited to: (1) According to the position monitoring system provided by the embodiments of this specification, image information of the target area corresponding to different imaging energies is obtained from a plurality of different shooting directions, and a target image highlighting different tissues of the target area can be obtained by performing dual-energy processing and/or image fusion or reconstruction on the image information of different imaging energies, so that the position information of the target area can be conveniently and accurately obtained from the target image, achieving real-time and accurate monitoring of the three-dimensional position of the target area; (2) The target image in the embodiments of this specification may be a bone image or a deboned image, wherein the bone image can be used for image registration to obtain a better registration result, and the deboned image can provide better soft tissue contrast while retaining markers, solving the problem that markers or the target area are difficult to identify or are blocked by bone; (3) According to the position monitoring system provided by the embodiments of this specification, imaging beams are emitted from different beam-out focal points to acquire the first image information and the second image information, and the position information of the target area is then determined based on the first image information and the second image information, so that three-dimensional tracking of the target area can be realized, reducing the impact on the accuracy of the tracking result.
Further, the position information may be used to reflect a motion state of the target region, such that a position monitoring system (e.g., a radiation therapy device) may adjust a relative positional relationship between the therapy beam and the target region according to the motion state, enabling three-dimensional tracking therapy of the target region.
It should be noted that, the advantages that may be generated by different embodiments may be different, and in different embodiments, the advantages that may be generated may be any one or a combination of several of the above, or any other possible advantages that may be obtained.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly described herein, various modifications, improvements, and adaptations of the present application may occur to those skilled in the art. Such modifications, improvements, and adaptations are intended to be suggested by this application, and are therefore within the spirit and scope of the exemplary embodiments of this application.
Meanwhile, the present application uses specific words to describe embodiments of the present application. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Furthermore, the recited order of processing elements or sequences, the use of numbers or letters, or the use of other designations in this application is not intended to limit the order of the processes and methods of this application unless explicitly recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure by way of various examples, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments; on the contrary, the claims are intended to cover all modifications and equivalent arrangements within the spirit and scope of the embodiments of this application. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that in order to simplify the presentation disclosed herein and thereby aid in understanding one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed subject matter may lie in less than all features of a single embodiment disclosed above.
In some embodiments, numbers are used to describe quantities of components and attributes; it should be understood that such numbers used in the description of embodiments are, in some examples, modified by the qualifiers "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for a variation of 20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiment. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general method of retaining digits. Although the numerical ranges and parameters used to confirm the breadth of ranges in some embodiments of the present application are approximations, in specific embodiments such numerical values are set as precisely as practicable.
Each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this application is hereby incorporated by reference in its entirety, except for any application history document that is inconsistent with or conflicts with the content of this application, and except for any document (currently or later appended to this application) that limits the broadest scope of the claims of this application. It should be noted that if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in material accompanying this application and those set forth herein, the descriptions, definitions, and/or use of terms in this application shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of this application. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present application may be considered in keeping with the teachings of the present application. Accordingly, embodiments of the present application are not limited to only the embodiments explicitly described and depicted herein.

Claims (24)

1. A method for monitoring the location of a target area, comprising:
acquiring one or more first image information and one or more second image information of the target area based on a plurality of different shooting directions;
determining position information of the target area based on the one or more first image information and the one or more second image information.
2. The method of claim 1, wherein a first imaging energy corresponding to the one or more first image information and a second imaging energy corresponding to the one or more second image information are different; the determining the location information of the target region based on the one or more first image information and the one or more second image information includes:
Determining a target image of the target region based on the one or more first image information and the one or more second image information;
and determining the position information of the target area based on the target image.
3. The method of claim 2, wherein the acquiring one or more first image information and one or more second image information of the target area based on a plurality of different shooting directions is performed by a plurality of imaging devices; the plurality of imaging devices includes at least a first imaging device and a second imaging device; a connecting line between the first imaging device and the target area and a connecting line between the second imaging device and the target area form a preset included angle;
wherein each of the plurality of imaging devices includes a bulb and a detector module; the bulb is used for emitting an imaging beam with preset imaging energy; the detector module is used for receiving the imaging beam to generate image information.
4. The method of claim 3, wherein the acquiring one or more first image information and one or more second image information of the target area based on a plurality of different shooting directions comprises:
Controlling a bulb tube of the first imaging device and the bulb tube of the second imaging device to emit imaging beams with first imaging energy to the target area from a first shooting direction and a second shooting direction respectively, wherein the imaging beams with the first imaging energy are received by a detector module of the first imaging device and a detector module of the second imaging device so as to acquire first image information corresponding to the first shooting direction and first image information corresponding to the second shooting direction of the target area respectively;
controlling the bulb tubes of the first imaging device and the second imaging device to emit imaging beams with second imaging energy from the first shooting direction and the second shooting direction to the target area, wherein the imaging beams with the second imaging energy are received by the first imaging device and the second imaging device so as to acquire second image information corresponding to the first shooting direction and second image information corresponding to the second shooting direction of the target area.
5. A method according to claim 3, wherein the detector module of the first imaging device has at least a first energy receiving layer and a second energy receiving layer; the detector module of the second imaging device is provided with at least a third energy receiving layer and a fourth energy receiving layer; the acquiring the one or more first image information and the one or more second image information of the target area based on the plurality of different shooting directions includes:
Controlling the first imaging device to emit an imaging beam with third imaging energy from a first shooting direction to the target area, wherein the imaging beam with the third imaging energy is received by the first energy receiving layer and the second energy receiving layer so as to acquire first image information and second image information of the target area corresponding to the first shooting direction respectively;
controlling the second imaging device to emit an imaging beam with third imaging energy from a second shooting direction to the target area, wherein the imaging beam with the third imaging energy is received by the third energy receiving layer and the fourth energy receiving layer so as to acquire first image information and second image information of the target area corresponding to the second shooting direction respectively;
wherein the third imaging energy comprises a first imaging energy and a second imaging energy.
6. The method of claim 4 or 5, wherein the target image comprises a first target image and a second target image; the determining a target image of the target region based on the one or more first image information and the one or more second image information includes:
Determining the first target image corresponding to the first shooting direction based on first image information and second image information corresponding to the first shooting direction; and determining the second target image corresponding to the second shooting direction based on the first image information and the second image information corresponding to the second shooting direction.
7. A method according to claim 3, wherein the first imaging device and the second imaging device are rotatable relative to the target area; the acquiring the one or more first image information and the one or more second image information of the target area based on the plurality of different shooting directions includes:
controlling the first imaging device and the second imaging device to respectively acquire a plurality of first image information corresponding to a plurality of first shooting directions and a plurality of first image information corresponding to a plurality of second shooting directions of the target area from the plurality of first shooting directions and the plurality of second shooting directions respectively in the rotating process;
and controlling the first imaging device and the second imaging device to respectively acquire a plurality of second image information corresponding to the plurality of first shooting directions and a plurality of second image information corresponding to the plurality of second shooting directions of the target area from the plurality of first shooting directions and the plurality of second shooting directions respectively in the rotating process.
8. The method of claim 7, wherein the acquiring one or more first image information and one or more second image information of the target area based on a plurality of different shooting directions comprises:
controlling the bulb tubes of the first imaging device and the second imaging device to emit imaging beams with first imaging energy to the target area from the first shooting directions and the second shooting directions respectively in the rotating process, wherein the imaging beams with the first imaging energy are received by the detector modules of the first imaging device and the second imaging device so as to acquire a plurality of first image information corresponding to the first shooting directions and a plurality of first image information corresponding to the second shooting directions of the target area respectively;
and controlling the bulb tubes of the first imaging device and the second imaging device to emit imaging beams with second imaging energy to the target area from the first shooting directions and the second shooting directions respectively in the rotating process, wherein the imaging beams with the second imaging energy are received by the detector modules of the first imaging device and the second imaging device so as to acquire a plurality of second image information corresponding to the first shooting directions and a plurality of second image information corresponding to the second shooting directions of the target area respectively.
9. The method of claim 7, wherein the flat panel detector module of the first imaging device has at least a first energy receiving layer and a second energy receiving layer; the detector module of the second imaging device is provided with at least a third energy receiving layer and a fourth energy receiving layer; the acquiring the one or more first image information and the one or more second image information of the target area based on the plurality of different shooting directions includes:
controlling a bulb tube of the first imaging device to respectively emit imaging beams with third imaging energy from the plurality of first shooting directions to the target area in a rotating process, wherein the imaging beams with the third imaging energy are received by the first energy receiving layer and the second energy receiving layer so as to respectively acquire a plurality of first image information and a plurality of second image information of the target area corresponding to the plurality of first shooting directions;
controlling a bulb tube of the second imaging device to respectively emit imaging beams with third imaging energy from a plurality of second shooting directions to the target area in a rotating process, wherein the imaging beams with the third imaging energy are received by the third energy receiving layer and the fourth energy receiving layer so as to respectively acquire a plurality of first image information and a plurality of second image information of the target area corresponding to the plurality of second shooting directions;
Wherein the third imaging energy comprises a first imaging energy and a second imaging energy.
10. The method of claim 7, wherein the first imaging device and the second imaging device are rotated a first angle relative to the target area; the target image comprises a first target image and a second target image; the determining a target image of the target region based on the one or more first image information and the one or more second image information includes:
determining a plurality of first target sub-images corresponding to the plurality of first photographing directions based on a plurality of first image information and a plurality of second image information corresponding to the plurality of first photographing directions; and determining a plurality of second target sub-images corresponding to the plurality of second photographing directions based on the plurality of first image information and the plurality of second image information corresponding to the plurality of second photographing directions;
the first target image is determined based on the plurality of first target sub-images corresponding to the plurality of first photographing directions, and the second target image is determined based on the plurality of second target sub-images corresponding to the plurality of second photographing directions.
11. The method of claim 7, wherein the first imaging device and the second imaging device are rotated a second angle relative to the target area; the determining a target image of the target region based on the one or more first image information and the one or more second image information includes:
determining a plurality of first target sub-images corresponding to the plurality of first photographing directions based on a plurality of first image information and a plurality of second image information corresponding to the plurality of first photographing directions, and determining a plurality of second target sub-images corresponding to the plurality of second photographing directions based on a plurality of first image information and a plurality of second image information corresponding to the plurality of second photographing directions;
the target image is determined based on a plurality of first target sub-images corresponding to the plurality of first photographing directions and a plurality of second target sub-images corresponding to the plurality of second photographing directions.
12. A method according to claim 3, wherein the first imaging device and the second imaging device are rotatable relative to the target area; the acquiring the one or more first image information and the one or more second image information of the target area based on the plurality of different shooting directions includes:
Controlling the first imaging device to respectively emit imaging beams with first imaging energy from the plurality of first shooting directions to the target area in the rotating process so as to acquire a plurality of first image information of the target area corresponding to the plurality of first shooting directions;
and controlling the second imaging device to respectively emit imaging beams with second imaging energy from the plurality of second shooting directions to the target area in the rotating process so as to acquire a plurality of second image information of the target area corresponding to the plurality of second shooting directions.
13. The method of claim 12, wherein the determining the target image of the target region based on the one or more first image information and the one or more second image information comprises:
determining a first reconstructed image based on a plurality of first image information corresponding to the plurality of first photographing directions; and determining a second reconstructed image based on a plurality of second image information corresponding to the plurality of second photographing directions;
the target image is determined based on the first reconstructed image and the second reconstructed image.
14. The method of claim 2, wherein the target image comprises a deboned image and a bone image; the determining the location information of the target area based on the target image further includes:
Acquiring a treatment plan image;
registering the bone image with the treatment plan image;
the location information of the target region is verified based on the deboning image and the treatment plan image.
15. The method of claim 14, wherein the deboned image and the bone image are two-dimensional images or three-dimensional images.
16. The method of claim 1, wherein the acquiring one or more first image information and one or more second image information of the target area based on a plurality of different shooting directions is performed by an imaging device having a plurality of beam out focal points, each of the beam out focal points corresponding to one of the plurality of different shooting directions.
17. The method of claim 16, wherein the imaging device has a first beam out focal point and a second beam out focal point; the acquiring the one or more first image information and the one or more second image information of the target area based on the plurality of different shooting directions includes:
controlling a bulb tube of the imaging device to emit an imaging beam from the first beam outlet focus, wherein the imaging beam is received by a detector module of the imaging device so as to acquire first image information;
And controlling a bulb tube of the imaging device to emit an imaging beam from the second beam outlet focus, wherein the imaging beam is received by a detector module of the imaging device so as to acquire second image information.
18. The method of claim 17, wherein the determining location information for the target region based on the first image information and the second image information comprises:
determining a first coordinate and a second coordinate of the target region in a first coordinate system based on the first image information and the second image information respectively;
determining a first projection coordinate and a second projection coordinate which respectively correspond to the first coordinate and the second coordinate in a second coordinate system;
determining a third coordinate of the target region in a second coordinate system based on the first projection coordinate and the second projection coordinate;
fourth coordinates of the target region in a third coordinate system are determined based on the third coordinates.
19. The method of claim 18, wherein determining first and second projection coordinates within a second coordinate system that correspond to the first and second coordinates, respectively, comprises:
and determining a first projection coordinate and a second projection coordinate of the first coordinate and the second coordinate in a second coordinate system respectively based on the geometric relation of the first coordinate system and the second coordinate system.
20. The method of claim 18, wherein determining a third coordinate of the target region within a second coordinate system based on the first projection coordinate and the second projection coordinate comprises:
determining a first linear equation based on first beam-out focal point projection coordinates of the first beam-out focal point in the second coordinate system and the first projection coordinates;
determining a second linear equation based on second beam-out focal point projection coordinates of the second beam-out focal point in the second coordinate system and the second projection coordinates;
the third coordinate of the target region within the second coordinate system is determined based on the first linear equation and the second linear equation.
21. The method of claim 18, wherein the determining a fourth coordinate of the target region within a third coordinate system based on the third coordinate comprises:
and converting the third coordinate into the fourth coordinate based on the mapping relation between the second coordinate system and the third coordinate system.
22. The method of claim 21, wherein the mapping of the second coordinate system and the third coordinate system is related to an angle of rotation of the imaging device relative to the target area.
23. A computer-readable storage medium storing computer instructions that, when read by a computer, cause the computer to perform the method for monitoring the position of a target area according to any one of claims 1-22.
24. A system for monitoring the position of a target area, characterized in that the system comprises at least one processor for performing the method for monitoring the position of a target area according to any of claims 1-22.
CN202111678024.9A 2021-12-31 2021-12-31 Target area position monitoring method, system and storage medium Pending CN116407780A (en)

Publications (1)

Publication Number Publication Date
CN116407780A (en) 2023-07-11

Family

ID=87048436

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111678024.9A Pending CN116407780A (en) 2021-12-31 2021-12-31 Target area position monitoring method, system and storage medium

Country Status (1)

Country Link
CN (1) CN116407780A (en)

Similar Documents

Publication Publication Date Title
US11511132B2 (en) Tumor tracking during radiation treatment using ultrasound imaging
US7453983B2 (en) Radiation therapy method with target detection
US7831073B2 (en) Precision registration of X-ray images to cone-beam CT scan for image-guided radiation treatment
US7720196B2 (en) Target tracking using surface scanner and four-dimensional diagnostic imaging data
US9968321B2 (en) Method and imaging system for determining a reference radiograph for a later use in radiation therapy
US9125570B2 (en) Real-time tomosynthesis guidance for radiation therapy
JP7122003B2 (en) radiotherapy equipment
JP2015029793A (en) Radiotherapy system
US10631778B2 (en) Patient setup using respiratory gated and time resolved image data
JP2010187991A (en) Bed positioning system, radiotherapy system and bed positioning method
CN110381838A (en) Use disposition target Sport Administration between the gradation of the view without view of volume imagery
US10813205B2 (en) Detecting motion by using a low dose x-ray image
JP7311109B2 (en) medical image processing device, medical image processing program, medical device, and treatment system
CN116407780A (en) Target area position monitoring method, system and storage medium
WO2022120707A1 (en) Real-time image guiding method, apparatus and system, and radiation therapy system
WO2022120716A1 (en) Real-time image guided method, apparatus and system, and radiotherapy system
Hsieh et al. A simulated comparison of lung tumor target verification using stereoscopic tomosynthesis or radiography
JP2022069797A (en) Radiation therapy equipment and radiation therapy method
CN117101022A (en) Image guiding method, device, medium and equipment for radiotherapy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination