EP3590577A1 - Object tracking device - Google Patents

Object tracking device

Info

Publication number
EP3590577A1
Authority
EP
European Patent Office
Prior art keywords
image
tracking
tracking object
unit configured
discriminator
Prior art date
Legal status
Granted
Application number
EP18761561.2A
Other languages
German (de)
French (fr)
Other versions
EP3590577B1 (en)
EP3590577A4 (en)
Inventor
Toshiyuki Terunuma
Takeji Sakae
Current Assignee
University of Tsukuba NUC
Original Assignee
University of Tsukuba NUC
Priority date
Filing date
Publication date
Application filed by University of Tsukuba NUC filed Critical University of Tsukuba NUC
Publication of EP3590577A1
Publication of EP3590577A4
Application granted
Publication of EP3590577B1
Legal status: Active
Anticipated expiration

Classifications

    • A61N5/1049: Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
    • A61B34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B6/03: Computed tomography [CT]
    • A61N5/10: X-ray therapy; gamma-ray therapy; particle-irradiation therapy
    • G06F18/2132: Feature extraction by transforming the feature space, based on discrimination criteria, e.g. discriminant analysis
    • G06N3/045: Neural network architectures; combinations of networks
    • G06N3/08: Neural network learning methods
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/32: Image registration using correlation-based methods
    • G06T7/55: Depth or shape recovery from multiple images
    • A61B2034/2065: Tracking using image or pattern recognition
    • A61B2090/376: Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
    • A61B2090/3762: Surgical systems with images on a monitor during operation using computed tomography systems [CT]
    • A61N2005/105: Verifying the position of the patient with respect to the radiation beam using a laser alignment system
    • A61N2005/1059: Verifying the position of the patient with respect to the radiation beam using cameras imaging the patient
    • A61N2005/1061: Verifying the position of the patient with respect to the radiation beam using an X-ray imaging system having a separate imaging source
    • A61N2005/1062: Verifying the position of the patient with respect to the radiation beam using virtual X-ray images, e.g. digitally reconstructed radiographs [DRR]
    • G06T2207/10081: Computed X-ray tomography [CT]
    • G06T2207/10088: Magnetic resonance imaging [MRI]
    • G06T2207/10104: Positron emission tomography [PET]
    • G06T2207/10116: X-ray image
    • G06T2207/10121: Fluoroscopy
    • G06T2207/10132: Ultrasound image
    • G06T2207/30096: Tumor; lesion

Definitions

  • The present invention relates to an object tracking device.
  • In radiation therapy, the target object of treatment (a tumor, etc.) moves with the breathing or pulsation of the subject (patient).
  • It is therefore necessary to capture an X-ray fluoroscopic image in real time, track the target object to specify its position, and irradiate with radiation only when the target object has moved to the irradiation position.
  • Such image-guided object tracking generally uses X-ray fluoroscopic images, but is not limited thereto.
  • Ultrasound images, magnetic resonance imaging (MRI) images, computed tomography (CT) images, and positron emission tomography (PET) images may also be used.
  • The following techniques are known for specifying and tracking the position of the target object.
  • Patent Document 1 (Japanese Patent No. 5610441) describes a technique for compensating, with extracorporeal information, for the deterioration in tracking performance expected in markerless tracking by X-ray fluoroscopy. Specifically, it describes an object tracking technique in which pattern matching against a template image containing the therapeutic target object (such as a cancer lesion) is executed on each frame of an X-ray fluoroscopic image to detect the position of the target during treatment, and therapeutic radiation is delivered when that position falls within a predetermined error range of the planned irradiation position and within a time (phase) window, set based on the movement of the body surface, that is suitable as an irradiation timing, so as to minimize irradiation deviation due to the pseudo-periodic movement caused by breathing and the like.
  • Patent Document 1 also describes performing pattern matching using a non-tracking object different from the therapeutic target, such as the thoracic diaphragm or a bone, as reference position information in order to increase the accuracy of determining the therapeutic target position.
  • Patent Document 2 (Japanese Patent No. 3053389) describes a technique of inserting a metal marker into the patient's body to increase the accuracy of tracking near a tumor with X-ray fluoroscopy. Specifically, a tumor marker embedded near the tumor to be treated is imaged with transmitted X-rays from a plurality of directions, its position is specified in three dimensions, and template matching against a previously registered template image of the tracking object (tumor) marker is performed using a density normalized cross-correlation method.
  • Patent Document 3 (Japanese Patent No. 4505639) describes a technique of specifying the position of a tumor marker by template matching with the density normalized cross-correlation method, as in Patent Document 2, and of acquiring time-series position data in advance so that the current position of the tumor marker is estimated from the previously acquired data.
  • Patent Document 4 (International Publication No. WO 2015/125600) describes a technique of specifying the position of a therapeutic target site in each breathing phase by executing template matching, with templates created for the different breathing phases, on continuously acquired X-ray images.
  • The templates are created by calculating, from three-dimensional X-ray image data including the position of the treatment target site in a reference breathing phase and in a plurality of continuous breathing phases, the deformation of the X-ray images of the treatment target site between the breathing phases, and thereby acquiring the position of the treatment target site in each breathing phase.
  • In these techniques, a template image of the site containing the patient's tumor or lesion (the tracking object site) is created in advance, and when the site is actually irradiated with radiation, the template image and the actually captured image are compared to specify the tracking object site.
  • However, the tracking object site may be hidden by bone (a non-tracking object, i.e., an obstacle), and in some cases the tracking object is easily obscured by bones.
  • In addition, a non-tracking object such as the frame of the bed (couch frame) on which the patient is immobilized, or a fixture, may appear in the X-ray image.
  • Further, when an electronic portal imaging device (EPID) is used, the therapeutic target site may be hidden by the collimator that limits irradiation of normal tissue.
  • When an obstacle such as a bone, couch frame, or collimator is present, tracking may be difficult with a tracking algorithm such as a conventional matching method.
  • In a moving-body tracking irradiation method, the accuracy of template matching can be enhanced by embedding a metal marker in the body to ensure tracking, but this is an invasive method and places a burden on the patient.
  • A method has also been reported in which the non-tracking object (obstacle) portion corresponding to the bone or the couch frame is specified and its image information is subtracted (a bone suppression method).
  • However, the processing speed may be slow because specific image processing of the bone and subtraction of the bone image information are required, making it difficult to track the X-ray irradiation position where real-time performance is required.
  • Moreover, where the tracking object site is hidden by the obstacle, part of the image information of the tracking object site is subtracted together with the obstacle image, so the outer shape of the tracking object site in the image may deviate from the outer shape of the actual tracking object site.
  • The shape of the imaged tracking object changes with the patient's breathing, but in the prior art it may be difficult, when creating the template image, to specify the shape of the portion hidden by an obstacle such as a bone.
  • The captured X-ray image may also differ in bright and dark portions, that is, in so-called contrast, between when the template image is captured and when the radiation is delivered.
  • Furthermore, the tracking object may make a large movement (unexpected motion) beyond the range of the template image.
  • Techniques such as those of Patent Documents 3 and 4 can, when creating the template image, cope only with periodic changes such as breathing.
  • an object tracking device including: a superimposed image creation unit configured to create a plurality of superimposed images in which each of a plurality of non-tracking object images which do not include an image feature of a tracking object is superimposed on a tracking object section image which includes the image feature of the tracking object; a discriminator creation unit configured to learn at least one of an image feature and position information of the tracking object, based on the plurality of superimposed images to create a discriminator; and a tracking object specifying unit configured to specify at least one of the image feature and the position information of the tracking object in a tracked image including the image feature of the tracking object, based on the discriminator and the tracked image.
  • An invention of a second aspect of the present invention is the object tracking device according to the first aspect of the present invention, including: an input unit configured to previously input a teacher image which specifies the image feature of the tracking object; and the discriminator creation unit configured to learn at least one of the image feature and the position information of the tracking object, based on the plurality of superimposed images and the teacher image, to create the discriminator.
  • An invention of a third aspect of the present invention is the object tracking device according to the first or second aspect of the present invention, including a non-tracking object image edition unit configured to derive the number of non-tracking object images, based on a size of the image feature of the tracking object, a resolution of the image, and a preset tracking accuracy, so as to create the non-tracking object image according to the derived number.
  • An invention of a fourth aspect of the present invention is the object tracking device according to any one of the first to third aspects of the present invention, including: an image separation unit configured to separate and extract the tracking object section image which includes the image feature of the tracking object and the separation non-tracking object image which does not include the image feature of the tracking object, from a learning original image including the image feature of the tracking object; a non-tracking object image edition unit configured to edit the separation non-tracking object image to create the plurality of edited non-tracking object images; and the superimposed image creation unit configured to create the superimposed image based on the tracking object section image and the non-tracking object image.
  • An invention of a fifth aspect of the present invention is the object tracking device according to any one of the first to third aspects of the present invention, including: an image separation unit configured to separate and extract the tracking object section image which includes the image feature of the tracking object and the separation non-tracking object image which does not include the image feature of the tracking object, from an original image including the image feature of the tracking object; an obstacle image acquisition unit configured to acquire an image of an obstacle which is not included in the original image and is included in the tracked image; a non-tracking object image edition unit configured to edit at least one of the separation non-tracking object image and the image of the obstacle to create a plurality of edited non-tracking object images; and the superimposed image creation unit configured to create the superimposed image based on the tracking object section image and the non-tracking object image.
  • An invention of a sixth aspect of the present invention is the object tracking device according to the fourth or fifth aspect of the present invention, including the image separation unit configured to separate and extract the tracking object section image and the separation non-tracking object image, from the learning original image including the image feature of the tracking object, based on contrast information of the image feature of the tracking object.
  • An invention of a seventh aspect of the present invention is the object tracking device according to any one of the first to sixth aspects of the present invention, including a tracking accuracy evaluation unit configured to evaluate a tracking accuracy of the tracking object, based on an image for evaluation in which at least one of the image feature and the position information of the tracking object is preset, and the discriminator.
  • An invention of an eighth aspect of the present invention is the object tracking device according to the seventh aspect of the present invention, including the discriminator creation unit configured to increase the number of non-tracking object images to recreate the discriminator, when the tracking accuracy by the tracking accuracy evaluation unit does not reach a preset accuracy.
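The aspects above describe three cooperating units: superimposed image creation, discriminator creation, and tracking object specification. The following is a minimal sketch, for illustration only, of how such a pipeline could be wired together; the function names, the `train_fn` placeholder, and the array handling are assumptions and not the patent's implementation.

```python
# Minimal structural sketch of the three units in the first aspect (assumed names).
import numpy as np

def create_superimposed_images(tracking_section_img, non_tracking_imgs):
    """Superimpose each non-tracking (background/obstacle) image on the
    tracking object section image, yielding one training image per background."""
    return [tracking_section_img + bg for bg in non_tracking_imgs]

def create_discriminator(superimposed_imgs, teacher_masks, train_fn):
    """Learn the image feature / position of the tracking object from the
    superimposed images (optionally together with teacher label images)."""
    return train_fn(np.stack(superimposed_imgs), np.stack(teacher_masks))

def specify_tracking_object(discriminator, tracked_img):
    """Apply the learned discriminator to a tracked image and return the
    predicted region mask and its centre position."""
    mask = discriminator(tracked_img)
    ys, xs = np.nonzero(mask)
    center = (ys.mean(), xs.mean()) if len(xs) else None
    return mask, center
```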
  • At least one of the region and the position of the tracking object site of the treatment may thus be specified more easily than with the prior-art image tracking method based on template matching.
  • When the teacher image is used, the processing may be faster than when it is not used.
  • The position and shape of the image feature region of the tracking object in the teacher image correlate with the position and shape of the image feature region of the tracking object contained in the plurality of superimposed images, whereas there is no such correlation with the position and shape of the image features of obstacles other than the tracking object. The presence or absence of this correlation has the effect of letting the learning distinguish which information is required for tracking.
  • The required and sufficient number of non-tracking object images may be created according to the size and resolution of the image and the tracking system.
  • Learning can reflect the different backgrounds and obstacles of each subject (patient), unlike the case in which the separated non-tracking object image, containing background objects and obstacles to tracking such as bones, is not used. That is, whereas the prior art could only provide discriminators created from a large number of subjects, the present invention enables tracking suited to each individual subject.
  • The accuracy of object tracking may also be improved when an obstacle not contained in the learning original image appears in the tracked image. Conventionally, such an obstacle mixed into the tracked image during tracking significantly reduces the accuracy, whereas the present invention enables tracking suited to the individual tracked image.
  • According to the invention of the sixth aspect, the tracking object and the non-tracking object can be separated automatically based on the contrast information, which facilitates the separation processing compared with specifying the tracking object manually.
  • The discriminator may be evaluated before actual treatment.
  • When the accuracy of the discriminator is insufficient, the discriminator may be recreated using superimposed images whose number has been increased, and the accuracy of the discriminator may thereby be improved.
  • FIG. 1 is a view describing a radiation therapy machine to which an object tracking device of Example 1 of the present invention is applied.
  • A radiation therapy machine 1 to which the object tracking device of Example 1 of the present invention is applied has a bed 3 on which a patient 2, the subject of treatment, lies.
  • a fluoroscopic X-ray irradiation device 4 is disposed above the bed 3.
  • the fluoroscopic X-ray irradiation device 4 is configured to irradiate a patient with X-rays to capture an X-ray fluoroscopic image (a CT image).
  • An imaging device 6 is disposed on a side opposite to the fluoroscopic X-ray irradiation device 4 with the patient 2 interposed therebetween. The imaging device 6 receives an X-ray transmitted through the patient and captures an X-ray fluoroscopic image.
  • An image captured by the imaging device 6 is converted into an electrical signal by an image generator 7, and is input to a control system 8.
  • The fluoroscopic X-ray irradiation device 4, the imaging device 6, and the image generator 7 may have any structure known in the art; it is preferable to employ a configuration that can create a three-dimensional CT image, as described in Patent Documents 2 to 4, for example.
  • a therapeutic radiation irradiator (therapeutic device) 11 is disposed on a side of the bed 3.
  • the therapeutic radiation irradiator 11 is configured to receive a control signal from the control system 8.
  • the therapeutic radiation irradiator 11 is configured to irradiate a preset position (an affected portion of the patient 2) with therapeutic radiation based on the input of the control signal.
  • FIG. 2 is a block diagram illustrating each function provided in the control unit of the radiation therapy machine of Example 1.
  • a control unit C of the control system 8 has an input/output interface I/O which performs input/output of signals with an outside.
  • the control unit C has a read only memory (ROM) in which a program for performing required processing, information, and the like are stored.
  • the control unit C has a random access memory (RAM) for temporarily storing required data.
  • the control unit C includes a central processing unit (CPU) which performs processing according to a program stored in the ROM or the like. Therefore, the control unit C of Example 1 is configured by a small information processing apparatus, a so-called microcomputer. Thereby, the control unit C may realize various functions by executing the program stored in the ROM or the like.
  • The control unit C receives output signals from an operation unit UI, the image generator 7, and signal output elements such as a sensor (not illustrated).
  • the operation unit (a user interface) UI is an example of a display unit and includes a touch panel UI0 as an example of an input unit.
  • the operation unit UI includes various input members such as a button UI1 for starting learning processing, a button UI2 for inputting teacher data, and a button UI3 for starting a treatment.
  • the image generator 7 inputs the CT image captured by the imaging device 6 to the control unit C. Further, the image generator 7 inputs, for example, 15 images (15 frames: 66 [ms/f]) per second.
  • the control unit C is connected to the fluoroscopic X-ray irradiation device 4, the therapeutic radiation irradiator 11, and other control elements (not illustrated).
  • the control unit C outputs control signals to the fluoroscopic X-ray irradiation device 4, the therapeutic radiation irradiator 11 and the like.
  • the fluoroscopic X-ray irradiation device 4 irradiates the patient 2 with X-rays for capturing an X-ray fluoroscopic image during learning or treatment.
  • the therapeutic radiation irradiator 11 irradiates the patient 2 with therapeutic radiation (X-ray) at the time of treatment.
  • the control unit C has functions of executing the processing based on input signals from the signal output elements, and outputting the control signals to each control element. That is, the control unit C has the following functions.
  • FIG. 3 is a view describing an example of processing in the control unit of Example 1.
  • the learning image reading unit C1 reads (reads-in) the CT image input from the image generator 7.
  • The learning image reading unit C1 of Example 1 reads the image input from the image generator 7 when the button UI1 for starting learning processing is pressed. In Example 1, the CT image is read during a preset learning period after the button UI1 is pressed. Furthermore, the learning image reading unit C1 of Example 1 forms a longitudinal-sectional image (not illustrated) from the learning original image (a plurality of cross-sectional images) containing the tracking object image feature 21 (an arrangement pattern and image region of pixel values representing a feature of the tumor that is the tracking object) illustrated in FIG. 3, and then performs the operations described below (a reslicing sketch follows).
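The reslicing step just described, building a longitudinal section from a stack of cross-sectional CT images, can be pictured with the following minimal sketch; the array layout and index are assumptions for illustration, not the patent's code.

```python
# Illustrative sketch only: forming a longitudinal section from axial CT slices.
import numpy as np

def longitudinal_section(ct_volume, column_index):
    """ct_volume: array of shape (n_slices, n_rows, n_cols), i.e. a stack of
    cross-sectional images. Returns the longitudinal section at one column."""
    return ct_volume[:, :, column_index]

# usage: section = longitudinal_section(np.stack(cross_section_images), column_index=256)
```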
  • Each of the units C2 to C10 related to learning executes image separation, editing, superposition, and so on, based on the images read and stored in time sequence by the learning image reading unit C1. In Example 1, the learning processing is therefore not performed in real time with the capture of the CT images; however, if the processing speed is increased, for example by a faster CPU, and real-time processing becomes possible, the processing may also be performed in real time.
  • The image separation unit C2 separates and extracts, from a learning original image 22 containing the tracking object image feature 21, a soft tissue digitally reconstructed radiograph (DRR) image as an example of the tracking object section image 23 containing the tracking object image feature 21, and a bone structure DRR image containing the image feature of the bone structure as an example of a separation background image (non-tracking object image, separation non-tracking object image) 24 that does not contain a region representing the feature of the tracking object.
  • The image separation unit C2 of Example 1 separates the learning original image into the tracking object section image 23 (a first image, the soft tissue DRR image) and the separation background image 24 (the bone structure DRR image) based on the CT value, which is the contrast information of the CT image.
  • In Example 1, as an example, the separation background image 24 is formed from the region having a CT value of 200 or more as the bone structure DRR image, and the tracking object section image 23 is formed from the region having a CT value of less than 200 as the soft tissue DRR image.
  • Example 1 describes the case in which a tumor arising in a lung, that is, the target object of treatment (the tracking object), appears in the tracking object section image 23 as the soft tissue DRR image. When the tracking object is, for example, an abnormal portion of a bone, the bone structure DRR image is instead selected as the tracking object section image 23 and the soft tissue DRR image as the separation background image 24.
  • That is, the tracking object section image and the background image are selected appropriately according to the tracking object and the obstacles contained in the image.
  • The tumor may also be designated manually, for example by marking the tumor region on a screen.
  • A configuration in which the tumor is discriminated automatically, for example by extracting an object that appears in common across a plurality of original images 22, is also possible (a thresholding sketch of the separation step follows).
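As a concrete illustration of the separation described above, the following sketch thresholds a CT volume at a CT value of 200 and forms crude parallel-projection DRRs; the summation-based projection and the function names are assumptions standing in for a proper DRR renderer, not the patent's implementation.

```python
# Illustrative sketch only: CT-value thresholding into soft tissue (< 200) and
# bone structure (>= 200) components, followed by a simple projection.
import numpy as np

def separate_and_project(ct_volume, threshold=200, axis=1):
    """Return (soft_tissue_drr, bone_structure_drr) from a CT volume in CT values."""
    bone = np.where(ct_volume >= threshold, ct_volume, 0)   # separation background 24
    soft = np.where(ct_volume < threshold, ct_volume, 0)    # tracking object section 23
    return soft.sum(axis=axis), bone.sum(axis=axis)         # crude parallel-projection DRRs
```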
  • The obstacle image acquisition unit C3 acquires an obstacle image 25, that is, an image of an obstacle that is not contained in the separation background image 24 but appears when the tracked image 28 is acquired on the therapeutic radiation irradiator 11.
  • As the obstacle image 25, X-ray images of the frame of the bed 3 (couch frame) and of the fixture used to immobilize the patient on the bed 3 are stored in advance in a storage medium, and the obstacle image acquisition unit C3 reads and acquires these stored images of the couch frame or the fixture.
  • the random number generation unit C4 generates random numbers.
  • The background image edition unit C5 edits, as one example, at least one of the position, enlargement/reduction, rotation, and light and darkness of the separation background image 24 or the obstacle image 25 relative to the tracking object section image 23, thereby creating a background image (non-tracking object image) 29.
  • The background image edition unit C5 of Example 1 edits at least one of the position, enlargement/reduction, rotation, and light and darkness based on the random numbers.
  • Specifically, in Example 1, the separation background image 24 or the obstacle image 25 is translated by an affine transformation or transformed linearly (rotation, shearing, enlargement, reduction), and by varying each element of the affine transformation matrix based on the random numbers, the separation background image 24 or the like is edited to create the background image 29.
  • The background image edition unit C5 creates, as an example of a preset number, 100 edited background images for one separation background image 24 or the like; that is, 100 background images 29 are created in which the position and other attributes of the separation background image 24 or the like are randomly edited by the random numbers.
  • The number N of background images 29 to be created is set based on the size of the region of the tracking object image in the tracking object section image 23, the resolution of the image, and the preset tracking accuracy.
  • For example, when the size of the region is 10 cm, the resolution is 1 mm, and the required tracking accuracy is 10 times the resolution, it is possible to create N = {10 (cm) / 1 (mm)} × 10 (times) = 1000 background images 29.
  • The superimposed image creation unit C6 creates superimposed images 26 in which each of the background images 29 (images obtained by editing, e.g. rotating, the separation background image 24 or the obstacle image 25) is superimposed on the tracking object section image 23 (a sketch of this step is given below).
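The random editing and superposition performed by units C4 to C6 can be sketched as follows. This is illustrative only: the parameter ranges, the use of SciPy's affine_transform, and the simple additive superposition are assumptions, not the patent's implementation.

```python
# Illustrative sketch: randomly edit the bone DRR / obstacle image with an
# affine transform and superimpose it on the soft tissue DRR.
import numpy as np
from scipy.ndimage import affine_transform

rng = np.random.default_rng(0)

def derive_n(region_size_mm=100.0, resolution_mm=1.0, accuracy_factor=10):
    # e.g. {100 mm / 1 mm} x 10 = 1000 background images
    return int(region_size_mm / resolution_mm) * accuracy_factor

def random_background(separation_background):
    """Random translation / rotation / scaling / brightness change."""
    angle = np.deg2rad(rng.uniform(-10, 10))
    scale = rng.uniform(0.9, 1.1)
    c, s = np.cos(angle), np.sin(angle)
    matrix = np.array([[c, -s], [s, c]]) / scale            # rotation + scaling
    shift = rng.uniform(-20, 20, size=2)                    # translation in pixels
    edited = affine_transform(separation_background, matrix, offset=shift, order=1)
    return edited * rng.uniform(0.8, 1.2)                   # light and darkness

def make_superimposed_images(tracking_section, separation_background, n):
    """One superimposed training image per randomly edited background."""
    return [tracking_section + random_background(separation_background) for _ in range(n)]

# usage: training_images = make_superimposed_images(soft_drr, bone_drr, derive_n())
```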
  • The teacher image input reception unit C7 receives the input of a teacher image 30 containing a teacher image feature 27, i.e., an image that teaches the object to be tracked, via an input on the touch panel UI0 or on the teacher data input button UI2. In Example 1, the learning original image (CT image) 22 is displayed on the touch panel UI0, and the teacher draws on the screen so as to surround the image feature of the tracking object that is the target of treatment, whereby the teacher image feature 27 is determined.
  • The teacher image adding unit C8 adds (further superimposes) the teacher image 30 containing the input teacher image feature 27 to each superimposed image 26.
  • FIG. 4 shows views describing an example of learning the position of the tracking object: FIG. 4A describes an example of a plurality of superimposed images, and FIG. 4B describes the case in which the superimposed images are overlaid on one another.
  • The learning unit C9 learns at least one of the region information and the position information of the tracking object image feature 21 in the image and creates a discriminator, based on a plurality of learning images 51 in which the teacher image 30 has been added to the plurality of superimposed images 26.
  • In Example 1, both the region and the position of the tracking object image feature 21 are learned.
  • The centre of the region of the tracking object image feature 21 is used here as the position of the tracking object image feature 21, but it may be changed to any position, such as the upper, lower, right, or left end of the region, according to the design or specification.
  • Although any conventionally known configuration may be employed as the learning unit C9, it is preferable to use so-called deep learning (a neural network with a multilayer structure), and in particular a convolutional neural network (CNN).
  • Caffe is used in Example 1 as an example of deep learning, but the invention is not limited thereto, and any learning unit (framework, algorithm, software) may be employed (a minimal network sketch follows).
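As an illustration of this learning step, the sketch below trains a tiny convolutional network to predict a per-pixel tracking-object map from the superimposed images and teacher label images. The patent itself uses Caffe; PyTorch, the architecture, and the hyperparameters here are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: a minimal CNN "discriminator" for the tracking object region.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Predicts a per-pixel tracking-object logit map from a single-channel image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                      # 1-channel logit map
        )

    def forward(self, x):
        return self.net(x)

def train_discriminator(images, teacher_masks, epochs=10):
    """images, teacher_masks: float tensors of shape (N, 1, H, W)."""
    model = TinySegmenter()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()                  # pixel-wise region learning
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), teacher_masks)
        loss.backward()
        opt.step()
    return model                                      # stored as the discriminator (unit C10)
```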
  • the learning result storage unit C10 stores learning results of the learning unit C9. That is, the CNN optimized by learning is stored as a discriminator.
  • the X-ray fluoroscopic image reading unit C11 reads the CT image (a third image) input from the image generator 7.
  • the X-ray fluoroscopic image reading unit C11 of Example 1 reads the tracked image 28 input from the image generator 7 when the button UI3 for starting treatment is input.
  • the object specifying unit C12 specifies the position of the tracking object in the tracked image 28, based on the learning results of the learning unit C9 and the tracked image 28 including the target object which is the tracking object.
  • the object specifying unit C12 of Example 1 specifies the region information and the position information of the tracking object image feature 21 in the tracked image 28 (in this case, the X-ray fluoroscopic image) using the discriminator (CNN) optimized by learning, and outputs the specified region information and position information.
  • The radiation irradiation unit C13 controls the therapeutic radiation irradiator 11 so that the therapeutic X-rays are delivered when the region and position of the tracking object image feature 21 specified by the object specifying unit C12 fall within the radiation range of the therapeutic X-rays (a gating sketch follows).
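The tracking-and-gating loop of units C11 to C13 might look like the following sketch; the 0.5 mask threshold, the centre-deviation criterion, and the tolerance value are assumptions for illustration, not the patent's gating rule.

```python
# Illustrative sketch only: apply the learned discriminator to one fluoroscopic
# frame and decide whether to enable the therapeutic beam.
import torch

def track_and_gate(model, frame, planned_center, tolerance_px=3.0):
    """frame: (H, W) float tensor; planned_center: tensor of shape (2,).
    Returns (mask, center, beam_on) for a single tracked frame."""
    with torch.no_grad():
        logits = model(frame.unsqueeze(0).unsqueeze(0))      # (1, 1, H, W)
        mask = torch.sigmoid(logits)[0, 0] > 0.5
    ys, xs = torch.nonzero(mask, as_tuple=True)
    if len(xs) == 0:
        return mask, None, False                             # object not found: beam off
    center = torch.stack([ys.float().mean(), xs.float().mean()])
    deviation = torch.linalg.norm(center - planned_center)
    return mask, center, bool(deviation <= tolerance_px)     # gate the irradiation
```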
  • The tracking accuracy evaluation unit C14 evaluates the tracking accuracy of the tracking object based on the discriminator and an image for evaluation (test image) in which at least one of the region and the position of the tracking object is known in advance.
  • That is, using as the test image an image in which the region and position of the tracking object are already known, tracking is performed with the discriminator, and the deviation between the region and position specified by the discriminator and the known region and position is evaluated.
  • For example, the ratio (number of outer-edge pixels whose positions coincide) / (total number of outer-edge pixels) is computed between the outer edge of the region specified by the discriminator and the outer edge of the tracking object in the test image, and if the value exceeds a threshold (e.g., 90%), the accuracy of the discriminator can be judged sufficient for specifying the region.
  • The evaluation method is not limited to this; for example, any method may be employed, such as one that derives a correlation coefficient between the outer shapes of the regions (a sketch of the edge-agreement metric follows).
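The outer-edge agreement ratio described above could be computed as in the following sketch; the way the edge is extracted (mask minus its erosion) and the one-pixel tolerance are assumptions for illustration.

```python
# Illustrative sketch only: fraction of predicted edge pixels that coincide
# with the true edge of the tracking object in the test image.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def outer_edge(mask):
    """Outer-edge pixels of a binary (boolean) region mask."""
    return mask & ~binary_erosion(mask)

def edge_agreement(pred_mask, true_mask):
    pred_edge = outer_edge(pred_mask)
    true_edge = binary_dilation(outer_edge(true_mask))    # allow 1-pixel tolerance
    total = pred_edge.sum()
    return float((pred_edge & true_edge).sum()) / total if total else 0.0

# e.g. accuracy sufficient if edge_agreement(pred, truth) > 0.90; otherwise the
# number of edited background images is doubled and the discriminator is recreated.
```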
  • In Example 1, if the tracking accuracy evaluation unit C14 judges the accuracy insufficient, the background image edition unit C5 creates twice the previous number of images (2N), additional superimposed images are created from these edited images, and the learning unit C9 recreates the discriminator using the enlarged set of superimposed images. The accuracy of the recreated discriminator is thereby improved.
  • the object tracking device of Example 1 includes the fluoroscopic X-ray irradiation device 4, the imaging apparatus 6, the image generator 7, the control system 8, and the units C1 to C14.
  • FIG. 5 is a view describing a flowchart of learning processing of Example 1.
  • Processing of each step ST of the flowchart in FIG. 5 is performed according to the program stored in the control unit C of the radiation therapy machine 1, in parallel with the various other processes of the radiation therapy machine 1. The processing for evaluating the tracking accuracy of the created discriminator in response to a user input has a simple flowchart and is therefore neither illustrated nor described in detail.
  • the flowchart illustrated in FIG. 5 is started by an input of the button UI1 for starting learning processing.
  • In ST2, the tracking object section image 23 having a CT value of less than 200 and the separation background image 24 having a CT value of 200 or more are created from the original image 22. The processing then proceeds to ST3.
  • In ST6, it is determined whether the teacher data (teacher image feature 27) has been input. If yes (Y), the processing proceeds to ST7; if no (N), ST6 is repeated.
  • In ST7, the teacher data (teacher image feature 27) is added to each superimposed image 26. The processing then proceeds to ST8.
  • FIG. 6 is a view describing a flowchart of tracking processing of Example 1.
  • Processing of each step ST of the flowchart in FIG. 6 is performed according to the program stored in the control unit C of the radiation therapy machine 1, in parallel with the various other processes of the radiation therapy machine 1. The X-ray irradiation processing irradiates with X-rays when the position of the tracking object reaches the preset irradiation position and does not irradiate otherwise; it is therefore neither illustrated nor described in detail. The conventionally known techniques described in Patent Documents 1 to 4 may also be employed as the X-ray irradiation processing.
  • the flowchart illustrated in FIG. 6 is started by an input of the button UI3 for starting treatment.
  • In ST22, it is determined whether an X-ray fluoroscopic image has been input. If yes (Y), the processing proceeds to ST23; if no (N), ST22 is repeated.
  • In ST23, the position of the tracking object is specified from the input image (the X-ray fluoroscopic image) and the discriminator, and is output. The processing then proceeds to ST24.
  • FIG. 7 is a view describing tracking results of a tumor in the radiation therapy machine of Example 1.
  • In Example 1, the soft tissue DRR image (the tracking object section image 23 containing the tracking object image feature 21) and the bone structure DRR image (the separation background image 24, which does not contain the tracking object) are separated from the original image 22, and the separation background image 24 (bone structure DRR image) is randomly edited (processed) and then superimposed on the tracking object section image 23 (soft tissue DRR image). This makes it easy to clearly recognize the position and shape of the tracking object image feature 21 at the time of learning.
  • Since the image features are learned after being separated from the original image 22 and incorporated into the separation background image 24 (bone structure DRR image), the influence of obstacles such as the couch frame is also suppressed at the time of learning.
  • When the obstacle to be learned is a surgical instrument that enters the body, such as a catheter, the catheter that has entered the body and been photographed can be treated as the obstacle and easily separated from the tracking object. The influence of an obstacle inside the body can thereby also be suppressed when tracking the object in a situation in which an obstacle such as a catheter has entered the body.
  • The position and the outer shape (region) of the tracking object (the target object of treatment) can therefore be specified easily. As illustrated in FIG. 7, even if bones (non-tracking objects, obstacles) appear in the fluoroscopic image, and even if the position or outer shape of the tumor changes between the expiratory, intermediate, and inspiratory phases of the patient's breathing, the tumor can be tracked accurately. That is, not only the position but also the shape of the tumor can be tracked in real time.
  • Moreover, since a discriminator corresponding to the particular patient is created based on the original image 22, that is, on images of that patient, a more accurate discriminator can be created for the patient than with a general-purpose technique (template image) built from the data of many patients.
  • Although individual differences in bone shape are small, soft tissues differ between individuals (for example, in the course of blood vessels or the shape of a tumor), so learning may not be effective even when images of other people are used. That is, when learning is performed at random on the images of many patients, the learning proceeds in a way that ignores the features of each patient, and the accuracy is likely to decrease.
  • In Example 1, the discriminator is created for a particular patient, and the accuracy is thereby improved.
  • FIG. 8 is a view describing experimental results of Example 1.
  • In FIG. 8, an experiment for tracking the tracking object was performed using test data.
  • Experimental Example 1 adopts the configuration of Example 1.
  • In Comparative Example 1, an experiment was performed using only the soft tissue DRR image with a conventional template method.
  • In Comparative Example 2, an experiment was performed using a common DRR image (soft tissue DRR image + bone structure DRR image) with the conventional template method. The experimental results are illustrated in FIG. 8.
  • In Experimental Example 1, the tracking accuracy is improved compared with the conventional cases (Comparative Examples 1 and 2). If one pixel corresponds to 1 mm, the average tracking error in the prior art is 4 mm or more, since the average error is about 4 pixels, whereas in Example 1 the average tracking error is 0.22 pixels, an improvement to about 0.22 mm or less.
  • In Example 1, since the teacher data (teacher image feature 27) is used, learning with sufficient accuracy is possible with about 2000 superimposed images 26. Although learning (deep learning) is possible without teacher data, a much larger number of superimposed images 26 is then required, so both creating the superimposed images 26 and the learning itself take longer than when teacher data is available; in Example 1, because the teacher data (teacher image feature 27) is used, the processing can be completed in a short time. Further, in Example 1, separating the images, editing them based on the random numbers, and creating 2000 superimposed images could be performed in about one minute.
  • In Example 1, specifying (tracking) the position of the target object during X-ray therapy was possible in 25 ms per image, whereas tracking with the bone suppression method described in the background art of the present disclosure took 15 seconds per image. The processing speed during tracking is therefore 600 times higher than in the prior art.
  • In Example 1, the tracking object section image 23 (soft tissue DRR image) and the separation background image 24 (bone structure DRR image) are separated from the original image 22 based on the CT values and are then edited and processed. Learning tailored to the individual patient is therefore possible.
  • In Example 1, when editing the separation background image 24, the light and darkness (contrast) is also edited. Therefore, even if the contrast differs between the capture of the learning original image 22 and the capture of the X-ray fluoroscopic image for treatment, such differences have already been learned, and tracking remains easy. In addition, if learning is performed using images in which not only the contrast of the separation background image 24 but also the overall contrast of the original image 22 is changed, the tracking accuracy may be further improved.
  • In Example 1, the separation background image 24 or the obstacle image 25 is randomly edited. Accurate learning requires a sufficient amount of uniformly distributed data, and preparing such data in advance without random editing would be laborious and time-consuming.
  • In contrast, by randomly editing, with random numbers, the separation background image 24 separated from the original image 22 or the obstacle image 25 added according to the therapeutic radiation irradiator 11, a sufficient amount of uniformly distributed data can be assembled easily.
  • In Example 1, learning is performed on the CT images of a predetermined learning period, that is, on so-called 4DCT, CT images to which an element of time (a fourth dimension) is added in addition to the three spatial dimensions.
  • FIG. 9 shows views describing the range in which the tracking object moves with the lapse of time: FIG. 9A describes the conventional template method, and FIG. 9B describes the learning method of Example 1.
  • In the conventional template method, the overall range in which the tracking object moves is the range photographed in the CT images; that is, learning covers only the range of fluctuation due to breathing.
  • In Example 1, by contrast, the separation background image 24 and the like are randomly edited, so situations in which the relative position of the tracking object with respect to non-tracking objects such as bones and couch frames and with respect to obstacles varies widely are learned.
  • That is, in Example 1, viewed with the bones and the like as the reference, situations in which the tracking object is at various positions (a positional spread) are learned, and this learning is performed for each time point. Therefore, as illustrated in FIG. 9B, the learned range of movement of the tracking object extends beyond the range photographed in the CT images, that is, beyond the range of movement due to breathing. Because situations in which the tracking object is displaced relative to the bones are learned, it is possible to cope with unexpected motion (such as coughing or sneezing) beyond the range of respiratory movement acquired by the 4DCT.
  • In Example 1, the created discriminator is evaluated by the tracking accuracy evaluation unit C14. The accuracy of the discriminator can therefore be evaluated, and its sufficiency confirmed, before treatment with actual irradiation is performed, and if the accuracy is insufficient, the discriminator can be recreated. Treatment can thus be performed with a discriminator of sufficient accuracy, which improves the treatment accuracy and reduces unnecessary radiation exposure of the patient compared with using an insufficient discriminator.
  • FIG. 10 shows views describing tracking of a tumor that is the target object of treatment using an image captured by an EPID: FIG. 10A describes a state in which the tracking object is not hidden by a collimator, and FIG. 10B describes a state in which the tracking object is partially hidden by the collimator.
  • When the device is applied to an EPID, a collimator 42 may be photographed as an obstacle with respect to the tracking object image feature 41. Depending on body movement such as the patient's breathing, the image may alternate between a state in which the tracking object image feature 41 is not hidden by the collimator 42 (FIG. 10A) and a state in which it is partially hidden (FIG. 10B). In such a case, tracking with a conventional template reduces the accuracy of estimating the outer shape of the tracking object image feature 41 while it is hidden. In Example 1, by contrast, the obstacle image 25 such as the collimator is added and learned, so the outer shape of the tracking object image feature 41 can be tracked accurately even in the presence of an obstacle such as the collimator 42.
  • The present invention is not limited to images captured with X-rays; images captured with a magnetic resonance imaging (MRI) apparatus or by ultrasonic examination (echo) may also be used.
  • In the examples, the configuration in which the soft tissue DRR image and the bone structure DRR image are separated according to the CT value has been described as an example.
  • When a CT image is not used, an appropriate threshold may be set for a parameter that can separate, in each image, the image in which the tracking object is photographed from the image in which it is not.
  • In the examples, the tumor is given as an example of the tracking object (important site) in the image, and the bone, couch frame, and the like as examples of unimportant sites, but the present invention is not limited thereto.
  • For example, it may also be configured to track a person by learning the person (important, tracking object) against the background (unimportant, non-tracking object).
  • The configuration tracking both the region and the position of the tracking object has been described as an example, but the invention is not limited thereto; it may be configured to track only the region or only the position, and learning may also be performed to track elements other than the region and position, such as brightness and color.
  • The tumor as the tracking object and the bone as the obstacle have been described as an example, but the invention is not limited thereto.
  • For example, when confirming the posture of the patient at the time of treatment, it is also possible to use the bone as the tracking object and the soft tissue as the obstacle.
  • The DRR image is not limited to the projection angle exemplified in the examples, and projection in any angular direction may be selected.
  • The case of a single tracking object, such as one tumor, has been described as an example, but the invention may also be applied when there are a plurality of tracking objects.
  • the teacher image is a labeled image representing region division, and the pixel value (gradation) is used as a label value.
  • a simple binarized image it may be expressed by one bit (0, 1), but in order to label and distinguish a plurality of regions, gradations of two or more bits are required. Further, for example, it is also possible to individually track by setting one of two tracking objects as a tracking object image feature without a marker and the other as the tracking object image feature in which the marker is embedded.
  • the configuration in which the tracking object device according to the present invention is applied to the radiation therapy machine 1 has been described as an example, but it is not limited thereto.
  • the therapeutic device for crushing gallstones with ultrasonic waves it is possible to apply to any device that requires precise tracking by using for tracking the position of gallstones, or using for tracking a photosensitive substance in photodynamic therapy (PDT), etc.
  • PDT photodynamic therapy

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Robotics (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Radiation-Therapy Devices (AREA)
  • Surgical Instruments (AREA)

Abstract

An object tracking device includes a superimposed image creation unit (C6) configured to create a plurality of superimposed images (26) in which each of a plurality of non-tracking object images which do not include a tracking object image feature is superimposed on a tracking object section image (23) which includes a tracking object image feature (21); a discriminator creation unit (C9) configured to learn at least one of an image feature and position information of the tracking object, based on the plurality of superimposed images (26), to create a discriminator; and a tracking object specifying unit (C12) configured to specify at least one of the image feature and the position information of the tracking object in a tracked image (28) including the respective image features of the tracking object and an obstacle, based on the discriminator and the tracked image (28). Thereby, it is easy to specify a tracking object site without being affected by the obstacle, as compared to an image tracking method by template matching of the prior art.

Description

    [Technical Field]
  • The present invention relates to an object tracking device.
  • [Background Art]
  • When performing radiation therapy or the like, since the position of a target object (tumor, etc.) of treatment moves due to breathing or pulsation of a subject (patient), in order not to irradiate a site (normal site) other than the target object with radiation, it is required to capture an X-ray fluoroscopic image in real time, track the target object and specify its position, and irradiate with radiation only when the target object has moved to an irradiation position. Further, a technique of capturing an image, specifying and tracking the position of the target object, and guiding the radiation to the irradiation position, that is, a so-called image-guided object tracking technique, generally uses the X-ray fluoroscopic image, but it is not limited thereto. For example, in addition to X-ray fluoroscopic images, ultrasound images, magnetic resonance imaging (MRI) images, computed tomography (CT) images, and positron emission tomography (PET) images may also be used.
  • The following techniques are known as a technique for specifying and tracking the position of the target object.
  • Patent Document 1 (Japanese Patent No. 5610441 ) describes a technique of compensating, with extracorporeal information, for the deterioration in tracking performance assumed at the time of markerless tracking by X-ray fluoroscopy. That is, the above patent describes an object tracking technique in which, based on a template image including a therapeutic target object such as a cancer lesion, pattern matching is executed on each frame of an X-ray fluoroscopic image for detecting the position of the therapeutic target object during the treatment, and therapeutic radiation is irradiated when the position of the therapeutic target falls within a predetermined error range from a planned irradiation position and within a time (phase) window, set based on the movement of a body surface, that is suitable as an irradiation timing, in order to minimize the irradiation deviation due to pseudo-periodic movement caused by breathing, etc. Furthermore, Patent Document 1 also describes that, in order to increase the determination accuracy of the therapeutic target position, pattern matching is performed using a non-tracking object different from the therapeutic target, such as the thoracic diaphragm or a bone, as information on a reference position.
  • Patent Document 2 (Japanese Patent No. 3053389 ) describes a technique for inserting a metal marker into a patient's body to increase the tracking accuracy of tracking near a tumor with the X-ray fluoroscopy. That is, the above patent describes a technique of specifying a position of a tumor marker in three dimensions by capturing a tumor marker embedded near the tumor to be the target object of treatment by transmitting X-ray from a plurality of directions, and performing template matching with the template image of the previously registered tracking object (tumor) marker by using a density normalized cross-correlation method.
  • Patent Document 3 (Japanese Patent No. 4505639 ) describes a technique of specifying a position of a tumor marker by performing template matching by using the density normalized cross-correlation method similar to Patent Document 2, and acquiring time-series positional data in advance, such that the position of the tumor marker at the current time is estimated from the previously acquired data.
  • Patent Document 4 (International Publication No. WO 2015/125600 ) describes a technique of specifying a position of a therapeutic target site in each breathing phase by executing template matching created for different breathing phases on continuously acquired X-ray images. According to the technique described in Patent Document 4, the template is created by calculating a deformation amount of X-ray images in a treatment object site between different breathing phases from three-dimensional X-ray image data including the position of the treatment object site in the reference breathing phase and the treatment object sites in a plurality of continuous breathing phases, and acquiring positions of the treatment object site in each breathing phase.
  • [Prior Art Document] [Patent Document]
    • [Patent Document 1] Japanese Patent No. 5610441 (paragraphs "0010," "0013" to "0016," and FIGS. 1 and FIG. 2)
    • [Patent Document 2] Japanese Patent No. 3053389 (paragraphs "0035" to "0046," and FIGS. 1 and 2)
    • [Patent Document 3] Japanese Patent No. 4505639 (paragraphs "0024" to "0029")
    • [Patent Document 4] International Publication No. WO 2015/125600 (paragraph "0010")
    [Summary of Invention] [Problems to be Solved by Invention] (Problems of the prior art)
  • According to the techniques described in Patent Documents 1 to 4, basically, a template image of a site (tracking object site) including a tumor or a lesion of a patient who is a target object of treatment is preliminarily created, and when actually irradiating the site with radiation, the template image and the actually captured image are compared with each other to specify the tracking object site.
  • However, since the X-ray fluoroscopic image is an image obtained by projecting a three-dimensional structure in the body, the tracking object site may be hidden by the bone (a non-tracking object, obstacle). In particular, when many bones are photographed in the X-ray image, the tracking object may be easily hidden by the bones. In addition, when capturing the X-ray image, a non-tracking object (obstacle) such as a bed frame (couch frame) on which a patient is immobilized or a fixture may be photographed in the X-ray image. Further, in X-ray therapy, by imaging the transmitted therapeutic X-ray by means of an electronic portal imaging device (EPID), a therapeutic target object site may be tracked to verify a positional accuracy. In this case, if the positional deviation of the therapeutic target object site is large, the therapeutic target object site may be hidden by a collimator for limiting irradiation to a normal tissue. As such, when creating a template image or capturing an X-ray fluoroscopic image, it may be difficult to see the therapeutic target object site due to the obstacle (such as a bone, couch frame or collimator, etc.). Therefore, tracking may be difficult in a tracking algorithm such as a conventional matching method. In addition, as a moving body tracking irradiation method, it is possible to enhance an accuracy of template matching by embedding the metal marker in the body for ensuring tracking, but it may be an invasive method and may place a burden on the patient.
  • A method of processing for subtracting information on a bone (a bone suppression method) by specifying a non-tracking object (obstacle) portion corresponding to the bone or the couch frame has also been reported.
  • However, in this processing method, a processing speed may be slow because there is a specific image processing on the bone and processing for subtracting bone image information, and it is difficult to track the X-ray irradiation position in which real-time properties are required. In addition, when subtracting information on the obstacle such as a bone, information up to the tracking object site including a part of the image information in the obstacle image due to being hidden by the obstacle, etc. is subtracted, such that an outer shape of the tracking object site on the image may be deviated from the outer shape of the actual tracking object site.
  • In addition, it is known that the shape of the tracking object to be captured changes with the breathing of the patient, but in the prior arts, when creating a template image, it may be difficult to specify the shape of a portion hidden by the obstacle such as a bone.
  • Further, the image of the X-ray to be captured may also have a difference between a bright portion and a dark portion, when capturing the template image and when irradiating radiation, that is, a difference in a so-called contrast. Thereby, even when comparing the template image and the X-ray fluoroscopic image at the time of radiation irradiation, the position of the tracking object may be unclear.
  • In addition, due to an unexpected motion such as coughing or sneezing of the patient, the tracking object may make a large movement (unexpected motion) beyond a range of the template image. However, in the techniques such as Patent Documents 3 and 4, when creating a template image, it may only cope with a periodic change such as breathing.
  • It is a technical object of the present invention to make it easy to specify a tracking object site without being affected by an obstacle, as compared to an image tracking method by template matching of the prior art.
  • [Means for Solving Problems]
  • In order to solve the above technical object, according to a first aspect of the present invention, there is provided an object tracking device including: a superimposed image creation unit configured to create a plurality of superimposed images in which each of a plurality of non-tracking object images which do not include an image feature of a tracking object is superimposed on a tracking object section image which includes the image feature of the tracking object; a discriminator creation unit configured to learn at least one of an image feature and position information of the tracking object, based on the plurality of superimposed images, to create a discriminator; and a tracking object specifying unit configured to specify at least one of the image feature and the position information of the tracking object in a tracked image including the image feature of the tracking object, based on the discriminator and the tracked image.
  • An invention of a second aspect of the present invention is the object tracking device according to the first aspect of the present invention, including: an input unit configured to previously input a teacher image which specifies the image feature of the tracking object; and the discriminator creation unit configured to learn at least one of the image feature and the position information of the tracking object, based on the plurality of superimposed images and the teacher image, to create the discriminator.
  • An invention of a third aspect of the present invention is the object tracking device according to the first or second aspect of the present invention, including a non-tracking object image edition unit configured to derive the number of non-tracking object images, based on a size of the image feature of the tracking object, a resolution of the image, and a preset tracking accuracy, so as to create the non-tracking object image according to the derived number.
  • An invention of a fourth aspect of the present invention is the object tracking device according to any one of the first to third aspects of the present invention, including: an image separation unit configured to separate and extract the tracking object section image which includes the image feature of the tracking object and the separation non-tracking object image which does not include the image feature of the tracking object, from a learning original image including the image feature of the tracking object; a non-tracking object image edition unit configured to edit the separation non-tracking object image to create the plurality of edited non-tracking object images; and the superimposed image creation unit configured to create the superimposed image based on the tracking object section image and the non-tracking object image.
  • An invention of a fifth aspect of the present invention is the object tracking device according to any one of the first to third aspects of the present invention, including: an image separation unit configured to separate and extract the tracking object section image which includes the image feature of the tracking object and the separation non-tracking object image which does not include the image feature of the tracking object, from an original image including the image feature of the tracking object; an obstacle image acquisition unit configured to acquire an image of an obstacle which is not included in the original image and is included in the tracked image; a non-tracking object image edition unit configured to edit at least one of the separation non-tracking object image and the image of the obstacle to create a plurality of edited non-tracking object images; and the superimposed image creation unit configured to create the superimposed image based on the tracking object section image and the non-tracking object image.
  • An invention of a sixth aspect of the present invention is the object tracking device according to the fourth or fifth aspect of the present invention, including the image separation unit configured to separate and extract the tracking object section image and the separation non-tracking object image, from the learning original image including the image feature of the tracking object, based on contrast information of the image feature of the tracking object.
  • An invention of a seventh aspect of the present invention is the object tracking device according to any one of the first to sixth aspects of the present invention, including a tracking accuracy evaluation unit configured to evaluate a tracking accuracy of the tracking object, based on an image for evaluation in which at least one of the image feature and the position information of the tracking object is preset, and the discriminator.
  • An invention of an eighth aspect of the present invention is the object tracking device according to the seventh aspect of the present invention, including the discriminator creation unit configured to increase the number of non-tracking object images to recreate the discriminator, when the tracking accuracy by the tracking accuracy evaluation unit does not reach a preset accuracy.
  • [Advantageous Effects]
  • According to the invention of the first aspect of the present invention, at least one of the region and the position of the tracking object site of the treatment may be easily specified compared to the image tracking method by template matching of the prior art.
  • According to the invention of the second aspect of the present invention, the processing may be speeded up compared to a case in which the teacher image is not used. When there is the teacher image, the position and the shape of the image feature region of the tracking object in the teacher image have a correlation with the position and the shape of the image feature region of the tracking object included in the plurality of superimposed images, but there is no correlation with the position and the shape of the image feature of the obstacle other than the tracking object. The presence or absence of this correlation has the effect that the learning distinguishes whether given information is required for tracking.
  • According to the invention of the third aspect of the present invention, the required and sufficient number of non-tracking object images may be created according to the size and resolution of the image and the required tracking accuracy.
  • According to the invention of the fourth aspect of the present invention, learning may be performed for each subject, such as a patient, reflecting differences in backgrounds and obstacles, as compared to the case in which the separation non-tracking object image, which includes a background object and an obstacle for tracking such as a bone, is not used. That is, in the prior art, only discriminators created based on a large number of subjects may be realized, whereas in the present invention tracking suited to each individual subject may be performed.
  • According to the invention of the fifth aspect of the present invention, by adding the image of the obstacle not included in the original image, even when the obstacle not included in the original image is mixed into the tracked image during tracking the object, the accuracy of object tracking may also be improved. That is, conventionally, when an obstacle not included in the original image of such learning is mixed into the tracked image during tracking the object, the accuracy of object tracking is significantly reduced, but it is possible to perform tracking suitable for the individual tracked image in the present invention as compared to the prior arts.
  • According to the invention of the sixth aspect of the present invention, it is possible to automatically separate the tracking object and the non-tracking object based on the contrast information, and facilitate processing of separation, as compared to the case in which the tracking object is manually specified.
  • According to the invention of the seventh aspect of the present invention, the discriminator may be evaluated before an actual treatment.
  • According to the invention of the eighth aspect of the present invention, when the accuracy of the discriminator is insufficient, the discriminator may be recreated using the superimposed image in which the number of images is increased, and the accuracy of the discriminator may be improved.
  • [Brief Description of Drawings]
    • FIG. 1 is a view describing a radiation therapy machine to which an object tracking device of Example 1 of the present invention is applied.
    • FIG. 2 is a block diagram illustrating each function of a control unit in the radiation therapy machine of Example 1.
    • FIG. 3 is a view describing an example of processing in the control unit of Example 1.
    • FIG. 4 is views describing an example of learning a position of a tracking object, wherein FIG. 4A is a view describing an example of a plurality of superimposed images, and FIG. 4B is a view describing a case in which the superimposed images are superimposed.
    • FIG. 5 is a view describing a flowchart of learning processing of Example 1.
    • FIG. 6 is a view describing a flowchart of tracking processing of Example 1.
    • FIG. 7 is a view describing tracking results of a tumor in the radiation therapy machine of Example 1.
    • FIG. 8 is a view describing experimental results of Example 1.
    • FIG. 9 is views describing a range in which a tracking object moves with the lapse of time, wherein FIG. 9A is a view describing a conventional template method, and FIG. 9B is a view describing a learning method of Example 1.
    • FIG. 10 is views describing when tracking a tumor which is a target object of treatment using an image captured by an EPID, wherein FIG. 10A is a view describing a state in which the tracking object is not hidden by a collimator, and FIG. 10B is a view describing a state in which the tracking object is partially hidden by the collimator.
    [Mode for Carrying out Invention]
  • Hereinafter, examples that are specific examples of the embodiment of the present invention will be described with reference to the drawings, but the present invention is not limited to the following examples.
  • In the following description using the drawings, illustration of members other than those necessary for the description will be appropriately omitted to facilitate understanding.
  • EXAMPLE 1
  • FIG. 1 is a view describing a radiation therapy machine to which an object tracking device of Example 1 of the present invention is applied.
  • In FIG. 1, a radiation therapy machine 1 to which the object tracking device of Example 1 of the present invention is applied has a bed 3 on which a patient 2 who is a subject of treatment sleeps. A fluoroscopic X-ray irradiation device 4 is disposed above the bed 3. The fluoroscopic X-ray irradiation device 4 is configured to irradiate a patient with X-rays to capture an X-ray fluoroscopic image (a CT image). An imaging device 6 is disposed on a side opposite to the fluoroscopic X-ray irradiation device 4 with the patient 2 interposed therebetween. The imaging device 6 receives an X-ray transmitted through the patient and captures an X-ray fluoroscopic image. An image captured by the imaging device 6 is converted into an electrical signal by an image generator 7, and is input to a control system 8. In addition, the fluoroscopic X-ray irradiation device 4, the imaging apparatus 6, and the image generator 7 may employ any structure known in the art, for example, and it is preferable to employ a configuration that can create a three-dimensional CT image as described in Patent Documents 2 to 4, etc.
  • Further, a therapeutic radiation irradiator (therapeutic device) 11 is disposed on a side of the bed 3. The therapeutic radiation irradiator 11 is configured to receive a control signal from the control system 8. The therapeutic radiation irradiator 11 is configured to irradiate a preset position (an affected portion of the patient 2) with therapeutic radiation based on the input of the control signal.
  • (Description of control system (control unit) of Example 1)
  • FIG. 2 is a block diagram illustrating each function provided in the control unit of the radiation therapy machine of Example 1.
  • In FIG. 2, a control unit C of the control system 8 has an input/output interface I/O which performs input/output of signals with an outside. In addition, the control unit C has a read only memory (ROM) in which a program for performing required processing, information, and the like are stored. Further, the control unit C has a random access memory (RAM) for temporarily storing required data. Further, the control unit C includes a central processing unit (CPU) which performs processing according to a program stored in the ROM or the like. Therefore, the control unit C of Example 1 is configured by a small information processing apparatus, a so-called microcomputer. Thereby, the control unit C may realize various functions by executing the program stored in the ROM or the like.
  • (Signal output elements connected to the control unit C)
  • The control unit C receives output signals from an operation unit UI, the image generator 7, and signal output elements such as a sensor (not illustrated).
  • The operation unit (a user interface) UI is an example of a display unit and includes a touch panel UI0 as an example of an input unit. In addition, the operation unit UI includes various input members such as a button UI1 for starting learning processing, a button UI2 for inputting teacher data, and a button UI3 for starting a treatment.
  • The image generator 7 inputs the CT image captured by the imaging device 6 to the control unit C. Further, the image generator 7 inputs, for example, 15 images (15 frames: 66 [ms/f]) per second.
  • (Controlled elements connected to the control unit C)
  • The control unit C is connected to the fluoroscopic X-ray irradiation device 4, the therapeutic radiation irradiator 11, and other control elements (not illustrated). The control unit C outputs control signals to the fluoroscopic X-ray irradiation device 4, the therapeutic radiation irradiator 11 and the like.
  • The fluoroscopic X-ray irradiation device 4 irradiates the patient 2 with X-rays for capturing an X-ray fluoroscopic image during learning or treatment.
  • The therapeutic radiation irradiator 11 irradiates the patient 2 with therapeutic radiation (X-ray) at the time of treatment.
  • (Functions of control unit C)
  • The control unit C has functions of executing the processing based on input signals from the signal output elements, and outputting the control signals to each control element. That is, the control unit C has the following functions.
  • FIG. 3 is a view describing an example of processing in the control unit of Example 1.
  • C1: Learning image reading unit
  • The learning image reading unit C1 reads in the CT image input from the image generator 7. The learning image reading unit C1 of Example 1 reads an image input from the image generator 7 when the button UI1 for starting learning processing is input. Further, in Example 1, reading of the CT image is performed during a preset learning period after the input of the button UI1 for starting learning processing. Furthermore, the learning image reading unit C1 of Example 1 forms a longitudinal-sectional image (not illustrated) from a learning original image (a plurality of cross-sectional images) including a tracking object image feature 21 (an arrangement pattern and an image region of pixel values representing a feature of the tumor which is the tracking object) illustrated in FIG. 3, and then performs the following operations. Thereby, in Example 1, each of the units C2 to C10 related to learning executes image separation, editing, superposition, etc., based on the images read and stored in time sequence by the learning image reading unit C1. That is, in Example 1, learning processing is not performed in real time with the capturing of the CT images; however, the processing speed may be improved by a faster CPU or the like, and if real-time processing becomes possible, the processing may also be performed in real time.
  • C2: Image separation unit
  • The image separation unit C2 separates and extracts a soft tissue digitally reconstructed radiograph (DRR) image as an example of the tracking object section image 23 including the tracking object image feature 21, and a bone structure DRR image including an image feature of a bone structure as an example of a separation background image (non-tracking object image, separation non-tracking object image) 24 which does not include a region representing the feature of tracking object, such as the tracking object image feature 21, based on a learning original image 22 including the tracking object image feature 21. The image separation unit C2 of Example 1 separates the learning original image into the tracking object section image 23 (a first image, soft tissue DRR image) and the separation background image 24 (bone structure DRR image) based on the CT value which is contrast information of the CT image. In Example 1, as an example, the separation background image 24 is formed by a region having a CT value of 200 or more as the bone structure DRR image, and the tracking object section image 23 is formed by a region having a CT value of less than 200 as the soft tissue DRR image.
  • Further, as an example, Example 1 embodies a case in which a tumor (tracking object) generated in a lung, that is, the target object of treatment, is photographed in the tracking object section image 23 as the soft tissue DRR image. However, when the tracking object is, for example, an abnormal portion of a bone, the bone structure DRR image is selected as the tracking object section image 23, and the soft tissue DRR image is selected as the separation background image 24. As such, the tracking object section image and the background image (non-tracking object image) are appropriately selected according to the tracking object and the obstacle to be included in the background image.
  • The tumor (tracking object) may also be designated manually, for example by designating a region of the tumor on a screen. In addition, a configuration in which the tumor is automatically discriminated, for example by automatically extracting an object commonly photographed in a plurality of original images 22, may also be possible.
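  • As a concrete illustration of the CT-value-based separation described above, the following is a minimal sketch (not the patent's implementation), assuming the CT volume is available as a NumPy array of CT values and using the threshold of 200 from Example 1; the DRR projection here is a crude sum along one axis, and the loader name is hypothetical.

```python
import numpy as np

CT_THRESHOLD = 200  # CT value used in Example 1 to split bone from soft tissue

def separate_by_ct_value(ct_volume, threshold=CT_THRESHOLD):
    """Split a CT volume into a soft-tissue part and a bone-structure part.

    Voxels at or above the threshold are treated as bone (source of the
    separation background image); voxels below it as soft tissue (source of
    the tracking object section image). The other class is zeroed so that a
    DRR projected from either part contains only that tissue class.
    """
    bone_mask = ct_volume >= threshold
    soft_tissue_volume = np.where(bone_mask, 0, ct_volume)
    bone_volume = np.where(bone_mask, ct_volume, 0)
    return soft_tissue_volume, bone_volume

def project_drr(volume, axis=1):
    """Very crude DRR: line integral (sum) of the volume along one axis."""
    return volume.sum(axis=axis)

# Example usage (load_ct_volume is a hypothetical loader):
# ct = load_ct_volume("patient.nii")
# soft, bone = separate_by_ct_value(ct)
# tracking_object_section_image = project_drr(soft)   # soft tissue DRR (23)
# separation_background_image = project_drr(bone)     # bone structure DRR (24)
```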
  • C3: Obstacle image acquisition unit
  • The obstacle image acquisition unit C3 acquires an image 25 of an obstacle which is included when acquiring a tracked image 28 in the therapeutic radiation irradiator 11 and is different from the separation background image 24. In Example 1, as an obstacle image 25, X-ray images of the frame (couch frame) of the bed 3 and the fixture for fixing the patient to the bed 3 are previously stored in the storage medium, and the obstacle image acquisition unit C3 reads and acquires the image (obstacle image 25) of the couch frame or the fixture stored in the storage medium.
  • C4: Random number generation unit
  • The random number generation unit C4 generates random numbers.
  • C5: Background image edition unit (non-tracking object image edition unit, non-tracking object image creation unit)
  • The background image edition unit C5 is one example of a unit which edits at least one of the position, enlargement/reduction, rotation, and light and darkness of the separation background image 24 or the obstacle image 25 with respect to the tracking object section image 23, thereby creating a background image (non-tracking object image) 29. The background image edition unit C5 of Example 1 edits at least one of the position, enlargement/reduction, rotation, and light and darkness based on the random numbers. Specifically, in Example 1, the separation background image 24 or the obstacle image 25 is translated (moved in parallel) or subjected to linear transformation (rotation, shearing, enlargement, reduction) by affine transformation, and each value of the affine transformation matrix is changed based on the random numbers, so that the separation background image 24 or the like is edited to create the background image 29.
  • Further, for the light and darkness (contrast), an amount of light and darkness of the separation background image 24, or the like is changed in a light or darkness direction according to the random numbers. In Example 1, the background image edition unit C5 creates 100 edited background images as an example of a preset number, with respect to one separation background image 24 or the like. That is, 100 background images 29, in which the position, etc., of the separation background image 24, or the like is randomly edited by the random numbers, are created.
  • Furthermore, it is preferable that the number N of background images 29 to be created is set based on a size of the region of the image of the tracking object section image 23, the resolution of the image, and the preset tracking accuracy. As an example, when the image of the tracking object section image 23 has a size of 10 cm × 10 cm, the resolution is 1 mm, and the required tracking accuracy is 10 times the resolution, it is possible to create 1000 background images 29, which are obtained by {10 (cm) / 1 (mm)} × 10 (times) = 1000 (sheets).
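  • The derivation of the number N and the random editing described above can be sketched as follows (an illustrative assumption, not the patent's implementation): N follows the 10 cm / 1 mm × 10 = 1000 example, and the affine parameters and contrast factor are drawn from random numbers; the parameter ranges chosen here are assumptions.

```python
import numpy as np
from scipy import ndimage

def number_of_background_images(size_mm, resolution_mm, accuracy_factor):
    """N = (image size / resolution) x required accuracy factor,
    e.g. 100 mm / 1 mm x 10 = 1000."""
    return int((size_mm / resolution_mm) * accuracy_factor)

def random_edit(image, rng):
    """Randomly translate / rotate / shear / scale a separation background
    image or obstacle image and change its light/darkness (contrast).
    The parameter ranges are illustrative assumptions."""
    angle = np.deg2rad(rng.uniform(-10.0, 10.0))
    scale = rng.uniform(0.9, 1.1)
    shear = rng.uniform(-0.05, 0.05)
    shift = rng.uniform(-20.0, 20.0, size=2)   # pixels
    contrast = rng.uniform(0.8, 1.2)           # light/darkness factor

    # Linear part of the affine matrix: rotation * shear, then scaling.
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    shr = np.array([[1.0, shear], [0.0, 1.0]])
    matrix = (rot @ shr) * scale

    # Transform about the image centre, then add the random translation.
    centre = (np.array(image.shape) - 1) / 2.0
    offset = centre - matrix @ centre - shift
    edited = ndimage.affine_transform(image, matrix, offset=offset,
                                      order=1, mode="constant", cval=0.0)
    return edited * contrast

# rng = np.random.default_rng(0)
# n = number_of_background_images(size_mm=100, resolution_mm=1, accuracy_factor=10)  # 1000
# backgrounds = [random_edit(separation_background_image, rng) for _ in range(n)]
```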
  • C6: Superimposed image creation unit
  • The superimposed image creation unit C6 creates a superimposed image 26 in which each of the background images 29 (an image obtained by performing edition such as rotation on the separation background image 24 or the obstacle image 25) is respectively superimposed on the tracking object section image 23.
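  • A minimal sketch of this superposition step follows; pixel-wise addition is used as an assumption, since separately projected tissues combine approximately additively in line-integral (DRR) space, and the exact composition method is not prescribed here.

```python
def create_superimposed_images(tracking_object_section_image, edited_backgrounds):
    """Superimpose each randomly edited background (or obstacle) image on the
    same tracking object section image; simple pixel-wise addition is assumed."""
    return [tracking_object_section_image + bg for bg in edited_backgrounds]

# superimposed_images = create_superimposed_images(tracking_object_section_image, backgrounds)
```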
  • C7: Teacher image input reception unit
  • The teacher image input reception unit C7 receives an input of a teacher image 30 including a teacher image feature 27, as an example of an image for teaching the object to be tracked, according to an input on the touch panel UI0 or an input on the teacher data input button UI2. Further, in Example 1, the learning original image (CT image) 22 is displayed on the touch panel UI0, and the teacher image feature 27 is determined by an input made on the screen so as to surround the image feature of the tracking object which is the target object of treatment.
  • C8: Teacher image adding unit
  • The teacher image adding unit C8 adds (further superimposes) the teacher image 30 including the input teacher image feature 27 to the superimposed image 26.
  • FIG. 4 is views describing an example of learning a position of the tracking object, wherein FIG. 4A is a view describing an example of a plurality of superimposed images, and FIG. 4B is a view describing a case in which the superimposed images are superimposed.
  • C9: Learning unit (discriminator creation unit)
  • The learning unit C9 learns at least one of region information and position information of the tracking object image feature 21 in the image and creates a discriminator, based on a plurality of learning images 51 in which the teacher image 30 is added to the plurality of superimposed images 26. In addition, in Example 1, both the region and the position of the tracking object image feature 21 are learned. Further, in Example 1, a position of a center in the region of the tracking object image feature 21 is exemplified as the position of the tracking object image feature 21, but it may be changed to any position such as an upper end, lower end, right end, or left end of the region according to a design or specification.
  • In FIG. 4, in the learning unit C9 of Example 1, when the plurality of superimposed images (see FIG. 4A), in which obstacle image features 31 whose positions, etc. are randomly edited are superimposed on the tracking object image feature 32, are further overlaid on one another as illustrated in FIG. 4B, the tracking object image feature 32 whose position is not edited is relatively emphasized (amplified), and the obstacle image features 31 whose positions, etc. are random are relatively suppressed (attenuated). Therefore, it is possible to learn the position and the outer shape of the tracking object image feature 32 in the CT image. In addition, although any conventionally known configuration may be employed as the learning unit C9, it is preferable to use so-called deep learning (a neural network having a multilayer structure), and in particular, a convolutional neural network (CNN). Caffe is used in Example 1 as an example of a deep learning framework, but it is not limited thereto, and any learning unit (framework, algorithm, software) may be employed.
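  • The text above names deep learning with a CNN (Caffe is mentioned only as an example framework). The following is a minimal sketch in PyTorch, given as an assumption rather than the patent's architecture: a small convolutional encoder-decoder that maps each superimposed image to a per-pixel label map matching the teacher image, and whose optimized weights play the role of the discriminator.

```python
import torch
import torch.nn as nn

class SmallSegmentationCNN(nn.Module):
    """Tiny encoder-decoder that predicts, for every pixel of a superimposed
    image, whether it belongs to the tracking object region. Layer sizes are
    illustrative assumptions; input height/width must be divisible by 4."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2),  # per-pixel logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_discriminator(superimposed, teacher, epochs=50):
    """superimposed, teacher: float tensors of shape (N, 1, H, W); the teacher
    tensor is the label image (1 inside the tracking object region, 0 outside).
    Returns the optimized CNN, i.e. the 'discriminator'."""
    model = SmallSegmentationCNN()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(superimposed), teacher)
        loss.backward()
        optimiser.step()
    return model
```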
  • C10: Learning result storage unit
  • The learning result storage unit C10 stores learning results of the learning unit C9. That is, the CNN optimized by learning is stored as a discriminator.
  • C11: X-ray fluoroscopic image reading unit
  • The X-ray fluoroscopic image reading unit C11 reads the CT image (a third image) input from the image generator 7. The X-ray fluoroscopic image reading unit C11 of Example 1 reads the tracked image 28 input from the image generator 7 when the button UI3 for starting treatment is input.
  • C12: Object specifying unit
  • The object specifying unit C12 specifies the position of the tracking object in the tracked image 28, based on the learning results of the learning unit C9 and the tracked image 28 including the target object which is the tracking object. The object specifying unit C12 of Example 1 specifies the region information and the position information of the tracking object image feature 21 in the tracked image 28 (in this case, the X-ray fluoroscopic image) using the discriminator (CNN) optimized by learning, and outputs the specified region information and position information.
  • C13: Radiation irradiation unit
  • The radiation irradiation unit C13 controls the therapeutic radiation irradiator 11, such that, when the region and position of the tracking object image feature 21 specified by the object specifying unit C12 are included in a radiation range of therapeutic X-rays, it is irradiated with the therapeutic X-rays.
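  • A small sketch of this gating decision follows (an illustration, not the machine's control code); representing the irradiation range as a simple rectangle is an assumption.

```python
import numpy as np

def should_irradiate(region_mask, position, field):
    """Return True only when the specified tracking object lies inside the
    planned therapeutic X-ray field.

    region_mask: 2-D boolean array of the specified tracking object region.
    position:    (row, col) of the specified tracking object position.
    field:       (row_min, row_max, col_min, col_max) of the irradiation range
                 (a rectangular field is assumed for illustration).
    """
    row_min, row_max, col_min, col_max = field
    rows, cols = np.nonzero(region_mask)
    if rows.size == 0:
        return False
    region_inside = (rows.min() >= row_min and rows.max() <= row_max and
                     cols.min() >= col_min and cols.max() <= col_max)
    position_inside = (row_min <= position[0] <= row_max and
                       col_min <= position[1] <= col_max)
    return region_inside and position_inside
```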
  • C14: Tracking accuracy evaluation unit
  • The tracking accuracy evaluation unit C14 evaluates the tracking accuracy of the tracking object, based on an image for evaluation (test image) in which at least one of the region and the position of the tracking object is preset, and the discriminator. In Example 1, tracking of the tracking object is performed by the discriminator using, as the test image, an image whose region and position of the tracking object are already known, and the deviation between the region and the position of the tracking object specified by the discriminator and the already known region and position is evaluated. In a case of evaluation for the region, for example, a calculation of (the number of pixels whose positions coincide with each other) / (the total number of pixels of the outer edge) is performed on each pixel of the outer edge of the region specified by the discriminator and the pixels of the outer edge of the tracking object in the test image, and if the calculated value exceeds a threshold (e.g., 90%), it is possible to evaluate that the accuracy of the discriminator is sufficient for the specification of the region. Similarly, in a case of evaluation for the position, if the deviation of the position of the tracking object (a center of gravity position) specified by the discriminator with respect to the position of the tracking object in the test image is within a threshold (e.g., 5 pixels), it is possible to evaluate that the accuracy of the discriminator is sufficient for the specification of the position. In addition, the evaluation method is not limited to the above description, and, for example, any evaluation method may be employed, such as a method in which the region is evaluated by deriving a correlation coefficient of the outer shapes.
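  • The region and position criteria described above can be sketched as follows (an assumption about implementation details such as how the outer edge is extracted); the Jaccard coefficient reported in the experimental results further below is included for completeness.

```python
import numpy as np
from scipy import ndimage

def outer_edge(mask):
    """Outer edge of a binary region: the pixels removed by one erosion step
    (one possible way to extract the edge; an assumption)."""
    return mask & ~ndimage.binary_erosion(mask)

def region_accuracy_ok(predicted, truth, threshold=0.90):
    """(number of coinciding outer-edge pixels) / (total outer-edge pixels)."""
    pred_edge, true_edge = outer_edge(predicted), outer_edge(truth)
    total = true_edge.sum()
    if total == 0:
        return False
    return (pred_edge & true_edge).sum() / total >= threshold

def position_accuracy_ok(predicted, truth, max_deviation_px=5.0):
    """Deviation between the centres of gravity of the predicted and true regions."""
    pred_c = np.array(ndimage.center_of_mass(predicted))
    true_c = np.array(ndimage.center_of_mass(truth))
    return float(np.linalg.norm(pred_c - true_c)) <= max_deviation_px

def jaccard(predicted, truth):
    """Jaccard coefficient (intersection over union) of the two regions."""
    union = (predicted | truth).sum()
    return (predicted & truth).sum() / union if union else 0.0
```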
  • Further, in Example 1, if the tracking accuracy evaluation unit C14 evaluates that the accuracy is insufficient, the background image edition unit C5 creates twice the number of images (2N) used up to that time, additional superimposed images are created using the added edited images, and the learning unit C9 then recreates the discriminator using the increased number of superimposed images. Therefore, the accuracy of the recreated discriminator is improved. In addition, it is also possible to automatically increase the number of edited images and continuously recreate the discriminator until the accuracy of the discriminator reaches the preset threshold, and it is also possible to manually increase the number of edited images and recreate the discriminator.
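  • The accuracy-driven recreation loop can be sketched as below (illustrative only): the caller supplies a function that builds a discriminator from n edited background images and a function that evaluates it on the test image, for example assembled from the sketches above.

```python
def create_with_retries(create_discriminator, evaluate, n_initial, max_rounds=5):
    """create_discriminator(n): builds a discriminator from n edited background
    images. evaluate(discriminator): returns True when the tracking accuracy
    on the test image reaches the preset level. While the accuracy is
    insufficient, the number of images is doubled and the discriminator is
    recreated."""
    n = n_initial
    for _ in range(max_rounds):
        discriminator = create_discriminator(n)
        if evaluate(discriminator):
            return discriminator, n
        n *= 2  # insufficient accuracy: double N and recreate
    raise RuntimeError("discriminator accuracy still insufficient after retries")
```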
  • Further, the object tracking device of Example 1 includes the fluoroscopic X-ray irradiation device 4, the imaging apparatus 6, the image generator 7, the control system 8, and the units C1 to C14.
  • (Description of flowchart of Example 1)
  • Next, a flow of control in the radiation therapy machine 1 of Example 1 will be described using a flow diagram, a so-called flowchart.
  • (Description of flowchart of learning processing)
  • FIG. 5 is a view describing a flowchart of learning processing of Example 1.
  • Processing of each step ST of the flowchart in FIG. 5 is performed according to the program stored in the control unit C of the radiation therapy machine 1. In addition, this processing is performed in parallel with other various processings of the radiation therapy machine 1. Further, processing of evaluating the tracking accuracy of the created discriminator according to an input of a user has a simple flowchart, and therefore will not be illustrated and described in detail.
  • The flowchart illustrated in FIG. 5 is started by an input of the button UI1 for starting learning processing.
  • In ST1 of FIG. 5, the CT image (original image 22) is read.
  • Then, the processing proceeds to ST2.
  • In ST2, the tracking object section image 23 having a CT value of less than 200 and the separation background image 24 having a CT value of 200 or more are created from the original image 22. Then, the processing proceeds to ST3.
  • In ST3, a predetermined number N of edited images, in which parallel movement, etc. is edited, are created from the random numbers and the separation background image 24. Then, the processing proceeds to ST4.
  • In ST4, the superimposed image 26 in which the tracking object section image 23 and each edited image are superimposed is created. Then, the processing proceeds to ST5.
  • In ST5, an image for inputting the teacher data (teacher image feature 27) is displayed on the touch panel UI0. Then, the processing proceeds to ST6.
  • In ST6, it is determined whether the teacher data (teacher image feature 27) is input. When it results in yes (Y), the processing proceeds to ST7, and if it results in no (N), ST6 is repeated.
  • In ST7, the teacher data (teacher image feature 27) is added to each superimposed image 26. Then, the processing proceeds to ST8.
  • In ST8, it is determined whether the learning period ends. If it results in no (N), the processing proceeds to ST9, and when it results in yes (Y), the processing proceeds to ST10.
  • In ST9, the next CT image (original image 22) is read. Then, the processing returns to ST2.
  • In ST10, learning is performed by the CNN, and the discriminator (optimized CNN) is output and stored. Then, the learning processing ends.
  • (Description of flowchart of tracking processing)
  • FIG. 6 is a view describing a flowchart of tracking processing of Example 1.
  • Processing of each step ST of the flowchart in FIG. 6 is performed according to the program stored in the control unit C of the radiation therapy machine 1. In addition, this processing is performed in parallel with other various processings of the radiation therapy machine 1. Further, the X-ray irradiation processing is processing that irradiates with X-rays when the position of the tracking object reaches a preset irradiation position, and does not irradiate in the other cases, and therefore will not be illustrated and described in detail. Furthermore, as the X-ray irradiation processing, it is also possible to employ the conventionally known techniques described in Patent Documents 1 to 4.
  • The flowchart illustrated in FIG. 6 is started by an input of the button UI3 for starting treatment.
  • In ST21 of FIG. 6, the discriminator (optimized CNN) is read. Then, the processing proceeds to ST22.
  • In ST22, it is determined whether the X-ray fluoroscopic image is input. When it results in yes (Y), the processing proceeds to ST23, and if it results in no (N), ST22 is repeated.
  • In ST23, the position of the tracking object (target object of treatment) is specified from the input image (X-ray fluoroscopic image) and the discriminator, and is output. Then, the processing proceeds to ST24.
  • In ST24, it is determined whether the tracking processing, that is, the irradiation processing of the therapeutic X-ray ends. If it results in no (N), the processing returns to ST22, and when it results in yes (Y), the tracking processing ends.
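  • Steps ST21 to ST24 can be summarized as the following loop (an illustrative sketch, not the device's control software); the frame source, the inference helper, and the beam trigger are supplied by the caller, and should_irradiate() refers to the gating sketch given earlier.

```python
def run_tracking(discriminator, frames, specify, field, irradiate):
    """discriminator: the optimized CNN read in ST21.
    frames:     iterable of X-ray fluoroscopic images (ST22).
    specify:    callable returning (region_mask, position) for one frame (ST23).
    field:      planned irradiation range for should_irradiate().
    irradiate:  callable that triggers the therapeutic X-ray.
    """
    for frame in frames:                                   # ST22: each fluoroscopic image
        region, position = specify(discriminator, frame)   # ST23: specify and output
        if should_irradiate(region, position, field):      # gate the therapeutic beam
            irradiate()
    # ST24: the loop ends when the treatment (frame stream) ends
```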
  • (Operation of Example 1)
  • FIG. 7 is a view describing tracking results of a tumor in the radiation therapy machine of Example 1.
  • In the radiation therapy machine 1 of Example 1 having the above-described configuration, the soft tissue DRR image as the tracking object section image 23 including the tracking object image feature 21 and the bone structure DRR image as the separation background image 24 which does not include the tracking object are separated from the original image 22, and the separation background image 24 (bone structure DRR image) is randomly edited (processed), and then superimposed on the tracking object section image 23 (soft tissue DRR image). Therefore, it is easy to clearly recognize the position and the shape of the tracking object image feature 21 at the time of learning. When an obstacle such as a couch frame is photographed in the original image 22, the image feature is learned in a state of being separated from the original image 22 and included in the separation background image 24 (bone structure DRR image), such that the influence of the obstacle such as a couch frame is also suppressed at the time of learning. In addition, even when the obstacle to be learned is a surgical instrument, etc. that enters the body such as a catheter, the catheter, etc. that has entered the body and been photographed may be determined as the obstacle, and may be easily separated from the tracking object. Thereby, the influence of the obstacle in the body may also be suppressed when tracking the object in a situation in which the obstacle such as a catheter has entered the body.
  • Therefore, in the radiation therapy machine 1 of Example 1, compared to the method of creating a template image of the prior art, in which the outer shape may become unclear due to the influence of the bone, etc., the position and the outer shape (region) of the tracking object (target object of treatment) may be easily specified. Therefore, as illustrated in FIG. 7, even if the bones (non-tracking object, obstacle) are photographed in the fluoroscopic image, and even if the position or the outer shape of the tumor changes in an expiratory phase, an intermediate phase, or an inspiratory phase with the breathing of the patient, it is possible to accurately track the tumor. That is, not only the position of the tumor but also the shape of the tumor may be tracked in real time.
  • In addition, since a discriminator corresponding to a particular patient is created based on the original image 22, that is, the image of the patient, as compared to a case of using a versatile technique (template image) created based on data of many patients, it is possible to create a highly accurate discriminator according to the patient. Although individual differences in the shapes of bones are small, there are individual differences in the soft tissues (such as blood vessel travel or the shape of tumor), such that learning may not be effectively performed even when using images of other people. That is, when learning is performed using the images of many patients at random, the learning proceeds so that the features of each patient are ignored, and thus the accuracy is likely to be reduced. On the other hand, in Example 1, the discriminator is created for a particular patient, and the accuracy is improved.
  • FIG. 8 is a view describing experimental results of Example 1.
  • In FIG. 8, an experiment for tracking the tracking object was performed using test data. Experimental Example 1 adopts the configuration of Example 1. In Comparative Example 1, an experiment was performed using only the soft tissue DRR image by a conventional template method. In Comparative Example 2, an experiment was performed using a common DRR image (soft tissue DRR image + bone structure DRR image) by the conventional template method. The experimental results are illustrated in FIG. 8.
  • In FIG. 8, in Experimental Example 1, an average of pixels in error in tracking of the tracking object is 0.22 [pixel], a standard deviation is 1.7, a correlation coefficient is 0.997, and a Jaccard coefficient is an average of 0.918 (maximum 0.995, and minimum 0.764), and then the tracking accuracy of the tracking object was high, and good results were obtained. On the other hand, in Comparative Example 1, the average of the pixels in error was 4.4 [pixel], the standard deviation was 10, and the correlation coefficient was 0.113. In Comparative Example 2, the average of the pixels in error was 3.9 [pixel], the standard deviation was 9.1, and the correlation coefficient was 0.256. Therefore, in the configuration of Example 1 (Experimental Example 1), it is confirmed that the tracking accuracy is improved compared to the conventional cases (Comparative Examples 1 and 2). If one pixel is 1 mm, the average tracking error is 4 mm or more in the prior art because the average error is about 4 pixels, but in Example 1, the average tracking error is 0.22 pixels, which indicates an improvement to 0.22 mm or less.
  • Further, in Example 1, since the teacher data (teacher image feature 27) is used, it is possible to perform learning with sufficient accuracy with only about 2000 superimposed images 26 required for learning. In addition, although learning (deep learning) is possible without the teacher data, a much larger number of superimposed images 26 is then required for learning. Thereby, in the absence of the teacher data, the time for creating the superimposed images 26 and the time taken to learn are longer than in the presence of the teacher data, but in Example 1, since the teacher data (teacher image feature 27) is used, it is possible to perform the processing in a short time. Further, in Example 1, separation of the image, editing based on the random numbers, and creation of 2000 superimposed images could be performed in about one minute. In addition, deep learning could be processed in about 90 minutes. Further, in Example 1, processing for specifying (tracking) the position of the target object at the time of X-ray therapy was also possible in 25 ms per image. Furthermore, when the bone suppression method described in the background art of the present disclosure was used for tracking processing, it took 15 seconds per image. Therefore, the processing speed during tracking has been increased by 600 times compared to the prior art.
  • Further, in Example 1, the tracking object section image 23 (soft tissue DRR image) and the separation background image 24 (bone structure DRR image) are separated from the original image 22 based on CT values, then edited and processed. Therefore, it is possible to learn according to the patients.
  • In addition, in Example 1, when editing the separation background image 24, the light and darkness (contrast) is also edited. Therefore, even if the contrasts are different between when the learning original image 22 is captured and when the X-ray fluoroscopic image for treatment is captured, learning is finished and it is easy to track even in a situation in which the contrasts are different. In addition, if learning is performed using an image in which not only the contrast of the separation background image 24 but also the entire contrast of the original image 22 are changed, the tracking accuracy may be further improved.
  • Further, in Example 1, the separation background image 24 or the obstacle image 25 is randomly edited. For accurate learning, it is sufficient to have a sufficient amount of uniformly distributed data, but preparing such data in advance without random editing is troublesome and time-consuming. On the other hand, in Example 1, the separation background image 24 separated from the original image 22, or the obstacle image 25 added according to the therapeutic radiation irradiator 11, is randomly edited by random numbers, and it is possible to easily prepare a sufficient amount of uniformly distributed data.
  • Generally, increasing the number of learning images by luminance conversion, geometric conversion, etc. of the original image is called "data expansion." Data expansion is used to prevent over-learning, that is, to prevent the generalization performance from deteriorating because too many detailed features are learned. However, since these linear transformations can be restored to the original image by a simple transformation, an increase in images of up to several tens of times is the practical limit. That is, the effect of non-linear transformation is essential for data expansion of several hundred times or more. In Example 1, the superimposed images 26 are nonlinearly converted as a whole. Therefore, it is considered that there is no decrease in the generalization performance even with data expansion of several hundred times or more.
  • Further, in Example 1, when performing learning, learning is performed on the CT image of a predetermined learning period. That is, learning is performed on a so-called 4DCT, which is the CT image in which an element of time (a fourth dimension) is also added, in addition to an element of space (three dimensions).
  • Therefore, it is possible to accurately track the position and the outer shape of the tracking object which fluctuates in time due to breathing.
  • FIG. 9 is views describing a range in which the tracking object moves with the lapse of time, wherein FIG. 9A is a view describing the conventional template method, and FIG. 9B is a view describing the learning method of Example 1.
  • In the conventional template method, as illustrated in FIG. 9A, since the positions of the obstacle (bone) and the tracking object in the CT image at each time are learned, the range in which the tracking object moves as a whole is the range photographed in the CT image, that is, learning is performed only in the range which fluctuates with breathing. On the other hand, in Example 1, the separation background image 24 etc. in the CT image at each time is randomly edited, and situations in which the relative position of the tracking object with respect to the non-tracking objects and obstacles such as bones and couch frames is variously different are learned. That is, when viewed on the basis of the bone, etc., learning is performed on situations in which the tracking object is at various positions (situations having a positional width), and this learning is performed for each time. Therefore, as illustrated in FIG. 9B, the range in which the tracking object moves as a whole is learned also in a range exceeding the range photographed in the CT image, that is, a range exceeding the range of movement due to breathing. Therefore, in Example 1, when performing learning, situations in which the tracking object is relatively deviated with respect to the bone, etc. are learned, such that it is possible to cope with unexpected motion (coughing or sneezing) beyond the range of movement due to breathing acquired by the 4DCT.
  • Furthermore, in Example 1, the created discriminator is evaluated by the tracking accuracy evaluation unit C14. Therefore, before the treatment of actually irradiating with radiation is performed, the accuracy of the created discriminator can be evaluated, and whether it is sufficient can be confirmed in advance. If the accuracy is insufficient, the discriminator can be recreated. The treatment can therefore be performed with a discriminator of sufficient accuracy, which improves the treatment accuracy and also reduces the extra radiation exposure of the patient compared with the case of using an insufficient discriminator.
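  • A minimal sketch of such a pre-treatment check is shown below; the Dice coefficient, the retraining callback (train_fn), the probability threshold and the target accuracy are assumptions for illustration, not part of the described method.

    import numpy as np

    def dice(pred_mask, true_mask):
        # Overlap between the discriminated region and the preset evaluation region.
        inter = np.logical_and(pred_mask, true_mask).sum()
        return 2.0 * inter / (pred_mask.sum() + true_mask.sum() + 1e-9)

    def evaluate_and_recreate(discriminator, train_fn, eval_images, eval_masks,
                              target=0.90, n_images=300, max_rounds=3):
        # Loop mirroring the role of the tracking accuracy evaluation unit C14:
        # if the mean accuracy on the evaluation images is below the preset target,
        # recreate the discriminator with more superimposed images before treatment.
        # 'discriminator' is assumed to return a per-pixel probability map.
        for _ in range(max_rounds):
            scores = [dice(discriminator(img) > 0.5, mask)
                      for img, mask in zip(eval_images, eval_masks)]
            mean_score = float(np.mean(scores))
            if mean_score >= target:
                return discriminator, mean_score
            n_images *= 2                       # increase the number of superimposed images
            discriminator = train_fn(n_images)  # recreate the discriminator
        return discriminator, mean_score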
  • FIG. 10 shows views describing tracking of a tumor, which is the target of treatment, using an image captured by an EPID (electronic portal imaging device), wherein FIG. 10A describes a state in which the tracking object is not hidden by a collimator, and FIG. 10B describes a state in which it is partially hidden by the collimator.
  • In FIG. 10, when the invention is applied to the EPID, a collimator 42 may be photographed as an obstacle with respect to the tracking object image feature 41. Depending on body movement such as the patient's breathing, the image may then change between a state in which the tracking object image feature 41 is not hidden by the collimator 42 (FIG. 10A) and a state in which it is partially hidden (FIG. 10B). When tracking is performed with a conventional template in such a case, the accuracy of estimating the outer shape of the tracking object image feature 41 in the hidden state decreases. In Example 1, on the other hand, an obstacle image 25 such as a collimator is added and learned, so the outer shape of the tracking object image feature 41 can be tracked accurately even when an obstacle such as the collimator 42 is present.
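  • The snippet below sketches one way a collimator could be added as an obstacle during learning (the aperture geometry and pixel values are assumptions): the input image is occluded while the teacher mask keeps the full outer shape, so the discriminator is trained to recover the hidden part of the contour.

    import numpy as np

    def add_collimator(image, aperture, leaf_value=0.05):
        # Pixels outside the aperture are blocked by the collimator leaves,
        # as they would appear in an EPID image.
        occluded = image.copy()
        occluded[~aperture] = leaf_value
        return occluded

    image = np.full((128, 128), 0.3); image[55:75, 55:75] = 0.7    # toy tumour region
    teacher = np.zeros((128, 128), dtype=bool); teacher[55:75, 55:75] = True

    aperture = np.zeros((128, 128), dtype=bool)
    aperture[50:100, 30:65] = True             # field edge cutting across the tumour (cf. FIG. 10B)

    occluded_input = add_collimator(image, aperture)
    # 'teacher' is left unchanged, so even the hidden part of the outer shape is learned.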
  • Although the examples of the present invention have been described above in detail, the present invention is not limited to the above-described examples, and various modifications may be made within the scope of the present invention described in the claims. Modified examples (H01) to (H012) of the present invention are described below by way of example.
  • (H01) In the above examples, the configuration using the CT image as the original image, together with the soft tissue DRR image and the bone structure DRR image, has been described as an example, but the present invention is not limited thereto. For example, an ultrasound image, an MRI image, a PET image, etc. may also be used.
  • That is, the present invention is not limited to images captured by X-rays; an image captured by a magnetic resonance imaging (MRI) apparatus or by ultrasonic examination (echo) may also be used.
  • In addition, the configuration in which the soft tissue DRR image and the bone structure DRR image are separated according to the CT value has been described as an example. However, when a CT image is not used, an appropriate threshold may be set for any parameter that can separate, in each image, the part in which the tracking object is photographed from the part in which it is not.
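  • A minimal sketch of such a threshold-based separation follows; the threshold values, the helper name, and the simple additive recombination are assumptions for illustration only.

    import numpy as np

    def separate_by_threshold(image, low, high):
        # Split an image into the part that contains the tracking object and the
        # part that does not, using a separable parameter (CT value, echo
        # intensity, MR signal, ...) that lies between 'low' and 'high' for the
        # tracking object.
        in_band = (image >= low) & (image <= high)
        tracking_section = np.where(in_band, image, 0.0)
        separation_non_tracking = np.where(in_band, 0.0, image)
        return tracking_section, separation_non_tracking

    image = np.random.default_rng(0).random((64, 64))
    section, rest = separate_by_threshold(image, 0.2, 0.6)
    assert np.allclose(section + rest, image)   # the two parts recombine to the original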
  • (H02) In the above examples, the tumor is exemplified as the tracking object (important site) in the image, and the bone, couch frame, and the like are exemplified as unimportant sites, but the present invention is not limited thereto. For example, using a plurality of images captured in the past by a monitoring camera, it may also be configured to track a person by learning the person (important, i.e. the tracking object) against the background (non-important, i.e. the non-tracking object).
  • (H03) In the above examples, it is preferable to use the teacher data (teacher image feature 27), but it may also be configured not to use such data. In this case, by increasing the number of superimposed images, the discriminator can be created by treating the object that appears at almost the same position in the superimposed images as the tumor (important portion).
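  • One way to read this, sketched here as an interpretation rather than a procedure stated in the examples, is that over a large stack of superimposed images only the tracking object stays at the same position, so a simple pixel-wise stability statistic can stand in for the teacher image:

    import numpy as np

    # 'stack' is an (N, H, W) array of superimposed images in which the randomly
    # edited backgrounds vary while the tracking object keeps its position.
    rng = np.random.default_rng(0)
    N, H, W = 500, 64, 64
    stack = rng.random((N, H, W))         # varying background (toy data)
    stack[:, 30:40, 30:40] = 0.8          # tracking object: same position, same value

    # The tracking object is the region that is stable across the stack.
    pseudo_teacher = stack.var(axis=0) < 0.01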
  • (H04) In the above examples, the configuration performing parallel movement, enlargement/reduction, rotation, shearing, and brightness editing has been described as an example, but the editing is not limited thereto. The set of editing operations (parallel movement, enlargement/reduction, and the like) may also be reduced or extended.
  • (H05) In the above examples, it is preferable to employ the configuration in which editing is performed based on the random numbers, but it may also be configured by preparing a previously superimposed image (corresponding to the bone structure DRR image).
  • (H06) In the above examples, the configuration for tracking both the region and the position of the tracking object (tumor) has been described as an example, but it is not limited thereto. For example, it may also be configured to track only the region or only the position. Further, it is also possible to perform learning so as to track elements other than the region and the position, such as brightness and color.
  • (H07) In the above examples, it is preferable to have the function of evaluating the tracking accuracy, but it may also be configured not to have such a function.
  • (H08) In the above examples, the tumor as the tracking object and the bone as the obstacle have been described as an example, but it is not limited thereto. For example, when confirming a posture of the patient at the time of treatment, it is also possible to use the bone as the tracking object and the soft tissue as the obstacle.
  • (H09) In the above examples, the DRR image is not limited to the projection angle exemplified in the examples, and it is possible to select projection in any angular direction.
  • (H010) In the above examples, the configuration in which tracking is performed without using a marker has been described as an example, but it is not limited thereto. The position of a marker may be learned using an X-ray image, etc. in which an embedded marker appears, so that the position and the outer shape of the marker can be accurately discriminated and tracked even when it is hidden by an obstacle.
  • (H011) In the above examples, the case in which there is one tracking object such as a tumor has been described as an example, but the invention may also be applied to a case in which there are a plurality of tracking objects. For example, when an image having a gradation of 2 bits (2² = 4 gradations) or more is used as the teacher image, a plurality of tracking objects can be handled by learning with one teacher image in which the first tracking object and the second tracking object are distinguished by different gradations. That is, in general, the teacher image is a labeled image representing region division, and the pixel value (gradation) is used as the label value. A simple binarized image can be expressed with one bit (0, 1), but labeling and distinguishing a plurality of regions requires gradations of two or more bits. Further, for example, two tracking objects may also be tracked individually by setting one as a tracking object image feature without a marker and the other as a tracking object image feature in which a marker is embedded.
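  • A minimal sketch of such a multi-gradation teacher image is shown below; the label values and region positions are arbitrary placeholders.

    import numpy as np

    # Label image with >= 2 bits of gradation: 0 = background,
    # 1 = first tracking object, 2 = second tracking object.
    teacher = np.zeros((128, 128), dtype=np.uint8)
    teacher[30:50, 30:50] = 1
    teacher[80:100, 70:95] = 2

    # Per-label binary masks, as a region-dividing discriminator would be trained on.
    masks = {int(v): (teacher == v) for v in np.unique(teacher) if v != 0}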
  • (H012) In the above examples, the configuration in which the object tracking device according to the present invention is applied to the radiation therapy machine 1 has been described as an example, but it is not limited thereto. The present invention can be applied to any device that requires precise tracking, for example tracking the position of gallstones in a therapeutic device that crushes gallstones with ultrasonic waves, or tracking a photosensitive substance in photodynamic therapy (PDT).
  • [Description of Reference Numerals]
    21 Tracking object image feature
    22 Original image
    23 Tracking object section image
    24 Separation non-tracking object image
    25 Obstacle image
    26 Superimposed image
    27 Teacher image feature
    28 Tracked image
    29 Non-tracking object image
    30 Teacher image
    31 Obstacle image feature
    32 Tracking object image feature (in teacher image)
    41 Tracking object image feature (in EPID image)
    42 Collimator
    51 Learning image
    C1 Learning image reading unit
    C2 Image separation unit
    C3 Obstacle image acquisition unit
    C5 Non-tracking object image edition unit
    C6 Superimposed image creation unit
    C7 Teacher image input reception unit
    C8 Teacher image adding unit
    C9 Discriminator creation unit
    C12 Object specifying unit
    C14 Tracking accuracy evaluation unit

Claims (8)

  1. An object tracking device comprising:
    a superimposed image creation unit configured to create a plurality of superimposed images in which each of a plurality of non-tracking object images which do not include an image feature of a tracking object is superimposed on a tracking object section image which includes the image feature of the tracking object;
    a discriminator creation unit configured to learn at least one of an image feature and position information of the tracking object, based on the plurality of superimposed images to create a discriminator; and
    a tracking object specifying unit configured to specify at least one of the image feature and the position information of the tracking object in a tracked image including the image feature of the tracking object, based on the discriminator and the tracked image.
  2. The object tracking device according to claim 1, comprising:
    an input unit configured to previously input a teacher image which specifies the image feature of the tracking object; and
    the discriminator creation unit configured to learn at least one of the image feature and the position information of the tracking object, based on the plurality of superimposed images and the teacher image to create a discriminator.
  3. The object tracking device according to claim 1 or 2, comprising a non-tracking object image edition unit configured to derive the number of non-tracking object images, based on a size of the image feature of the tracking object, a resolution of the image, and a preset tracking accuracy, so as to create the non-tracking object image according to the derived number.
  4. The object tracking device according to any one of claims 1 to 3, comprising:
    an image separation unit configured to separate and extract the tracking object section image which includes the image feature of the tracking object and the separation non-tracking object image which does not include the image feature of the tracking object, from a learning original image including the image feature of the tracking object;
    a non-tracking object image edition unit configured to edit the separation non-tracking object image to create the plurality of edited non-tracking object images; and
    the superimposed image creation unit configured to create the superimposed image based on the tracking object section image and the non-tracking object image.
  5. The object tracking device according to any one of claims 1 to 4, comprising:
    an image separation unit configured to separate and extract the tracking object section image which includes the image feature of the tracking object and the separation non-tracking object image which does not include the image feature of the tracking object, from an original image including the image feature of the tracking object;
    an obstacle image acquisition unit configured to acquire an image of an obstacle which is not included in the original image and is included in the tracked image;
    a non-tracking object image edition unit configured to edit at least one of the separation non-tracking object image and the image of the obstacle to create a plurality of edited non-tracking object images; and
    the superimposed image creation unit configured to create the superimposed image based on the tracking object section image and the non-tracking object image.
  6. The object tracking device according to claim 4 or 5, comprising the image separation unit configured to separate and extract the tracking object section image and the separation non-tracking object image, from the learning original image including the image feature of the tracking object, based on contrast information of the image feature of the tracking object.
  7. The object tracking device according to any one of claims 1 to 6, comprising a tracking accuracy evaluation unit configured to evaluate a tracking accuracy of the tracking object, based on an image for evaluation in which at least one of the image feature and the position information of the tracking object is preset, and the discriminator.
  8. The object tracking device according to claim 7, comprising the discriminator creation unit configured to increase the number of non-tracking object images to recreate the discriminator, when the tracking accuracy by the tracking accuracy evaluation unit does not reach a preset accuracy.
EP18761561.2A 2017-03-03 2018-03-01 Object tracking device Active EP3590577B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017041111 2017-03-03
PCT/JP2018/007873 WO2018159775A1 (en) 2017-03-03 2018-03-01 Object tracking device

Publications (3)

Publication Number Publication Date
EP3590577A1 true EP3590577A1 (en) 2020-01-08
EP3590577A4 EP3590577A4 (en) 2020-08-12
EP3590577B1 EP3590577B1 (en) 2023-12-06

Family

ID=63371288

Family Applications (1)

Application Number Title Priority Date Filing Date
EP18761561.2A Active EP3590577B1 (en) 2017-03-03 2018-03-01 Object tracking device

Country Status (4)

Country Link
US (1) US11328434B2 (en)
EP (1) EP3590577B1 (en)
JP (1) JP6984908B2 (en)
WO (1) WO2018159775A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112263787A (en) * 2020-10-30 2021-01-26 福建自贸试验区厦门片区Manteia数据科技有限公司 Radiotherapy control method and device

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11565129B2 (en) * 2017-06-13 2023-01-31 Brainlab Ag Binary tracking of an anatomical tracking structure on medical images
JP7218118B2 (en) 2018-07-31 2023-02-06 キヤノン株式会社 Information processing device, information processing method and program
JP7127785B2 (en) * 2018-11-30 2022-08-30 オリンパス株式会社 Information processing system, endoscope system, trained model, information storage medium, and information processing method
CN111420301A (en) * 2019-01-10 2020-07-17 中国科学院沈阳自动化研究所 Robotized body surface focus area positioning and tracking system
JP2020139842A (en) * 2019-02-28 2020-09-03 ソニー株式会社 Information processing device, information processing method, and information processing system
JP7128135B2 (en) * 2019-03-08 2022-08-30 富士フイルム株式会社 Endoscope image learning device, method and program, Endoscope image recognition device
JP2021010970A (en) * 2019-07-05 2021-02-04 京セラドキュメントソリューションズ株式会社 Robot system and robot control method
JP7252847B2 (en) * 2019-07-08 2023-04-05 株式会社日立製作所 Motion tracking device, radiotherapy system, and method of operating motion tracking device
JP7209595B2 (en) * 2019-07-16 2023-01-20 富士フイルム株式会社 Radiation image processing apparatus, method and program
JP2021041089A (en) * 2019-09-13 2021-03-18 株式会社島津製作所 Medical image processing device, x-ray image processing system and generation method of learning model
JP7226207B2 (en) * 2019-09-13 2023-02-21 株式会社島津製作所 Medical image processing apparatus, X-ray image processing system, and learning model generation method
JP7292721B2 (en) * 2019-10-08 2023-06-19 国立大学法人 筑波大学 Target contour estimator and therapy device
JP7372536B2 (en) * 2019-11-19 2023-11-01 富士通株式会社 Arithmetic program, arithmetic device and arithmetic method
JP7412178B2 (en) * 2020-01-07 2024-01-12 キヤノンメディカルシステムズ株式会社 X-ray diagnostic equipment and medical image processing equipment
JP7394645B2 (en) 2020-02-05 2023-12-08 富士フイルム株式会社 Teacher image generation device, method and program, learning device, method and program, discriminator, and radiation image processing device, method and program
JP2021126501A (en) * 2020-02-13 2021-09-02 富士フイルム株式会社 Radiographic image processing device, method, and program
US11972559B2 (en) 2020-02-13 2024-04-30 Fujifilm Corporation Radiographic image processing device, radiographic image processing method, and radiographic image processing program
WO2021171394A1 (en) * 2020-02-26 2021-09-02 株式会社島津製作所 Learned model creation method, image generation method, and image processing device
WO2021182343A1 (en) * 2020-03-13 2021-09-16 富士フイルム株式会社 Learning data creation device, method, program, learning data, and machine learning device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS54148914A (en) 1978-05-15 1979-11-21 Kawasaki Heavy Ind Ltd Butterfly type speed control valve
JP3053389B1 (en) 1998-12-03 2000-06-19 三菱電機株式会社 Moving object tracking irradiation device
US7545965B2 (en) * 2003-11-10 2009-06-09 The University Of Chicago Image modification and detection using massive training artificial neural networks (MTANN)
US8989349B2 (en) * 2004-09-30 2015-03-24 Accuray, Inc. Dynamic tracking of moving targets
JP4505639B2 (en) 2005-02-24 2010-07-21 国立大学法人北海道大学 Moving body tracking irradiation apparatus and program
JP4126318B2 (en) * 2006-06-23 2008-07-30 三菱重工業株式会社 Radiotherapy apparatus control apparatus and radiotherapy apparatus control method
US8611496B2 (en) * 2008-11-12 2013-12-17 University Of Tsukuba Radiation treatment system
JP4727737B2 (en) 2009-02-24 2011-07-20 三菱重工業株式会社 Radiotherapy apparatus control apparatus and target part position measurement method
EP2604187A4 (en) * 2010-07-14 2016-09-07 Univ Tohoku Signal-processing device, signal-processing program, and computer-readable recording medium with a signal-processing program recorded thereon
CN106029171B (en) 2014-02-24 2019-01-22 国立研究开发法人量子科学技术研究开发机构 Radiation cure moving body track device, radiation cure irradiation area determination device and radiotherapy apparatus
JP2016116659A (en) * 2014-12-19 2016-06-30 株式会社東芝 Medical image processing device, treatment system, medical image processing method, and medical image processing program
JP6747771B2 (en) 2015-01-26 2020-08-26 コニカミノルタ株式会社 Image display device and image display method
JP6760485B2 (en) * 2017-03-31 2020-09-23 日本電気株式会社 Video processing equipment, video analysis systems, methods and programs

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112263787A (en) * 2020-10-30 2021-01-26 福建自贸试验区厦门片区Manteia数据科技有限公司 Radiotherapy control method and device
CN112263787B (en) * 2020-10-30 2021-08-10 福建自贸试验区厦门片区Manteia数据科技有限公司 Radiotherapy control method and device

Also Published As

Publication number Publication date
EP3590577B1 (en) 2023-12-06
JPWO2018159775A1 (en) 2020-03-19
JP6984908B2 (en) 2021-12-22
WO2018159775A1 (en) 2018-09-07
US20200005472A1 (en) 2020-01-02
US11328434B2 (en) 2022-05-10
EP3590577A4 (en) 2020-08-12

Similar Documents

Publication Publication Date Title
EP3590577B1 (en) Object tracking device
JP7181538B2 (en) Medical image processing device and radiotherapy system
CN111956958B (en) Subject positioning device, subject positioning method, recording medium, and radiation therapy system
US9317661B2 (en) Automatic implant detection from image artifacts
EP1664996B1 (en) Systems for gating medical procedures
CN111918697B (en) Medical image processing device, treatment system, and storage medium
EP2479708A1 (en) Systems and methods for tracking moving targets and monitoring object positions
EP3710109B1 (en) Three-dimensional tracking of a target in a body
US20160331463A1 (en) Method for generating a 3d reference computer model of at least one anatomical structure
US8527244B2 (en) Generating model data representing a biological body section
CN107980008B (en) Medical image processing apparatus, treatment system, and medical image processing program
US20180253838A1 (en) Systems and methods for medical imaging of patients with medical implants for use in revision surgery planning
Cresson et al. Coupling 2D/3D registration method and statistical model to perform 3D reconstruction from partial X-rays images data
JP7292721B2 (en) Target contour estimator and therapy device
WO2023068365A1 (en) Target shape estimation device and therapeutic device
JP2000197710A (en) Medical treatment planning system
JP2021104195A (en) CT image diagnosis support device, CT image diagnosis support method, and CT image diagnosis support program
Puhulwelle Gamage 3D Reconstruction of Patient Specific Bone Models for Image Guided Orthopaedic Surgery
Siddique Active Imag Guidance

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20190919

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20200714

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/246 20170101ALI20200708BHEP

Ipc: A61B 34/20 20160101ALI20200708BHEP

Ipc: G06T 7/00 20170101ALI20200708BHEP

Ipc: A61B 6/03 20060101ALI20200708BHEP

Ipc: A61N 5/10 20060101AFI20200708BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20210723

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20230731

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602018062220

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20231206

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231206

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20240307

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231206

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20240306

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1637841

Country of ref document: AT

Kind code of ref document: T

Effective date: 20231206

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231206