WO2021193018A1 - Program, information processing method, information processing device, and model generation method
- Publication number: WO2021193018A1
- Application number: PCT/JP2021/009296
- Authority: WIPO (PCT)
Classifications
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/045—Control of such instruments combined with photographic or television appliances
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/12—Arrangements for detecting or locating foreign bodies
- A61B8/12—Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
- G06T7/00—Image analysis
Definitions
- The present invention relates to a program, an information processing method, an information processing device, and a model generation method.
- Patent Document 1 discloses a therapeutic device selection support system that receives from an operator a designation of the range of a diseased part in an intravascular image, sets the shape of a therapeutic device suitable for the designated range of the diseased part, and searches a database for and displays information on therapeutic devices having the set shape.
- However, Patent Document 1 only searches a database for a therapeutic device suitable for the diseased part within the range specified by the operator, and does not suitably support treatment based on the medical image.
- One aspect is to provide a program or the like that can suitably support treatment based on medical images.
- The program according to one aspect causes a computer to execute a process of acquiring patient information about a patient to be treated and a medical image of the patient's luminal organ, inputting the acquired patient information and medical image into a model trained to output treatment information for supporting the treatment performed on the patient when the patient information and the medical image are input, and outputting the treatment information.
- According to one aspect, treatment based on medical images can be suitably supported.
- FIG. 1 is an explanatory diagram showing a configuration example of a treatment support system.
- A treatment support system will be described that presents treatment information for supporting the treatment of a patient to a user (medical worker), based on patient information about a patient undergoing intravascular treatment and medical images of the patient's blood vessels.
- The treatment support system includes an information processing device 1 and a diagnostic imaging system 2.
- The information processing device 1 and the diagnostic imaging system 2 are communicably connected to a network N such as a LAN (Local Area Network) or the Internet.
- The target luminal organ is not limited to blood vessels, and may be another luminal organ such as the bile duct, pancreatic duct, bronchus, or intestine.
- The diagnostic imaging system 2 includes an intravascular diagnostic imaging device 21, a fluoroscopic image capturing device 22, and a display device 23.
- The intravascular diagnostic imaging device 21 is a device for capturing intravascular tomographic images of the patient, and is, for example, an IVUS (Intravascular Ultrasound) device that performs ultrasonic examination using a catheter 211.
- The catheter 211 is a medical instrument inserted into a blood vessel of the patient, and includes a piezoelectric element that transmits ultrasonic waves and receives the waves reflected from within the blood vessel.
- The intravascular diagnostic imaging apparatus 21 generates an ultrasonic tomographic image based on the signal of the reflected waves received by the catheter 211 and displays it on the display device 23.
- In the present embodiment, the intravascular diagnostic imaging apparatus 21 generates an ultrasonic tomographic image, but it may instead generate, for example, an optical coherence tomographic image (OCT image).
- The fluoroscopic image capturing device 22 is a device for capturing fluoroscopic images that see through the inside of the patient, and is, for example, an angiography device that performs angiographic examinations.
- The fluoroscopic image capturing apparatus 22 includes an X-ray source 221 and an X-ray sensor 222; the X-ray sensor 222 receives X-rays emitted from the X-ray source 221 to capture an X-ray fluoroscopic image of the patient.
- A radiopaque marker is attached to the tip of the catheter 211, so that the position of the catheter 211 is visualized in the fluoroscopic image.
- The fluoroscopic image capturing device 22 displays a fluoroscopic image that visualizes the position of the catheter 211 on the display device 23 and presents it together with the intravascular tomographic image.
- In the present embodiment, ultrasonic tomographic images, optical coherence tomographic images, and angiographic images are given as examples of medical images, but the medical images may also be computed tomography (CT) images, magnetic resonance imaging (MRI) images, or the like.
- The information processing device 1 is an information processing device capable of various kinds of information processing and of transmitting and receiving information, such as a server computer or a personal computer.
- In the present embodiment, the information processing device 1 is assumed to be a server computer and is hereinafter referred to as the server 1 for brevity.
- The server 1 may be a local server installed in the same facility (hospital or the like) as the diagnostic imaging system 2, or may be a cloud server communicably connected via the Internet or the like.
- The server 1 functions as a treatment support device that outputs treatment information for supporting endovascular treatment of the patient, based on patient information about the patient to be treated and medical images (a tomographic image and a fluoroscopic image) of the patient.
- Specifically, the server 1 prepares in advance a learning model 50 (see FIG. 4) that has been trained by machine learning on predetermined training data and that outputs treatment information when patient information and medical images are input.
- the server 1 inputs the patient information and the medical image of the patient to be treated into the learning model 50, and acquires the treatment information from the learning model 50.
- the server 1 outputs the treatment information to the diagnostic imaging system 2 and displays it on the display device 23.
- the treatment information is information for supporting the treatment of the patient, and is information for assisting the user (medical worker) who treats the patient.
- In the present embodiment, endovascular treatment is taken as an example, and information for supporting PCI (Percutaneous Coronary Intervention) using the catheter 211 is output as the treatment information.
- The treatment information includes, for example, the position and properties of lesions such as plaque, the therapeutic devices such as stents and balloons to be used, the dose of contrast medium and the imaging time recommended for fluoroscopic imaging, and post-treatment progress information (e.g., the risk of complications).
- The server 1 estimates this information using the learning model 50 and displays it on the display device 23. Further, the server 1 detects objects such as lesions and therapeutic devices from the medical image, generates a second medical image processed so that the detected objects can be identified, and displays it on the display device 23.
- the server 1 performs the treatment information generation process using the learning model 50.
- Alternatively, the learning model 50 constructed by the server 1 may be installed in the diagnostic imaging system 2 so that the treatment information generation process is executed locally.
- FIG. 2 is a block diagram showing a configuration example of the server 1.
- the server 1 includes a control unit 11, a main storage unit 12, a communication unit 13, and an auxiliary storage unit 14.
- The control unit 11 has one or more arithmetic processing devices such as CPUs (Central Processing Units), GPUs (Graphics Processing Units), and AI chips (AI semiconductors), and performs various information processing by reading and executing the program P stored in the auxiliary storage unit 14.
- The main storage unit 12 is a temporary storage area such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory, and temporarily stores data necessary for the control unit 11 to execute arithmetic processing.
- the communication unit 13 is a communication module for performing processing related to communication, and transmits / receives information to / from the outside.
- the auxiliary storage unit 14 is a non-volatile storage area such as a large-capacity memory or a hard disk, and stores a program P and other data necessary for the control unit 11 to execute processing.
- the auxiliary storage unit 14 stores the learning model 50 and the medical care DB 141.
- the learning model 50 is a machine learning model in which training data has been trained as described above, and is a model that outputs treatment information for supporting treatment of a patient by inputting patient information and a medical image.
- the learning model 50 is expected to be used as a program module constituting artificial intelligence software.
- the medical care DB 141 is a database that stores medical care data of patients.
- the auxiliary storage unit 14 may be an external storage device connected to the server 1. Further, the server 1 may be a multi-computer composed of a plurality of computers, or may be a virtual machine virtually constructed by software.
- The server 1 is not limited to the above configuration, and may include, for example, an input unit that accepts operation input, a display unit that displays images, and the like. Further, the server 1 may include a reading unit for reading a portable storage medium 1a such as a CD (Compact Disc), a DVD (Digital Versatile Disc), a USB (Universal Serial Bus) memory, or an external hard disk, and may read and execute the program P from the portable storage medium 1a. Alternatively, the server 1 may read the program P from a semiconductor memory 1b.
- FIG. 3 is an explanatory diagram showing an example of the record layout of the medical care DB 141.
- The medical treatment DB 141 includes a medical treatment ID column, a patient information column, a treatment information column, and an image information column.
- the medical treatment ID column stores a medical treatment ID for identifying each medical treatment data.
- The patient information column, the treatment information column, and the image information column store, in association with the medical treatment ID, the patient information about the treated patient, the treatment information about the treatment performed on the patient, and the medical images obtained by examining the patient, respectively.
- Patient information is information about the patient who has undergone treatment and is a medical record of the patient.
- Patient information includes, for example, the patient's age, gender, diagnosis name, risk factors (presence or absence of lifestyle-related diseases, etc.), medical history, treatment history, concomitant medications, blood test results, the number of affected blood vessel branches, left ventricular ejection fraction, and history of acute events related to cardiovascular disease (myocardial infarction, etc.).
- the treatment information is information on endovascular treatment performed on the patient, for example, a record of PCI.
- Treatment information includes, for example, the date of PCI, the location of the treated lesion (hereinafter referred to as the "lesion site"), the properties of the lesion, the puncture site of the catheter 211, the total amount of contrast agent used, the imaging time of the fluoroscopic image, the presence or absence and content of additional procedures before stent placement, the presence or absence and content of additional procedures after stent placement, the type of stent placed (for example, product name), its diameter, length, and total number, the type and length of the balloon, its maximum dilation diameter, maximum dilation pressure, dilation time, and deflation time, the treatment of bifurcation lesions, and post-treatment progress information (e.g., presence or absence of complications).
- In the present embodiment, the sizes of the stent and the balloon are expressed by diameter, but it goes without saying that the radius may be used instead.
- The medical image is an image of the patient's blood vessel and, as described above, includes tomographic images of the inside of the blood vessel (ultrasonic tomographic images, optical coherence tomographic images, etc.), fluoroscopic images in which the inside of the patient is visualized by X-rays (angiographic images, computed tomography images, etc.), magnetic resonance imaging images, and the like.
- the image information also includes physiological functional evaluation results such as FFR (Fractional Flow Reserve).
- an ultrasonic tomographic image and an angiography image are used as medical images for inputting the learning model 50.
- FIG. 4 is an explanatory diagram showing an outline of the learning model 50.
- the learning model 50 is a machine learning model that inputs patient information and medical images and outputs treatment information for supporting treatment of the patient.
- the server 1 performs machine learning to learn predetermined training data and generates a learning model 50 in advance. Then, the server 1 inputs the medical image of the patient acquired from the diagnostic imaging system 2 and the patient information about the patient into the learning model 50, and generates the treatment information.
- The learning model 50 receives as input the patient information stored in the medical care DB 141, a tomographic image captured by the intravascular diagnostic imaging device 21, and a fluoroscopic image captured by the fluoroscopic image capturing device 22. The learning model 50 then outputs, as treatment information, the estimation results for each of the items shown in the upper right of FIG. 4.
- treatment information includes information on lesions in blood vessels.
- the information about the lesion includes the lesion site (for example, the type of blood vessel in which the lesion exists) and the properties of the lesion.
- the information about the lesion may include not only the position and properties but also the length of the lesion in the longitudinal direction of the blood vessel, the size (area) of the lesion in the cross section of the blood vessel, and the like.
- the treatment information also includes information about the treatment device to be inserted into the blood vessel.
- Therapeutic devices are, for example, stents, balloons, etc. for dilating blood vessels.
- the therapeutic device is not limited to a stent or the like, and may include, for example, a rotablator for excision of a lesion.
- The learning model 50 outputs stent information relating to the stent, such as the type of stent to be used (for example, product name), its shape (diameter and length), the total number of stents, and the presence or absence and content of additional procedures before and after stent placement.
- Similarly, the learning model 50 outputs balloon information relating to the balloon, such as the type of balloon to be used (for example, product name), its shape (length), and its expansion conditions (maximum expansion pressure, maximum expansion diameter, expansion time, and deflation time required for balloon contraction after expansion).
- Information such as the number and order of use of each therapeutic device may also be included in the information regarding the therapeutic devices. Thereby, when the user selects each therapeutic device and performs the treatment, the selection of the therapeutic device can be suitably assisted.
- the treatment information includes information regarding the imaging conditions of the fluoroscopic image.
- The information regarding the imaging conditions includes, for example, the dose of the contrast medium and the imaging time of the fluoroscopic image (the "fluoroscopy time" in FIG. 4).
- the server 1 causes the learning model 50 to learn the dose of the contrast medium, the imaging time, etc. when the patient is treated, and can output the recommended dose of the contrast medium, the imaging time, and the like.
- the treatment information includes progress information that estimates the progress of the patient after treatment.
- the progress information is, for example, an estimation result of estimating the risk of complications.
- the server 1 makes the learning model 50 learn the complications that have developed after the treatment for the patient who has been treated, and can output the complications that have a high probability of developing after the treatment as progress information. As will be described later, it is preferable to output a probability value for evaluating the degree of occurrence of each complication for each of a plurality of complications that may occur after treatment (see FIG. 6).
- a tomographic image when a plaque is formed in the blood vessel of the patient and a fluoroscopic image showing the position of the catheter 211 at the time of generating the tomographic image are input to the learning model 50 together with the patient information.
- the learning model 50 outputs the position and properties of the lesion corresponding to the plaque, the stent to be used and its expansion diameter, the expansion pressure, the progress information after treatment, and the like.
- The learning model 50 outputs the above-mentioned various information as treatment information, and also generates, from the input medical image, a second medical image processed so that a predetermined object in the image can be identified, and outputs it as one piece of the treatment information.
- The second medical image is an image in which the image area corresponding to an object such as a lesion as described above has been processed, and is an image that displays the image area of the object in a display mode different from that of the other image areas.
- For example, the learning model 50 generates a second medical image in which the image region corresponding to the object is represented in color within the otherwise black-and-white tomographic image.
- the server 1 outputs the second medical image together with the information such as the lesion site to the display device 23 and displays it.
- the target object in the learning model 50 is not limited to the lesion, and may be, for example, a therapeutic device inserted into the blood vessel of the patient (for example, a stent already placed in the blood vessel of the patient).
- the learning model 50 may be capable of generating a second medical image that makes it possible to identify a specific object existing in the blood vessel.
- The image area corresponding to the object in the medical image is hereinafter referred to as the "object area".
- FIG. 5 is an explanatory diagram showing the details of the learning model 50.
- As the learning model 50, a model composed of a first model 51 and a second model 52 is illustrated as an example. The details of the learning model 50 will be described with reference to FIG. 5.
- The first model 51 is a machine learning model that takes a medical image (a tomographic image and a fluoroscopic image) as input and outputs a detection result for objects in the image.
- Specifically, the first model 51 is a neural network model generated by deep learning, and is a CNN (Convolutional Neural Network) that extracts features of the input image through a large number of convolution layers.
- The first model 51 includes an intermediate layer (hidden layer) in which convolution layers that convolve the pixel information of the input image and pooling layers that map the convolved pixel information are alternately connected, and extracts a feature amount (feature map) of the input image.
- In the present embodiment, the first model 51 is described as a CNN, but it may be a model based on another learning algorithm such as a GAN (Generative Adversarial Network), an RNN (Recurrent Neural Network), an SVM (Support Vector Machine), or a decision tree.
- the server 1 generates a first model 51 that identifies on a pixel-by-pixel basis whether or not each pixel in the input medical image is a pixel corresponding to an object area.
- For example, the server 1 generates a semantic segmentation model (U-Net or the like) or Mask R-CNN (Region CNN) as the first model 51.
- The semantic segmentation model is a type of CNN and a type of encoder-decoder model that generates output data from input data.
- The semantic segmentation model includes, in addition to the convolution layers that compress the data of the input image, deconvolution layers that map (enlarge) the compressed features back to the original image size.
- Based on the feature amounts extracted by the convolution layers, the deconvolution layers generate a label image that identifies, on a pixel-by-pixel basis, which object exists at which position in the image, i.e., which object each pixel corresponds to.
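- As a rough illustration of the encoder-decoder structure described above, the following is a minimal semantic segmentation sketch in PyTorch; the framework, layer sizes, and class count are illustrative assumptions and not part of the disclosure:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder (U-Net-like) sketch: convolution layers compress the
    image, deconvolution (transposed convolution) layers map features back to the
    original size, and the final layer emits one score map per object class."""
    def __init__(self, n_classes: int = 3):  # e.g. background / lesion / stent (assumed)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                 # compress (pooling layer)
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),  # enlarge back
            nn.ConvTranspose2d(16, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),                     # per-pixel class scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))                 # label-image logits (B, C, H, W)

# A grayscale IVUS frame (e.g. 1x512x512) yields a per-pixel label map of the same size.
logits = TinySegNet()(torch.randn(1, 1, 512, 512))
pred_labels = logits.argmax(dim=1)                           # 0=background, 1=lesion, 2=stent (assumed)
```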
- Mask R-CNN is a modification of Faster R-CNN, which is mainly used for object detection, and is a model in which deconvolution layers are connected to Faster R-CNN.
- In Mask R-CNN, the image feature amounts extracted by the CNN and the coordinate range of the target object extracted by the RPN (Region Proposal Network) are input to the deconvolution layers, which finally identify the image area of the object in the input image on a pixel-by-pixel basis.
- The server 1 generates one of these models as the first model 51 and uses it for object detection. All of the above models are merely examples; any first model 51 capable of identifying the position and shape of an object in a medical image may be used. In the present embodiment, the first model 51 is described as a semantic segmentation model as an example.
- the object detected by the first model 51 is, for example, a lesion such as a plaque, a therapeutic device such as an indwelling stent, or the like.
- the server 1 inputs a medical image into the first model 51 and detects various objects.
- plaque is an example of a lesion, and for example, a narrowed part of a blood vessel, a calcified tissue, a dissection of a blood vessel wall (flap), a neointima, and other parts may be detected.
- the stent is an example of a therapeutic device, and other devices may be detected.
- the lesion site and the therapeutic device are examples of objects, and other objects may be detected.
- The server 1 performs training using training data in which the training medical images are labeled with data indicating the object areas illustrated in FIG. 4. Specifically, in the training data, labels (metadata) indicating the coordinate range corresponding to each object area and the type of the object are given to the training medical images.
- The server 1 inputs a training medical image into the first model 51 and acquires, as output, the detection result for the objects. Specifically, it acquires as output a label image in which each pixel in an object area is labeled with a value indicating the type of the object.
- The server 1 compares the detection result output from the first model 51 with the coordinate range of the correct object area and the type of the object indicated by the training data, and optimizes parameters such as the weights between neurons so that the two approximate each other. As a result, the server 1 generates the first model 51.
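- The optimization step described above can be sketched as a small training loop; the loss function, optimizer, and data loader below are illustrative assumptions rather than details from the disclosure:

```python
import torch
import torch.nn as nn

def train_first_model(model, loader, epochs: int = 10, lr: float = 1e-3):
    """Hypothetical training loop: `loader` yields (image, label_image) pairs where
    label_image holds, per pixel, an integer indicating the object type (0 = none)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()              # per-pixel classification loss
    for _ in range(epochs):
        for image, label_image in loader:
            logits = model(image)                  # (B, n_classes, H, W)
            loss = criterion(logits, label_image)  # compare with correct object areas
            optimizer.zero_grad()
            loss.backward()                        # adjust weights between neurons
            optimizer.step()
    return model
```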
- Since tomographic images and fluoroscopic images are both used, the server 1 may prepare two first models 51, one corresponding to each type of image, or may configure the first model 51 as a multimodal model having two networks that process tomographic images and fluoroscopic images, respectively. This makes it possible to detect objects from each type of image.
- The first model 51 accepts as input a plurality of frames of medical images (a moving image) that are continuous in time series, and detects objects from the medical image of each frame. Specifically, the first model 51 receives as input a plurality of frames of medical images (for example, tomographic images) that are continuous along the longitudinal direction of the blood vessel according to the scanning of the catheter 211, and detects objects from the medical image of each frame continuous along the time axis.
- The server 1 may input the frame images into the first model 51 one by one and detect objects from each frame image individually, but it is preferable to input a plurality of consecutive frame images at the same time and detect the object areas from the plurality of frame images simultaneously.
- Specifically, the server 1 configures the first model 51 as a 3D-CNN (for example, 3D U-Net) that handles three-dimensional input data, and handles the two-dimensional frame images as three-dimensional data with the image coordinates as two axes and the time at which each frame image was acquired (generated) as the third axis.
- The server 1 inputs a set of frame images for a predetermined unit time (for example, 16 frames) into the first model 51 and simultaneously obtains label images in which the object areas are labeled for each of the plurality of frame images.
- As a result, objects can be detected in consideration of the preceding and following frame images in the continuous time series, and the detection accuracy can be improved.
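- As a rough sketch of this 3D handling, consecutive 2D frames can be stacked into a single volume and passed to a 3D convolution; the tensor layout and sizes below follow PyTorch conventions and are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Sixteen consecutive 512x512 grayscale frames from the catheter pull-back (assumed sizes).
frames = [torch.randn(1, 512, 512) for _ in range(16)]

# Stack along a new time axis: (channels, time, height, width), then add a batch axis.
volume = torch.stack(frames, dim=1).unsqueeze(0)   # shape (1, 1, 16, 512, 512)

# A single 3D convolution already mixes information across neighbouring frames,
# which is what lets the model consider preceding and following images.
conv3d = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
features = conv3d(volume)                          # shape (1, 8, 16, 512, 512)
```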
- Alternatively, the server 1 may detect objects from a plurality of consecutive frame images by configuring the first model 51 as a model in which a CNN and an RNN are combined.
- In this case, an LSTM (Long Short-Term Memory) layer is inserted after the intermediate layers of the CNN, and objects are detected with reference to the feature amounts extracted from the preceding and following frame images.
- Also in this case, processing can be performed in consideration of the preceding and following frame images, and the detection accuracy can be improved.
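- The CNN-plus-RNN variant can be sketched as follows: a 2D CNN extracts a feature vector per frame and an LSTM layer relates the feature amounts of neighbouring frames. The per-frame classification head here is a simplification of the per-pixel output described above, and all layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CnnLstmDetector(nn.Module):
    """Sketch of the CNN + LSTM combination: per-frame CNN features are fed to an
    LSTM so that each frame's prediction can refer to neighbouring frames."""
    def __init__(self, feat_dim: int = 64, n_classes: int = 3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
            nn.Flatten(), nn.Linear(8 * 8 * 8, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, n_classes)   # simplified per-frame output

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = frames.shape                 # (batch, time, channel, H, W)
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        seq, _ = self.lstm(feats)                    # temporal context across frames
        return self.head(seq)                        # (batch, time, n_classes)

out = CnnLstmDetector()(torch.randn(2, 16, 1, 128, 128))   # 16-frame clips
```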
- the second model 52 is a machine learning model that inputs patient information and medical images and outputs treatment information for supporting endovascular treatment of the patient.
- the second model 52 is a CNN, includes an intermediate layer in which convolution layers and pooling layers are alternately connected, and extracts features of input data.
- the second model 52 is described as being a CNN in the present embodiment, it may be a model based on other learning algorithms such as GAN, RNN, SVM, and decision tree.
- the server 1 learns the patient information for training and the medical image by using the training data to which the correct treatment information is added.
- the patient information and medical image for training are patient information and medical image of a patient who has undergone treatment, and are medical records of the patient and medical images such as tomographic images and fluoroscopic images obtained at the time of treatment.
- the correct treatment information is a record of the treatment (PCI) given to the patient.
- the server 1 gives the patient information and the medical image stored in the medical treatment DB 141 and the treatment information as training data to the second model 52, and performs learning.
- the server 1 inputs data such as the patient's age, gender, and diagnosis name included in the patient information into the second model 52 as a categorical variable indicating the attributes of the medical image.
- the server 1 inputs the patient information as a categorical variable to the second model 52 together with the tomographic image and the fluoroscopic image, and acquires the treatment information of each item illustrated in the upper right of FIG. 4 as an output.
- The output items of the second model 52 include items expressed as classification results, such as "lesion site" and "lesion properties", as well as items expressed as continuous values, such as "total contrast amount" and "fluoroscopy time". In this way, the second model 52 handles a so-called classification problem and a regression problem at the same time. For example, all items may be treated as regression problems that output continuous values, and the items to be shown as classification results may then be converted from continuous values to discrete values. Alternatively, a separate output layer may be provided for each item, and the feature amount extracted in the intermediate layers may be input to each output layer to estimate each item separately. Alternatively, a separate second model 52 may be prepared for each item and each item estimated individually.
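- The per-item output-layer design mentioned above, with patient attributes supplied as encoded categorical variables alongside the image, might be sketched as follows; the item names, dimensions, encoding, and the use of a single image tensor standing in for the tomographic and fluoroscopic inputs are all illustrative assumptions:

```python
import torch
import torch.nn as nn

class SecondModelSketch(nn.Module):
    """Shared CNN backbone plus per-item heads: classification heads for items such as
    lesion site / lesion properties, regression heads for continuous items such as
    total contrast amount and fluoroscopy time."""
    def __init__(self, n_patient_feats: int = 10, n_sites: int = 4, n_props: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                                    # image feature vector
        )
        fused = 16 * 4 * 4 + n_patient_feats
        self.site_head = nn.Linear(fused, n_sites)           # classification
        self.prop_head = nn.Linear(fused, n_props)           # classification
        self.contrast_head = nn.Linear(fused, 1)             # regression (total contrast)
        self.fluoro_head = nn.Linear(fused, 1)                # regression (fluoroscopy time)

    def forward(self, image, patient_vec):
        # patient_vec: one-hot / numeric encoding of age, gender, diagnosis name, etc.
        z = torch.cat([self.backbone(image), patient_vec], dim=1)
        return {
            "lesion_site": self.site_head(z),
            "lesion_property": self.prop_head(z),
            "total_contrast": self.contrast_head(z),
            "fluoroscopy_time": self.fluoro_head(z),
        }

outputs = SecondModelSketch()(torch.randn(1, 3, 256, 256), torch.randn(1, 10))
```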
- The server 1 may use the medical image of the patient's blood vessel as-is as the input to the second model 52, but in the present embodiment, it generates a second medical image in which the object region detected by the first model 51 has been processed so as to be identifiable, and inputs this second medical image into the second model 52.
- the second medical image is an image in which the object area is displayed in different display modes depending on the type of the object. For example, the label image output from the first model 51 is superimposed on the original medical image.
- the server 1 processes the label image output from the first model 51 into a translucent mask and superimposes it on the original medical image.
- the server 1 changes the display mode of each object area according to the type of the object, such as changing the display color of the mask according to the type of the object.
- the server 1 generates a medical image that displays the object area in a display mode different from that of the other areas, and inputs it to the second model 52.
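- The translucent, per-type colored mask described above can be sketched with plain NumPy as follows; the colors, class indices, and alpha value are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

# Illustrative color per object type (RGB); 0 means "no object".
CLASS_COLORS = {1: (255, 0, 0),    # lesion (assumed class index)
                2: (0, 255, 0)}    # indwelling stent (assumed class index)

def overlay_mask(gray_image: np.ndarray, label_image: np.ndarray,
                 alpha: float = 0.4) -> np.ndarray:
    """Blend a translucent colored mask over a grayscale tomographic image.
    gray_image: (H, W) uint8; label_image: (H, W) int with per-pixel object type."""
    rgb = np.stack([gray_image] * 3, axis=-1).astype(np.float32)
    for cls, color in CLASS_COLORS.items():
        region = label_image == cls
        rgb[region] = (1 - alpha) * rgb[region] + alpha * np.array(color, np.float32)
    return rgb.astype(np.uint8)

second_image = overlay_mask(np.zeros((512, 512), np.uint8),
                            np.zeros((512, 512), np.int32))
```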
- the server 1 inputs the patient information for training and the medical image obtained by processing the object area into the second model 52, and acquires the treatment information as an output.
- the server 1 compares the output treatment information with the correct treatment information, and optimizes parameters such as weights between neurons so that the two can be approximated. As a result, the server 1 generates the second model 52.
- In the present embodiment, the first model 51 for object detection and the second model 52 for generating treatment information are combined, but the object may instead be detected on a rule basis. That is, the configuration in which the learning model 50 includes the first model 51 is not essential; an object may be detected on a rule basis as preprocessing of the input to the learning model 50, and the detection result may then be input to the learning model 50.
- When generating treatment information for a patient to be newly treated, the server 1 acquires the medical image of the patient from the diagnostic imaging system 2 and inputs it into the learning model 50 to generate treatment information including the second medical image.
- the processing may be performed in real time at the time of image acquisition, or the recorded medical images (moving images) may be collectively acquired and processed after the fact.
- the server 1 outputs the generated treatment information to the display device 23 and displays it.
- the treatment information is output to the diagnostic imaging system 2, but the treatment information may be output to a device other than the diagnostic imaging system 2 (for example, a personal computer) to display the treatment information.
- FIG. 6 is an explanatory diagram showing an example of a display screen of the display device 23.
- the display device 23 displays each treatment information output from the server 1 and presents it to the user. For example, the display device 23 displays a list 71, a progress information column 72, a tomographic image 73, and a perspective image 74.
- List 71 is a table showing the estimation results of each item output from the second model 52 in a list.
- List 71 includes, for example, information about lesions, information about therapeutic devices, information about conditions for taking fluoroscopic images, and other information.
- the display device 23 presents these treatment information to the user in the list 71 to support endovascular treatment.
- The progress information column 72 is a display column showing the estimation result of estimating the progress of the patient after treatment, and shows the estimation result regarding the onset of complications as described above.
- the server 1 uses the learning model 50 to calculate the probability value of onset for each of a plurality of complications, and displays the probability value in a list in the progress information column 72 in association with the complication name.
- the display device 23 displays the tomographic image 73 generated by the intravascular image diagnostic device 21 and the fluoroscopic image 74 taken by the fluoroscopic image capturing device 22.
- the display device 23 displays the second medical image obtained by processing the object area of the original medical image as the tomographic image 73 and the fluoroscopic image 74.
- FIG. 7 is a flowchart showing the procedure of the generation process of the learning model 50. Based on FIG. 7, the processing content when learning the training data and generating the learning model 50 will be described.
- the control unit 11 of the server 1 acquires training data to which correct treatment information is added to the patient information for training and the medical image (step S11).
- the patient information and medical image for training are the patient information and medical image of the patient who has been treated.
- the treatment information is information on the treatment performed on the patient, and includes the treatment record stored in the medical treatment DB 141 and the label data indicating the object area.
- Based on the training data, the control unit 11 generates a first model 51 that detects objects when a medical image is input (step S12). For example, the control unit 11 generates a semantic segmentation CNN as the first model 51, as described above. The control unit 11 inputs training medical images into the first model 51 and acquires, as output, the detection results for the object areas. The control unit 11 compares the detected object areas with the correct label data, optimizes parameters such as the weights between neurons so that the two approximate each other, and generates the first model 51.
- The control unit 11 generates a second model 52 that outputs treatment information for supporting the treatment of the patient when patient information and a medical image are input, based on the training data (step S13). Specifically, the control unit 11 generates a second model 52 that outputs treatment information when patient information and a second medical image, in which the object areas have been processed based on the detection result of the first model 51, are input. The control unit 11 inputs the training patient information and the second medical image, in which the object areas have been processed based on the correct label data, into the second model 52 and acquires the treatment information as output. The control unit 11 compares the output treatment information with the correct treatment information, optimizes parameters such as the weights between neurons so that the two approximate each other, and generates the second model 52. The control unit 11 then ends the series of processes.
- FIG. 8 is a flowchart showing a procedure for outputting treatment information. Based on FIG. 8, the processing content when outputting the treatment information of the patient to be treated using the learning model 50 will be described.
- the control unit 11 acquires patient information about the patient to be treated and a medical image of the blood vessel of the patient (step S31).
- the control unit 11 inputs the acquired medical image into the first model 51 and detects an object (step S32).
- The control unit 11 inputs the tomographic image and the fluoroscopic image captured by the intravascular diagnostic imaging apparatus 21 and the fluoroscopic image capturing apparatus 22 into the first model 51, and detects objects in the tomographic image and the fluoroscopic image.
- the control unit 11 inputs the second medical image obtained by processing the object area based on the detection result in step S32 and the patient information into the second model 52 to generate treatment information (step S33).
- The control unit 11 outputs the generated treatment information to the display device 23 and displays it (step S34). Specifically, the control unit 11 outputs, as the treatment information, information such as the lesion in the blood vessel, the therapeutic device to be used, and the imaging conditions of the fluoroscopic image, as well as the second medical image in which the object areas have been processed, and displays them on the display device 23.
- the control unit 11 ends a series of processes.
- As described above, according to the first embodiment, treatment based on medical images can be suitably supported by using the learning model 50 trained on the training data.
- the position, properties, etc. of the lesion in the blood vessel can be estimated and presented to the user.
- According to the first embodiment, the type, shape, number of uses, order of use, etc. of the therapeutic device to be used can be estimated and presented to the user.
- According to the first embodiment, post-treatment progress information such as the onset of complications can be estimated and presented to the user.
- the first model 51 that detects an object from a medical image and the second model 52 that generates treatment information based on the detection result by the first model 51 are combined.
- the correct position, shape, etc. of the object can be given to the second model 52, and the estimation accuracy of the treatment information can be improved.
- According to the first embodiment, the second medical image in which the object region has been processed is generated, so that confirmation of the lesion portion and the like can be suitably assisted.
- the estimation accuracy of the treatment information can be improved by inputting a plurality of frame images generated in time series into the learning model 50.
- By using not only the tomographic image of the inside of the blood vessel (luminal organ) but also the fluoroscopic image as the medical image, not only a local image but also an image of the entire blood vessel can be given to the learning model 50, and the estimation accuracy can be improved.
- (Modification 1) In the first embodiment, a mode in which the treatment information is output using the learning model 50 has been described. In this modification, a mode will be described in which a correction input for correcting the output treatment information is received from the user and re-learning is performed based on the corrected treatment information.
- FIG. 9 is a flowchart showing a procedure for outputting treatment information according to the first modification.
- the server 1 executes the following processing.
- the control unit 11 of the server 1 receives the correction input of the treatment information output to the display device 23 from the user (step S35).
- the control unit 11 receives a correction input for correcting the information of each item displayed in the list 71 on the display screen illustrated in FIG.
- For example, when the coordinate range of the object area, the type of the object, etc. in the second medical image displayed as the tomographic image 73 differ from the actual ones, the server 1 accepts input of the correct coordinate range, type, etc.
- When the correction input of the treatment information is received, the control unit 11 performs re-learning using, as training data, the patient information and the medical image that were input to the learning model 50 together with the corrected treatment information, and updates the learning model 50 (step S36). That is, the control unit 11 optimizes parameters such as the weights between neurons so that the treatment information output from the learning model 50 approximates the corrected treatment information, and regenerates the learning model 50. The control unit 11 then ends the series of processes.
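- A minimal sketch of this re-learning step, reusing the illustrative multi-head interface sketched earlier; the loss terms, item names, and learning rate are assumptions and not details from the disclosure:

```python
import torch
import torch.nn as nn

def relearn(model, image, patient_vec, corrected, steps: int = 50, lr: float = 1e-4):
    """Fine-tune the model so its output approximates the corrected treatment
    information. `corrected` holds a class index for the corrected lesion site and a
    float tensor for the corrected fluoroscopy time (illustrative items only)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    cls_loss, reg_loss = nn.CrossEntropyLoss(), nn.MSELoss()
    for _ in range(steps):
        out = model(image, patient_vec)
        loss = (cls_loss(out["lesion_site"], corrected["lesion_site"])
                + reg_loss(out["fluoroscopy_time"], corrected["fluoroscopy_time"]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```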
- the learning model 50 can be optimized through the operation of this system.
- (Modification 2) In the first embodiment, a mode has been described in which objects are detected by the first model 51 and a second medical image in which the object areas have been processed is input to the second model 52. In this modification, a mode will be described in which the type, dimensions, etc. of the object are specified from the detection result, and object information indicating the specified type, dimensions, etc. is used as an input to the second model 52.
- FIG. 10 is an explanatory diagram of the learning model 50 according to the second modification.
- the learning model 50 according to the present modification also includes the first model 51 and the second model 52 as in the first embodiment.
- the server 1 performs image analysis of the object area detected by the first model 51 to specify the type, dimensions, and the like of the object.
- the server 1 specifies the type, size, etc. of the stent already placed in the patient's blood vessel.
- the object to be specified is a stent, but it may be another object such as a lesion.
- the server 1 performs image analysis of the object region detected as the stent, identifies the name of the indwelling stent, and specifies the diameter, length, and the like of the stent.
- the server 1 inputs data such as the specified stent type and dimensions into the second model 52 as object information, and generates treatment information.
- the server 1 may input not only the object information (text data) but also the original medical image input to the first model 51 into the second model 52.
- the server 1 may input the second medical image obtained by processing the object area into the second model 52 as in the first embodiment.
- As described above, in Modification 2, image analysis is performed as preprocessing for the second model 52 to specify the object information, and the object information is input to the second model 52.
- Thereby, data such as the type and dimensions of the object can be given to the second model 52, and the estimation accuracy can be improved.
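- One simple way to derive such object information from the detection result (e.g., the apparent length and diameter of a detected stent) is to measure the extent of its mask and convert pixels to millimetres; the pixel spacing and class index below are illustrative assumptions, and a real system would take the calibration from the imaging device:

```python
import numpy as np

def stent_dimensions(label_image: np.ndarray, stent_class: int = 2,
                     mm_per_pixel: float = 0.05):
    """Estimate stent length/diameter (mm) from the detected stent mask.
    Assumes the stent runs roughly along the image's horizontal axis."""
    ys, xs = np.nonzero(label_image == stent_class)
    if xs.size == 0:
        return None                                   # no stent detected
    length_mm = (xs.max() - xs.min() + 1) * mm_per_pixel
    diameter_mm = (ys.max() - ys.min() + 1) * mm_per_pixel
    return {"length_mm": float(length_mm), "diameter_mm": float(diameter_mm)}

info = stent_dimensions(np.zeros((512, 512), np.int32))   # -> None when the mask is empty
```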
- FIG. 11 is an explanatory diagram of the learning model 50 according to the second embodiment. Similar to the first embodiment, the learning model 50 according to the present embodiment also includes the first model 51 and the second model 52, and generates a second medical image obtained by processing the object region and other treatment information. As shown in the upper right of FIG. 11, the treatment information includes stent information regarding the stent to be placed in the blood vessel of the patient, and the type, shape (diameter, length) and the like of the stent to be used are output.
- the learning model 50 further includes a third model 53.
- The third model 53 is a machine learning model that estimates the target position in the blood vessel at which the stent should be placed and the target dilation diameter when a pre-treatment medical image is input. Specifically, the third model 53 takes a medical image as input and detects the image region in which the stent should be placed and expanded. In addition to the stent information generated by the second model 52, the server 1 uses the third model 53 to detect the image region in which the stent should be placed and expanded, generates a second medical image showing the detected region, and displays it on the display device 23 as part of the stent information.
- Hereinafter, the region in which the stent should be placed and expanded is referred to as the "target area".
- the server 1 uses Mask R-CNN as the third model 53.
- the Mask R-CNN is a CNN that detects a target image area from the input image, and is a model that can identify the target image area on a pixel-by-pixel basis.
- the server 1 generates a third model 53 using the data with a label indicating the coordinate range of the stent placed in the blood vessel of the patient as training data for the fluoroscopic image of the patient who has been treated.
- the third model 53 is not limited to the Mask R-CNN, and may be another CNN such as U-net or another machine learning model such as GAN.
- The third model 53 detects the target area as a rectangular bounding box, as shown in FIG. 11.
- the target medical image may be a tomographic image as well as a fluoroscopic image.
- the target region may be any one as long as the stent is placed and the region to be expanded by the balloon is correctly indicated, and the shape of the target region is not limited to the rectangular shape.
- The server 1 may directly input the fluoroscopic image acquired from the fluoroscopic image capturing device 22 into the third model 53, but as shown in FIG. 11, it is preferable to input the second medical image in which object areas such as lesions have been processed by the first model 51. As a result, the third model 53 can determine the target area in consideration of the position, shape, and the like of the lesion to be treated.
- the type and dimensions of the object such as the lesion may be specified from the second medical image, and the specified object information may be given to the third model 53.
- The server 1 superimposes a bounding box indicating the target region detected by the third model 53 on the second medical image in which the object region corresponding to the lesion has been processed, thereby generating a second medical image that simultaneously shows the lesion and the range in which the stent should be placed and expanded.
- the server 1 may generate an image surrounding only the target area as a second medical image.
- the server 1 outputs the generated second medical image as a fluoroscopic image 74 to the display device 23.
- In the present embodiment, the first model 51 for detecting objects such as lesions and the third model 53 for detecting the target area in which the stent should be placed and expanded are prepared separately, but they may be the same model, and the object detection and the target area detection may be performed at the same time.
- FIGS. 12 and 13 are explanatory views showing display examples of the second medical image according to the second embodiment.
- The display device 23 displays a second medical image showing the target area in which the stent should be placed and expanded, and also displays the stent currently inserted in the blood vessel in a distinguishable manner in the second medical image, thereby assisting in stent placement and expansion. FIGS. 12A to 13B show, in chronological order, how the stent is inserted into the blood vessel, reaches the target position, and is expanded.
- FIGS. 12A and 12B show how the stent is inserted to the target region.
- The display device 23 identifiably displays the object area 741 and the target area 742 in the fluoroscopic image 74 by a method such as color display.
- the object region 741 corresponds to the lesion and the target region 742 corresponds to the region within the blood vessel where the stent 743 should be placed.
- the display device 23 displays the stent 743 inserted into the blood vessel and the rectangular stent region 744 representing the current position of the stent 743.
- The server 1 detects at least the current position of the stent and its current expansion diameter (hereinafter referred to as the "current diameter") from the fluoroscopic image acquired from the fluoroscopic image capturing apparatus 22, and displays the stent in an identifiable manner by a method such as color display.
- the stent 743 may be detected by using the first model 51 or by image recognition by pattern matching.
- the server 1 calculates the difference value between the current position of the detected stent 743 and the target position of the stent 743 indicated by the target area 742, and displays it in the upper left of the fluoroscopic image 74.
- The current position of the stent 743 may be, for example, the midpoint of the elongated stent 743 or the tip of the stent 743; any point of the stent 743 can be detected as the current position.
- Similarly, the target position of the stent 743 may be, for example, the center of gravity of the rectangular target area 742 or the midpoint of the short side of the target area 742 located on the far side from the stent 743; any point in the target area 742 can be set as the target position.
- the server 1 sequentially detects the stent 743 from the fluoroscopic image, calculates the difference value between the current position and the target position, and displays it on the display device 23.
- The server 1 determines whether or not the stent 743 has reached the target position based on the difference value between the current position and the target position. When it is determined that the target position has been reached, the server 1 calculates the expansion rate, obtained by dividing the current diameter of the stent 743 by the target expansion diameter of the stent 743 indicated by the target region 742, and displays it in the upper left of the fluoroscopic image 74.
- the target expansion diameter is the width of the target region 742 in the direction orthogonal to the longitudinal direction of the blood vessel, and is the length of the short side of the rectangular target region 742.
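- The two values displayed in the upper left of the fluoroscopic image 74 reduce to simple arithmetic; the sketch below follows the example reference points given above (stent tip, far short side of the target box), and the pixel-to-millimetre factor is an assumption:

```python
import numpy as np

def distance_to_target(stent_tip_xy, target_box, mm_per_pixel: float = 0.2) -> float:
    """Difference value between the stent's current position (its tip) and the target
    position (midpoint of the target box's far short side), in millimetres."""
    x0, y0, x1, y1 = target_box                      # rectangular target area 742
    target_xy = np.array([x1, (y0 + y1) / 2.0])      # far short-side midpoint (assumed)
    return float(np.linalg.norm(np.asarray(stent_tip_xy) - target_xy) * mm_per_pixel)

def expansion_rate(current_diameter_px: float, target_box) -> float:
    """Expansion rate = current diameter / target expansion diameter, where the target
    expansion diameter is the short-side length of the target area."""
    x0, y0, x1, y1 = target_box
    target_diameter_px = abs(y1 - y0)                # width orthogonal to the vessel axis
    return current_diameter_px / target_diameter_px

print(distance_to_target((100, 60), (180, 50, 220, 70)))   # 24.0 (mm)
print(expansion_rate(14, (180, 50, 220, 70)))               # 0.7
```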
- FIGS. 13A and 13B show the state from when the stent 743 reaches the target position until the expansion is completed.
- the display device 23 switches the display on the upper left of the fluoroscopic image 74 to the expansion rate.
- the server 1 sequentially detects the current diameter of the stent 743, calculates the expansion rate, and displays it on the display device 23.
- FIG. 14 is a flowchart showing a procedure for generating the learning model 50 according to the second embodiment.
- the control unit 11 of the server 1 acquires training data for generating the learning model 50 (step S201).
- the training data according to the present embodiment includes a medical image of a patient who has undergone treatment, label data indicating an object area such as a lesion, and label data indicating the image area (target area) of the stent placed in the blood vessel.
- the control unit 11 shifts the process to step S12.
- After executing the process of step S13, the control unit 11 generates a third model 53 that detects the target region in which the stent should be placed and expanded when a medical image is input (step S202). Specifically, as described above, the control unit 11 generates a Mask R-CNN as the third model 53. For example, the control unit 11 inputs the second medical image, obtained by processing the object area of the medical image for training according to the label data, into the third model 53, and acquires the detection result of the target area as an output. The control unit 11 compares the acquired target region with the correct label data, optimizes parameters such as the weights between neurons so that the two approximate each other, and thereby generates the third model 53. The control unit 11 then ends the series of processes.
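A hedged sketch of how such a Mask R-CNN could be trained with torchvision; the dummy sample, class count, and hyperparameters are illustrative assumptions, the patent only specifies that the model is optimized so the detected target region approximates the labeled one:

```python
import torch
import torchvision

# Two classes assumed: background (0) and "target region" (1) where the stent should be placed
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# One dummy training sample standing in for a second medical image and its target-region label
image = torch.rand(3, 256, 256)
target = {
    "boxes": torch.tensor([[60.0, 100.0, 180.0, 140.0]]),   # target region as a bounding box
    "labels": torch.tensor([1], dtype=torch.int64),
    "masks": torch.zeros(1, 256, 256, dtype=torch.uint8),
}
target["masks"][0, 100:140, 60:180] = 1

model.train()
loss_dict = model([image], [target])   # torchvision returns a dict of losses in training mode
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```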
- FIG. 15 is a flowchart showing a procedure for outputting treatment information according to the second embodiment.
- the server 1 executes the following processing.
- the control unit 11 inputs the pre-treatment medical image of the patient into the third model 53 to detect the target region in which the stent should be placed (step S221). Specifically, as described above, the control unit 11 inputs the second medical image (fluoroscopic image) generated by using the first model 51 into the third model 53, and detects the target region.
- the control unit 11 detects the stent inserted into the patient's blood vessel from the medical image (step S222). The control unit 11 generates a second medical image showing the detected stent and the target region, and outputs the second medical image to the display device 23 together with the treatment information generated in step S33 (step S223).
- the control unit 11 calculates the difference value between the current position of the stent and the target position of the stent indicated by the target area, and outputs the difference value to the display device 23 (step S224).
- the control unit 11 determines whether or not the stent has reached the target position based on the difference value between the current position and the target position (step S225). If it is determined that the target position has not been reached (S225: NO), the control unit 11 returns the process to step S224. When it is determined that the target position has been reached (S225: YES), the control unit 11 calculates the expansion rate from the current diameter of the stent and the target expansion diameter, and outputs the expansion rate to the display device 23 (step S226).
- the control unit 11 determines whether or not the expansion to the target expansion diameter is completed (step S227). If it is determined that the expansion is not completed (S227: NO), the control unit 11 returns the process to step S226. When it is determined that the expansion is completed (S227: YES), the control unit 11 ends a series of processes.
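A minimal sketch of the monitoring loop in steps S224 to S227, assuming hypothetical callbacks that return the stent's current position and diameter from the latest fluoroscopic frame and a display function; the tolerance value is an illustrative assumption:

```python
import math

POSITION_TOLERANCE_PX = 3.0   # assumed tolerance for "reached the target position" (S225)

def guide_stent(get_position, get_diameter, target_position, target_diameter, show):
    # Steps S224-S225: keep reporting the position difference until the stent reaches the target
    while True:
        x, y = get_position()
        diff = math.hypot(target_position[0] - x, target_position[1] - y)
        show(f"distance to target: {diff:.1f} px")
        if diff <= POSITION_TOLERANCE_PX:
            break
    # Steps S226-S227: keep reporting the expansion rate until expansion to the target diameter completes
    while True:
        rate = get_diameter() / target_diameter
        show(f"expansion rate: {rate:.0%}")
        if rate >= 1.0:
            break
```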
- In the above, the type and shape of the stent to be used, as well as the target position and expansion conditions of the stent, are automatically predicted by the learning model 50, but the present embodiment is not limited to this.
- the server 1 may receive from the user a designation input specifying the type and shape of the stent to be used for endovascular treatment, input the designated stent information (first stent information) into the learning model 50, and predict information such as the target position and expansion conditions (second stent information).
- the server 1 inputs the first stent information specified by the user into the third model 53 together with the medical image, and detects the target area when the stent specified by the user is used. This makes it possible to support stent placement and expansion according to the user's request.
- (Modification 3) In Modification 1, a mode was described in which the treatment information is output, a correction input is received, and re-learning is performed. Similarly, in the second embodiment, after the stent information is output, a correction input may be received and re-learning may be performed.
- FIG. 16 is a flowchart showing a procedure for outputting treatment information according to the third modification.
- the server 1 executes the following processing.
- the control unit 11 of the server 1 receives the correction input of the output stent information (step S241). For example, the control unit 11 receives an input for correcting the type, shape, etc. of the stent displayed as the stent information. Further, the control unit 11 receives an input for correcting the coordinate range of the target region for the second medical image.
- the control unit 11 shifts the process to step S224.
- When it is determined that the expansion of the stent is completed (S227: YES), the control unit 11 performs re-learning using, as training data, the original medical image input to the learning model 50 and the corrected stent information, and updates the learning model 50 (step S242). That is, the control unit 11 optimizes parameters such as the weights between neurons so that the stent information output from the learning model 50 approximates the corrected stent information, and regenerates the learning model 50. The control unit 11 then ends the series of processes.
- the learning model 50 can be optimized through the operation of this system.
- Reference signs list: 1 Server (information processing device), 11 Control unit, 12 Main storage unit, 13 Communication unit, 14 Auxiliary storage unit, P Program, 141 Medical care DB, 50 Learning model, 51 First model, 52 Second model, 53 Third model, 2 Diagnostic imaging system, 21 Intravascular diagnostic imaging device, 22 Fluoroscopic image capturing device, 23 Display device
Abstract
This program causes a computer to execute a process of acquiring patient information about a patient who is to undergo treatment and a medical image obtained by imaging a hollow organ of the patient, and inputting the acquired patient information and the acquired medical image to a model learned so as to output treatment information for providing assistance in the treatment to be performed on the patient when the patient information and the medical image are inputted thereto, to output the treatment information.
Description
The present invention relates to a program, an information processing method, an information processing device, and a model generation method.
Various methods have been proposed for supporting treatment based on medical images that visualize the inside of the human body, such as ultrasound images, optical coherence tomography (OCT) images, and X-ray images. For example, Patent Document 1 discloses a therapeutic device selection support system that receives from an operator a designation of the range of a diseased site in an intravascular image, sets the shape of a therapeutic device suitable for the diseased site in the designated range, and retrieves and displays information on a therapeutic device having the set shape from a database.
However, reading information such as a diseased site from a medical image and reflecting it in treatment is not easy, and requires abundant knowledge and clinical experience. The invention according to Patent Document 1 merely searches a database for a therapeutic device suitable for the diseased site in the range designated by the operator, and does not adequately support treatment based on medical images.
In one aspect, an object is to provide a program or the like that can suitably support treatment based on medical images.
A program according to one aspect causes a computer to execute processing of acquiring patient information about a patient who is to undergo treatment and a medical image obtained by imaging a luminal organ of the patient, and inputting the acquired patient information and medical image into a model trained to output, when patient information and a medical image are input, treatment information for supporting the treatment to be performed on the patient, thereby outputting the treatment information.
In one aspect, treatment based on medical images can be suitably supported.
Hereinafter, the present invention will be described in detail with reference to the drawings showing embodiments thereof.
(Embodiment 1)
FIG. 1 is an explanatory diagram showing a configuration example of a treatment support system. In the present embodiment, a treatment support system is described that presents, to a user (medical worker), treatment information for supporting the treatment of a patient, based on patient information about a patient undergoing endovascular treatment and a medical image obtained by imaging the patient's blood vessel. The treatment support system includes an information processing device 1 and a diagnostic imaging system 2. The information processing device 1 and the diagnostic imaging system 2 are communicably connected to a network N such as a LAN (Local Area Network) or the Internet.
In the present embodiment, endovascular treatment is described as an example, but the target luminal organ is not limited to a blood vessel and may be another luminal organ such as a bile duct, pancreatic duct, bronchus, or intestine.
The diagnostic imaging system 2 includes an intravascular diagnostic imaging device 21, a fluoroscopic image capturing device 22, and a display device 23. The intravascular diagnostic imaging device 21 is a device for imaging an intravascular tomographic image of a patient, for example, an IVUS (Intravascular Ultrasound) device that performs an ultrasonic examination using a catheter 211. The catheter 211 is a medical instrument inserted into a blood vessel of the patient, and includes a piezoelectric element that transmits ultrasonic waves and receives reflected waves from inside the blood vessel. The intravascular diagnostic imaging device 21 generates an ultrasonic tomographic image based on the signal of the reflected waves received by the catheter 211 and displays it on the display device 23.
In the present embodiment, the intravascular diagnostic imaging device 21 generates an ultrasonic tomographic image, but it may instead generate, for example, an optical coherence tomographic image (OCT image).
The fluoroscopic image capturing device 22 is a device unit for capturing a fluoroscopic image that sees through the patient's body, for example, an angiography device that performs angiographic examination. The fluoroscopic image capturing device 22 includes an X-ray source 221 and an X-ray sensor 222, and images an X-ray fluoroscopic image of the patient by the X-ray sensor 222 receiving the X-rays emitted from the X-ray source 221. For example, a radiopaque marker is attached to the tip of the catheter 211, so that the position of the catheter 211 is visualized in the fluoroscopic image. The fluoroscopic image capturing device 22 displays the fluoroscopic image visualizing the position of the catheter 211 on the display device 23 and presents it together with the intravascular tomographic image.
Although an ultrasonic tomographic image, an optical coherence tomographic image, and an angiography image were given above as examples of the medical image, the medical image may also be a computed tomography (CT) image, a magnetic resonance imaging (MRI) image, or the like.
The information processing device 1 is an information processing device capable of various kinds of information processing and of transmitting and receiving information, and is, for example, a server computer or a personal computer. In the present embodiment, the information processing device 1 is assumed to be a server computer, and hereinafter it is referred to as the server 1 for brevity. The server 1 may be a local server installed in the same facility (hospital or the like) as the diagnostic imaging system 2, or may be a cloud server communicably connected via the Internet or the like. The server 1 functions as a treatment support device that outputs treatment information for supporting endovascular treatment of a patient based on patient information about the patient to be treated and medical images (tomographic images and fluoroscopic images) of the patient. Specifically, as will be described later, the server 1 prepares in advance a learning model 50 (see FIG. 4) that has performed machine learning on predetermined training data and that outputs treatment information when patient information and medical images are input. The server 1 inputs the patient information and medical images of the patient to be treated into the learning model 50 and acquires the treatment information from the learning model 50. The server 1 outputs the treatment information to the diagnostic imaging system 2 and causes the display device 23 to display it.
The treatment information is information for supporting the treatment of the patient, that is, information for assisting the user (medical worker) who treats the patient. In the present embodiment, taking endovascular treatment as an example, information for supporting PCI (Percutaneous Coronary Intervention) using the catheter 211 is output as the treatment information. For example, the treatment information includes the position and properties of a lesion such as plaque, the therapeutic device to be used such as a stent or balloon, the dose of contrast medium and the imaging time recommended for fluoroscopic imaging, and post-treatment progress information (for example, the risk of complications). The server 1 estimates these items of information using the learning model 50 and displays them on the display device 23. Further, the server 1 detects objects such as lesions and therapeutic devices in the medical image, generates a second medical image processed so that the detected objects can be identified, and displays it on the display device 23.
In the present embodiment, the server 1 performs the treatment information generation processing using the learning model 50, but the learning model 50 constructed by the server 1 may instead be installed in the diagnostic imaging system 2 so that the generation processing is executed locally.
FIG. 2 is a block diagram showing a configuration example of the server 1. The server 1 includes a control unit 11, a main storage unit 12, a communication unit 13, and an auxiliary storage unit 14.
The control unit 11 includes one or more arithmetic processing devices such as CPUs (Central Processing Units), GPUs (Graphics Processing Units), or AI chips (semiconductors for AI), and performs various kinds of information processing by reading and executing the program P stored in the auxiliary storage unit 14. The main storage unit 12 is a temporary storage area such as an SRAM (Static Random Access Memory), a DRAM (Dynamic Random Access Memory), or a flash memory, and temporarily stores data necessary for the control unit 11 to execute arithmetic processing. The communication unit 13 is a communication module for performing processing related to communication, and transmits and receives information to and from the outside.
The auxiliary storage unit 14 is a non-volatile storage area such as a large-capacity memory or a hard disk, and stores the program P and other data necessary for the control unit 11 to execute processing. The auxiliary storage unit 14 also stores the learning model 50 and the medical care DB 141. The learning model 50 is a machine learning model trained on training data as described above, and is a model that outputs treatment information for supporting treatment of a patient when patient information and a medical image are input. The learning model 50 is expected to be used as a program module constituting artificial intelligence software. The medical care DB 141 is a database that stores medical care data of patients.
The auxiliary storage unit 14 may be an external storage device connected to the server 1. Further, the server 1 may be a multi-computer composed of a plurality of computers, or may be a virtual machine virtually constructed by software.
Further, in the present embodiment, the server 1 is not limited to the above configuration, and may include, for example, an input unit that accepts operation input and a display unit that displays images. Further, the server 1 may include a reading unit that reads a portable storage medium 1a such as a CD (Compact Disk), a DVD (Digital Versatile Disc), a USB (Universal Serial Bus) memory, or an external hard disk, and may read and execute the program P from the portable storage medium 1a. Alternatively, the server 1 may read the program P from a semiconductor memory 1b.
FIG. 3 is an explanatory diagram showing an example of the record layout of the medical care DB 141. The medical care DB 141 includes a medical care ID column, a patient information column, a treatment information column, and an image information column. The medical care ID column stores a medical care ID for identifying each piece of medical care data. The patient information column, the treatment information column, and the image information column store, in association with the medical care ID, patient information about the patient who received medical care, treatment information about the treatment performed on the patient, and medical images obtained by examining the patient, respectively.
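As a minimal sketch, one record of such a layout could be represented as follows (the field names and types are illustrative assumptions; the patent does not specify a storage format):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MedicalCareRecord:
    """One row of the medical care DB 141 (field names are hypothetical)."""
    medical_care_id: int                 # medical care ID column
    patient_info: dict                   # e.g. {"age": 67, "sex": "M", "diagnosis": "...", ...}
    treatment_info: dict                 # PCI record: lesion site, stent, balloon, progress, ...
    image_paths: List[str] = field(default_factory=list)   # tomographic / fluoroscopic image files
```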
The patient information is information about the patient who received treatment, that is, the patient's medical record. The patient information includes, for example, the patient's age, sex, diagnosis name, risk factors (presence or absence of lifestyle-related diseases, etc.), and past medical history, as well as the treatment history, concomitant medications, blood test results, the number of diseased vessel branches, the left ventricular ejection fraction, and the history of cardiovascular emergencies (myocardial infarction, etc.).
The treatment information is information about the endovascular treatment performed on the patient, for example, a record of PCI. The treatment information includes, for example, the date of PCI, the position of the treated lesion (hereinafter referred to as the "lesion site"), the properties of the lesion, the puncture site of the catheter 211, the total amount of contrast medium used, the fluoroscopic imaging time, the presence or absence and content of additional procedures before stent placement, the presence or absence and content of additional procedures after stent placement, the type (for example, product name), diameter, length, and total number of placed stents, the type and length of the balloon, the maximum dilation diameter, the maximum dilation pressure, the dilation time, the deflation time, the treatment method for bifurcation lesions, and post-treatment progress information (for example, the presence or absence of complications). In the present embodiment, the sizes of the stent and the balloon are expressed as diameters, but they may of course be expressed as radii.
The medical images are images obtained by imaging the patient's blood vessels, and as described above, include tomographic images of the inside of the blood vessel (ultrasonic tomographic images, optical coherence tomographic images, etc.), fluoroscopic images in which the inside of the patient's body is visualized by X-rays (angiography images, computed tomography images, etc.), magnetic resonance images, and the like. The image information also includes physiological functional evaluation results such as FFR (Fractional Flow Reserve). In the present embodiment, as an example, ultrasonic tomographic images and angiography images are used as the medical images input to the learning model 50.
FIG. 4 is an explanatory diagram showing an outline of the learning model 50. The learning model 50 is a machine learning model that receives patient information and medical images as input and outputs treatment information for supporting the treatment of the patient. The server 1 performs machine learning on predetermined training data to generate the learning model 50 in advance. The server 1 then inputs the medical images of the patient acquired from the diagnostic imaging system 2 and the patient information about the patient into the learning model 50 to generate the treatment information.
Specifically, as shown in FIG. 4, the learning model 50 receives as input the patient information stored in the medical care DB 141, the tomographic image imaged by the intravascular diagnostic imaging device 21, and the fluoroscopic image imaged by the fluoroscopic image capturing device 22. The learning model 50 then outputs, as the treatment information, the estimation results for each of the items shown in the upper right of FIG. 4.
For example, the treatment information includes information about a lesion in the blood vessel. Specifically, the information about the lesion includes the lesion site (for example, the type of blood vessel in which the lesion exists) and the properties of the lesion. The information about the lesion may include not only the position and properties but also the length of the lesion in the longitudinal direction of the blood vessel, the size (area) of the lesion in the blood vessel cross section, and the like.
The treatment information also includes information about a therapeutic device to be inserted into the blood vessel. The therapeutic device is, for example, a stent or a balloon for dilating the blood vessel. The therapeutic device is not limited to a stent or the like, and may include, for example, a rotablator for excising the lesion. The learning model 50 outputs stent information related to the stent, such as the type of stent to be used (for example, product name), its shape (diameter and length), the total number of stents, and the presence or absence and content of additional procedures before and after stent placement. The learning model 50 also outputs balloon information related to the balloon, such as the type of balloon to be used (for example, product name), its shape (length), and its dilation conditions (maximum dilation pressure, maximum dilation diameter, dilation time, and deflation time required for balloon contraction after dilation).
The information about the therapeutic devices may include the order in which the therapeutic devices are to be used. This makes it possible to suitably assist the user in selecting each therapeutic device when performing treatment.
The treatment information also includes information about the imaging conditions of the fluoroscopic image. The information about the imaging conditions includes, for example, the dose of contrast medium and the fluoroscopic imaging time ("fluoroscopy time" in FIG. 4). The server 1 causes the learning model 50 to learn the contrast medium dose, imaging time, and the like used when treating patients, so that a recommended contrast medium dose, imaging time, and the like can be output.
The treatment information also includes progress information estimating the patient's post-treatment course. The progress information is, for example, an estimation result of the risk of complications. The server 1 causes the learning model 50 to learn, for patients who have already undergone treatment, the complications that developed after the treatment, so that complications with a high probability of developing after treatment can be output as progress information. As will be described later, it is preferable to output, for each of a plurality of complications that may develop after treatment, a probability value evaluating the likelihood of that complication (see FIG. 6).
FIG. 4 illustrates, as an example, a case where a tomographic image in which plaque has formed in the patient's blood vessel and a fluoroscopic image showing the position of the catheter 211 at the time the tomographic image was generated are input to the learning model 50 together with the patient information. In this case, the learning model 50 outputs the position and properties of the lesion corresponding to the plaque, the stent to be used, its dilation diameter and dilation pressure, post-treatment progress information, and the like.
In addition to outputting the above-described various items of information as treatment information, the learning model 50 generates, from the input medical image, a second medical image in which a predetermined object in the image is processed so as to be identifiable, and outputs it as one item of the treatment information. The second medical image is an image in which the image area corresponding to an object such as a lesion is processed as described above, and is an image that displays the image area of the object in a display mode different from that of the other image areas. For example, for a tomographic image expressed in black and white, the learning model 50 generates a second medical image in which the image area corresponding to the object is expressed in a color other than black and white. The server 1 outputs the second medical image to the display device 23 together with information such as the lesion site, and causes it to be displayed.
The object targeted by the learning model 50 is not limited to a lesion, and may be, for example, a therapeutic device inserted into the patient's blood vessel (for example, a stent already placed in the patient's blood vessel). The learning model 50 only needs to be able to generate a second medical image in which a specific object present in the blood vessel can be identified.
For brevity, in the following description the image area corresponding to an object in a medical image is referred to as an "object area".
FIG. 5 is an explanatory diagram showing the details of the learning model 50. In FIG. 5, a model composed of a first model 51 and a second model 52 is illustrated as an example of the learning model 50. The details of the learning model 50 will be described with reference to FIG. 5.
The first model 51 is a machine learning model that receives medical images (tomographic images and fluoroscopic images) as input and outputs a detection result of detecting objects in the images. For example, the first model 51 is a neural network model generated by deep learning, specifically a CNN (Convolutional Neural Network) that extracts feature values of the input image with a large number of convolutional layers. The first model 51 includes an intermediate layer (hidden layer) in which convolutional layers that convolve the pixel information of the input image and pooling layers that map the convolved pixel information are alternately connected, and extracts the feature values (feature map) of the input image.
In the present embodiment, the first model 51 is described as being a CNN, but it may be a model based on another learning algorithm such as a GAN (Generative Adversarial Network), an RNN (Recurrent Neural Network), an SVM (Support Vector Machine), or a decision tree.
In the present embodiment, the server 1 generates the first model 51 so as to identify, on a pixel-by-pixel basis, whether or not each pixel in the input medical image is a pixel corresponding to an object area. For example, the server 1 generates, as the first model 51, a semantic segmentation model (U-Net or the like) or a Mask R-CNN (Region CNN).
A semantic segmentation model is a type of CNN and a type of encoder-decoder model that generates output data from input data. In addition to convolutional layers that compress the data of the input image, a semantic segmentation model includes deconvolution layers that map (enlarge) the feature values obtained by compression back to the original image size. In the deconvolution layers, which object exists at which position in the image is identified on a pixel-by-pixel basis based on the feature values extracted by the convolutional layers, and a label image is generated in which the object to which each pixel corresponds is binarized.
Mask R-CNN is a variant of Faster R-CNN, which is mainly used for object detection, and is a model in which deconvolution layers are connected to Faster R-CNN. In Mask R-CNN, the feature values of the image extracted by the CNN and the coordinate range of the target object extracted by an RPN (Region Proposal Network) are input to the deconvolution layers, and finally a mask image masking the image area of the object in the input image is generated.
The server 1 generates one of these models as the first model 51 and uses it for object detection. The above models are merely examples, and the first model 51 only needs to be able to identify the position and shape of objects in medical images. In the present embodiment, as an example, the first model 51 is described as being a semantic segmentation model.
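A minimal sketch of such a pixel-wise segmentation network in PyTorch: a deliberately small encoder-decoder standing in for U-Net, with layer sizes and a three-class output that are illustrative assumptions rather than the patented architecture:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Encoder-decoder that scores each pixel as background, lesion, or device (3 classes assumed)."""
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                      # compress pixel information
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),   # map features back to image size
            nn.Conv2d(16, num_classes, 1),                        # per-pixel class scores (label image)
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# A 512x512 grayscale tomographic frame -> per-pixel scores for each object class
scores = TinySegNet()(torch.rand(1, 1, 512, 512))
print(scores.shape)   # torch.Size([1, 3, 512, 512])
```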
On the right side of FIG. 4, the object areas in the intravascular tomographic image and the fluoroscopic image are conceptually shown by hatching. The objects detected by the first model 51 are, for example, lesions such as plaque, or therapeutic devices such as an already placed stent. The server 1 inputs medical images into the first model 51 and detects the various objects.
Plaque is one example of a lesion; for example, a stenosed portion of a blood vessel, calcified tissue, a dissection (flap) of the blood vessel wall, neointima, and other portions may also be detected. The stent is one example of a therapeutic device, and other devices may be detected. Further, the lesion and the therapeutic device are examples of objects, and other objects may be detected.
Returning to FIG. 5, the description continues. The server 1 performs learning using training data in which medical images for training are labeled with data indicating the object areas illustrated in FIG. 4. Specifically, in the training data, a label (metadata) indicating the coordinate range corresponding to the object area and the type of the object is attached to each medical image for training.
The server 1 inputs a medical image for training into the first model 51 and acquires, as output, a detection result of detecting the objects. Specifically, it acquires as output a label image in which each pixel of an object area is labeled with a value indicating the type of the object.
The server 1 compares the detection result output from the first model 51 with the coordinate range of the correct object area and the type of the object indicated by the training data, and optimizes parameters such as the weights between neurons so that the two approximate each other. The server 1 thereby generates the first model 51.
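A hedged sketch of that optimization step, assuming a pixel-wise segmentation model such as the one sketched above and a data loader yielding pairs of training images and ground-truth label images (the function name, loss, and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn

def train_first_model(model, data_loader, epochs=10, lr=1e-3):
    """Optimize the weights so the predicted label image approximates the ground-truth label image."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()                # per-pixel classification loss
    model.train()
    for _ in range(epochs):
        for images, label_images in data_loader:     # label_images: (N, H, W) integer class per pixel
            optimizer.zero_grad()
            loss = criterion(model(images), label_images)
            loss.backward()
            optimizer.step()
    return model
```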
In the present embodiment, two kinds of images, tomographic images and fluoroscopic images, are processed. For example, the server 1 may prepare two first models 51, one for each kind of image, or the first model 51 may be configured as a multimodal model having two networks for processing tomographic images and fluoroscopic images, respectively. In either case, objects can be detected from each image.
In the present embodiment, the first model 51 accepts as input a plurality of frames of medical images (a moving image) that are consecutive in time series, and detects objects from the medical image of each frame. Specifically, the first model 51 accepts as input a plurality of frames of medical images (for example, tomographic images) that are consecutive along the longitudinal direction of the blood vessel as the catheter 211 is scanned. The first model 51 detects objects from the medical image of each frame consecutive along the time axis.
In the following description, for convenience, the medical image of each consecutive frame is simply referred to as a "frame image".
The server 1 may input the frame images into the first model 51 one by one and detect objects from each frame image individually, but it is preferable to input a plurality of consecutive frame images at the same time so that the object areas can be detected from the plurality of frame images simultaneously. For example, the server 1 configures the first model 51 as a 3D-CNN (for example, 3D U-Net) that handles three-dimensional input data. The server 1 then treats the two-dimensional frame images as three-dimensional data in which the image coordinates form two axes and the time at which each frame image was acquired (the time the frame image was generated) forms one axis. The server 1 inputs a set of frame images corresponding to a predetermined unit time (for example, 16 frames) into the first model 51 and simultaneously outputs label images in which the object areas are labeled for each of the frame images. As a result, objects can be detected while also taking into account the preceding and following frame images that are consecutive in time series, and the detection accuracy can be improved.
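A minimal sketch of treating consecutive frames as one three-dimensional input; the frame count, image size, and single 3D convolution are illustrative assumptions, and a real 3D U-Net would stack many such layers:

```python
import torch
import torch.nn as nn

frames = torch.rand(16, 1, 512, 512)              # 16 consecutive grayscale frame images
volume = frames.permute(1, 0, 2, 3).unsqueeze(0)  # -> (batch=1, channels=1, time=16, H, W)

conv3d = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
features = conv3d(volume)                         # features computed across neighboring frames as well
print(features.shape)                             # torch.Size([1, 8, 16, 512, 512])
```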
In the above, a time series of frame images is processed by treating it as three-dimensional data including the time axis, but the present embodiment is not limited to this. For example, the server 1 may make it possible to detect objects from a plurality of consecutive frame images by configuring the first model 51 as a model combining a CNN and an RNN. In this case, for example, an LSTM (Long Short-Term Memory) layer is inserted after the intermediate layers of the CNN, and objects are detected with reference to the feature values extracted from the preceding and following frame images. In this case as well, the processing can be performed in consideration of the preceding and following frame images, and the detection accuracy can be improved.
Next, the second model 52 will be described. The second model 52 is a machine learning model that receives patient information and medical images as input and outputs treatment information for supporting endovascular treatment of the patient. For example, the second model 52 is a CNN, includes an intermediate layer in which convolutional layers and pooling layers are alternately connected, and extracts feature values of the input data.
In the present embodiment, the second model 52 is described as being a CNN, but it may be a model based on another learning algorithm such as a GAN, an RNN, an SVM, or a decision tree.
The server 1 performs learning using training data in which correct treatment information is attached to patient information and medical images for training. The patient information and medical images for training are those of patients who have already undergone treatment, namely the medical records of those patients and medical images such as tomographic images and fluoroscopic images obtained at the time of treatment. The correct treatment information is the record of the treatment (PCI) performed on the patient. The server 1 gives the patient information and medical images stored in the medical care DB 141 and the treatment information to the second model 52 as training data, and performs learning.
Specifically, the server 1 inputs data included in the patient information, such as the patient's age, sex, and diagnosis name, into the second model 52 as categorical variables indicating attributes of the medical images. The server 1 inputs the patient information as categorical variables together with the tomographic image and the fluoroscopic image into the second model 52, and acquires as output the treatment information for each of the items illustrated in the upper right of FIG. 4.
As shown in the upper right of FIG. 4, some output items of the second model 52, such as "lesion site" and "lesion properties", are expressed as classification results, while other items, such as "total contrast time" and "fluoroscopy time", are expressed as continuous values. The second model 52 thus handles a so-called classification problem and a regression problem at the same time. For example, all items may be treated as a regression problem so that continuous values are output, and the items to be shown as classification results may be converted from continuous values to binary values. Alternatively, a separate output layer may be provided for each item, and the feature values extracted in the intermediate layer may be input to each output layer so that each item is estimated separately. Alternatively, a separate second model 52 may be prepared for each item and the estimations performed separately.
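A hedged sketch of such a multimodal second model: an image encoder for the tomographic or fluoroscopic input, the patient information concatenated as a categorical/numeric feature vector, and separate heads for one classification item and one regression item. The feature sizes, item names, and two-head structure are illustrative assumptions, not the patented implementation:

```python
import torch
import torch.nn as nn

class TreatmentInfoModel(nn.Module):
    def __init__(self, num_patient_features=8, num_lesion_sites=5):
        super().__init__()
        self.image_encoder = nn.Sequential(          # extracts features from the medical image
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        fused = 32 + num_patient_features
        self.lesion_site_head = nn.Linear(fused, num_lesion_sites)  # classification item
        self.fluoro_time_head = nn.Linear(fused, 1)                 # regression item (continuous value)

    def forward(self, image, patient_vec):
        x = torch.cat([self.image_encoder(image), patient_vec], dim=1)
        return self.lesion_site_head(x), self.fluoro_time_head(x)

# Example: one 512x512 frame plus an 8-dimensional encoded patient-information vector
model = TreatmentInfoModel()
site_logits, fluoro_time = model(torch.rand(1, 1, 512, 512), torch.rand(1, 8))
```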
The server 1 may use the medical images obtained by imaging the patient's blood vessel as-is as input to the second model 52; in the present embodiment, however, it generates a second medical image in which the object areas detected by the first model 51 are processed so as to be identifiable, and inputs it into the second model 52. The second medical image is an image that displays the object areas in display modes that differ depending on the type of the object, for example, an image in which the label image output from the first model 51 is superimposed on the original medical image.
For example, the server 1 processes the label image output from the first model 51 into a semi-transparent mask and superimposes it on the original medical image. In this case, the server 1 makes the display mode of each object area differ depending on the type of the object, for example by changing the display color of the mask according to the object type. The server 1 thereby generates a medical image that displays the object areas in a display mode different from that of the other areas, and inputs it into the second model 52.
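A minimal sketch of such a semi-transparent, color-per-object-type overlay using NumPy (the color table and alpha value are illustrative assumptions):

```python
import numpy as np

# Assumed color per object type: 1 = lesion (red), 2 = therapeutic device (green)
COLORS = {1: (255, 0, 0), 2: (0, 255, 0)}
ALPHA = 0.4   # mask transparency

def make_second_medical_image(gray_image, label_image):
    """Blend a semi-transparent colored mask for each object area onto a grayscale medical image."""
    rgb = np.stack([gray_image] * 3, axis=-1).astype(np.float32)
    for object_type, color in COLORS.items():
        region = label_image == object_type
        rgb[region] = (1 - ALPHA) * rgb[region] + ALPHA * np.array(color, dtype=np.float32)
    return rgb.astype(np.uint8)

# Example: a dummy 512x512 frame with a small lesion area labeled 1
frame = np.full((512, 512), 128, dtype=np.uint8)
labels = np.zeros((512, 512), dtype=np.uint8)
labels[200:260, 300:360] = 1
overlay = make_second_medical_image(frame, labels)
```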
The server 1 inputs the patient information for training and the medical image with the processed object areas into the second model 52, and acquires the treatment information as output. The server 1 compares the output treatment information with the correct treatment information and optimizes parameters such as the weights between neurons so that the two approximate each other. The server 1 thereby generates the second model 52.
In the present embodiment, two kinds of models are combined, the first model 51 for object detection and the second model 52 for generating treatment information, but the object detection may instead be performed on a rule basis. That is, the configuration in which the learning model 50 includes the first model 51 is not essential; objects may be detected on a rule basis as preprocessing for the input to the learning model 50, and the detection result may be input to the learning model 50.
When generating treatment information for a patient who is to undergo new treatment, the server 1 acquires medical images of the patient from the diagnostic imaging system 2, inputs them into the learning model 50, and generates treatment information including the second medical image. This processing may be performed in real time at the time of image acquisition, or recorded medical images (moving images) may be acquired collectively and processed afterwards.
The server 1 outputs the generated treatment information to the display device 23 and causes it to be displayed. In the present embodiment, the output destination of the treatment information is described as being the diagnostic imaging system 2, but the treatment information may of course be output to and displayed on a device other than the diagnostic imaging system 2 (for example, a personal computer).
FIG. 6 is an explanatory diagram showing an example of a display screen of the display device 23. The display device 23 displays each item of treatment information output from the server 1 and presents it to the user. For example, the display device 23 displays a list 71, a progress information field 72, a tomographic image 73, and a fluoroscopic image 74.
The list 71 is a table listing the estimation results for each item output from the second model 52. The list 71 includes, for example, information about the lesion, information about the therapeutic devices, information about the imaging conditions of the fluoroscopic image, and other information. The display device 23 presents these items of treatment information to the user in the list 71 to support the endovascular treatment.
The progress information field 72 is a display field showing the estimation result of the patient's post-treatment course, which, as described above, is the estimation result regarding the onset of complications. Using the learning model 50, the server 1 calculates a probability value of onset for each of a plurality of complications, and displays the probability values in a list in the progress information field 72 in association with the complication names.
The display device 23 also displays the tomographic image 73 generated by the intravascular diagnostic imaging device 21 and the fluoroscopic image 74 captured by the fluoroscopic image capturing device 22. In this case, the display device 23 displays, as the tomographic image 73 and the fluoroscopic image 74, the second medical images in which the object areas of the original medical images have been processed.
FIG. 7 is a flowchart showing the procedure for generating the learning model 50. The processing performed when learning the training data and generating the learning model 50 will be described with reference to FIG. 7.
The control unit 11 of the server 1 acquires training data in which correct treatment information is attached to patient information and medical images for training (step S11). The patient information and medical images for training are those of patients who have already undergone treatment. The treatment information is information about the treatment performed on each patient, and includes, in addition to the treatment records stored in the medical care DB 141, label data indicating the object areas.
Based on the training data, the control unit 11 generates the first model 51, which detects objects when a medical image is input (step S12). For example, as described above, the control unit 11 generates a CNN for semantic segmentation as the first model 51. The control unit 11 inputs a training medical image into the first model 51 and obtains, as output, the detection result for the object regions. The control unit 11 compares the detected object regions with the correct label data and optimizes parameters such as the weights between neurons so that the two approximate each other, thereby generating the first model 51.
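As an illustration of step S12, the following sketch shows how such a segmentation CNN could be trained on the labeled images; the dataset helper `TrainingImageDataset`, the network class `SegmentationCNN`, the class count, and all hyperparameters are hypothetical placeholders rather than anything prescribed by this disclosure.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# Hypothetical helpers (not part of this disclosure): a dataset yielding
# (image, label_mask) pairs and a U-Net-like segmentation CNN.
from my_dataset import TrainingImageDataset
from my_models import SegmentationCNN

def train_first_model(epochs: int = 20, lr: float = 1e-4) -> nn.Module:
    dataset = TrainingImageDataset("training_images/", "label_masks/")
    loader = DataLoader(dataset, batch_size=8, shuffle=True)

    model = SegmentationCNN(in_channels=1, num_classes=3)  # e.g. background / lesion / stent (assumed)
    criterion = nn.CrossEntropyLoss()          # compares the prediction with the correct label data
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for _ in range(epochs):
        for image, mask in loader:
            optimizer.zero_grad()
            pred = model(image)                # per-pixel class scores
            loss = criterion(pred, mask)       # optimize so prediction and label approximate each other
            loss.backward()
            optimizer.step()                   # update the weights between neurons
    return model
```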
Based on the training data, the control unit 11 also generates the second model 52, which outputs treatment information for supporting treatment of the patient when patient information and a medical image are input (step S13). Specifically, the control unit 11 generates the second model 52 so that it outputs treatment information when given the patient information and a second medical image in which the object regions have been processed based on the detection result of the first model 51. The control unit 11 inputs the training patient information and a second medical image processed according to the correct label data into the second model 52, and obtains the treatment information as output. The control unit 11 compares the output treatment information with the correct treatment information and optimizes parameters such as the weights between neurons so that the two approximate each other, thereby generating the second model 52. The control unit 11 then ends the series of processes.
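The second model 52 could, for example, be realized as a small multimodal network that fuses an image branch with a patient-information branch; the class below is a minimal sketch under that assumption, and its layer sizes and output head are illustrative only.

```python
import torch
import torch.nn as nn

class SecondModel(nn.Module):
    """Hypothetical multimodal network: a CNN branch for the processed (second)
    medical image and a fully connected branch for tabular patient information,
    fused to predict treatment-information items."""
    def __init__(self, num_patient_features: int, num_outputs: int):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.patient_branch = nn.Sequential(
            nn.Linear(num_patient_features, 32), nn.ReLU(),
        )
        # Output head covering, e.g., device type, imaging conditions, complication risk (assumed encoding).
        self.head = nn.Linear(32 + 32, num_outputs)

    def forward(self, image, patient_info):
        fused = torch.cat([self.image_branch(image), self.patient_branch(patient_info)], dim=1)
        return self.head(fused)
```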
FIG. 8 is a flowchart showing the procedure for outputting treatment information. The processing performed when outputting treatment information for a patient to be treated using the learning model 50 will be described with reference to FIG. 8.
The control unit 11 acquires patient information about the patient to be treated and medical images in which the blood vessels of the patient are imaged (step S31). The control unit 11 inputs the acquired medical images into the first model 51 and detects objects (step S32). Specifically, the control unit 11 inputs the tomographic image and the fluoroscopic image imaged by the intravascular diagnostic imaging apparatus 21 and the fluoroscopic imaging apparatus 22 into the first model 51, and detects the objects in the tomographic image and the fluoroscopic image.
The control unit 11 inputs the patient information and a second medical image, in which the object regions have been processed based on the detection result of step S32, into the second model 52 to generate treatment information (step S33). The control unit 11 outputs the generated treatment information to the display device 23 and displays it (step S34). Specifically, the control unit 11 outputs, as the treatment information, information such as the lesion in the blood vessel, the therapeutic device to be used, and the imaging conditions of the fluoroscopic image, together with the second medical image in which the object regions have been processed, and displays them on the display device 23. The control unit 11 then ends the series of processes.
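Steps S32 and S33 together form a simple two-stage pipeline; a possible arrangement is sketched below, where the way the object regions are highlighted to form the second medical image (here, brightening masked pixels) is one assumed processing choice among many.

```python
import torch

def generate_treatment_information(model1, model2, medical_image, patient_info):
    """Hypothetical inference pipeline for steps S32-S33: detect object regions
    with the first model, mark them to form the second medical image, then feed
    the processed image and the patient information to the second model."""
    with torch.no_grad():
        object_mask = model1(medical_image).argmax(dim=1, keepdim=True)   # step S32: per-pixel class
        second_image = medical_image.clone()
        second_image[object_mask > 0] = second_image.max()                # highlight detected regions (assumption)
        treatment_info = model2(second_image, patient_info)               # step S33
    return second_image, treatment_info
```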
As described above, according to the first embodiment, treatment based on medical images can be suitably supported by using the learning model 50 trained on the training data.
According to the first embodiment, the position, properties, and the like of a lesion in the blood vessel (luminal organ) can also be estimated and presented to the user.
According to the first embodiment, the type, shape, number, and order of use of the therapeutic devices to be used can also be estimated and presented to the user.
According to the first embodiment, suitable imaging conditions for the fluoroscopic image can also be estimated and presented.
According to the first embodiment, post-treatment progress information, such as the onset of complications, can also be estimated and presented to the user.
Further, according to the first embodiment, combining the first model 51, which detects objects in medical images, with the second model 52, which generates treatment information based on the detection result of the first model 51, allows the correct position, shape, and the like of each object to be given to the second model 52, improving the estimation accuracy of the treatment information.
According to the first embodiment, generating the second medical image in which the object regions are processed also suitably assists confirmation of the lesion and other objects.
According to the first embodiment, inputting a plurality of frame images generated in time series into the learning model 50 also improves the estimation accuracy of the treatment information.
Further, according to the first embodiment, using not only tomographic images of the inside of the blood vessel (luminal organ) but also fluoroscopic images as the medical images gives the learning model 50 an image of the entire blood vessel in addition to local images, improving the estimation accuracy.
(Modification 1)
In the first embodiment, a mode in which the treatment information is output using the learning model 50 has been described. In this modification, a mode will be described in which a correction input for correcting the output treatment information is received from the user and re-learning is performed based on the corrected treatment information.
FIG. 9 is a flowchart showing the procedure for outputting treatment information according to Modification 1. After outputting the treatment information (step S34), the server 1 executes the following processing.
The control unit 11 of the server 1 receives, from the user, a correction input for the treatment information output to the display device 23 (step S35). For example, on the display screen illustrated in FIG. 6, the control unit 11 receives a correction input for the information of each item displayed in the list 71. In addition, if the coordinate range of an object region, the type of an object, or the like in the second medical image displayed as the tomographic image 73 differs from the actual one, the server 1 accepts input of the correct coordinate range, type, and so on.
When a correction input for the treatment information is received, the control unit 11 performs re-learning using, as training data, the patient information and medical images input to the learning model 50 together with the corrected treatment information, and updates the learning model 50 (step S36). That is, the control unit 11 optimizes parameters such as the weights between neurons so that the treatment information output from the learning model 50 approximates the corrected treatment information, and regenerates the learning model 50. The control unit 11 then ends the series of processes.
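The re-learning of step S36 amounts to a short fine-tuning pass toward the corrected answer; the sketch below assumes a regression-style loss and a small learning rate, neither of which is specified by this disclosure.

```python
import torch
import torch.nn as nn

def relearn_on_correction(model2, patient_info, second_image, corrected_info,
                          lr: float = 1e-5, steps: int = 50):
    """Hypothetical re-learning step (S36): fine-tune the second model so its
    output approaches the user-corrected treatment information. Loss choice,
    learning rate, and step count are illustrative assumptions."""
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model2.parameters(), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        predicted = model2(second_image, patient_info)
        loss = criterion(predicted, corrected_info)   # approximate the corrected answer
        loss.backward()
        optimizer.step()
    return model2
```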
As described above, according to Modification 1, the learning model 50 can be optimized through operation of the present system.
(Modification 2)
In the first embodiment, a mode was described in which objects are detected by the first model 51 and a second medical image in which the object regions have been processed is input to the second model 52. In this modification, a mode will be described in which the type, dimensions, and the like of an object are identified from the detection result, and object information indicating the identified type, dimensions, and the like is used as an input to the second model 52.
FIG. 10 is an explanatory diagram of the learning model 50 according to Modification 2. As in the first embodiment, the learning model 50 according to this modification includes the first model 51 and the second model 52. In this modification, the server 1 performs image analysis on the object regions detected by the first model 51 and identifies the type, dimensions, and the like of each object.
For example, the server 1 identifies the type, dimensions, and the like of a stent already placed in the blood vessel of the patient. In this modification, the object to be identified is assumed to be a stent, but it may be another object such as a lesion. The server 1 performs image analysis on the object region detected as a stent, identifies the name of the indwelling stent, and also determines the diameter, length, and the like of the stent. The server 1 inputs the identified stent type, dimensions, and other data into the second model 52 as object information, and generates the treatment information.
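One conceivable way to turn a detected stent region into object information is to measure its extent in the segmentation mask; the helper below is a rough sketch that assumes a known pixel spacing and a roughly axis-aligned vessel, neither of which is stated in this disclosure.

```python
import numpy as np

def extract_object_info(stent_mask: np.ndarray, mm_per_pixel: float) -> dict:
    """Hypothetical measurement step: derive rough dimensions of a detected
    stent from its binary segmentation mask via a bounding-box approximation."""
    ys, xs = np.nonzero(stent_mask)
    if ys.size == 0:
        return {"present": False}
    length_px = xs.max() - xs.min() + 1      # extent along the vessel axis (assumed horizontal)
    diameter_px = ys.max() - ys.min() + 1    # extent across the vessel
    return {
        "present": True,
        "length_mm": float(length_px * mm_per_pixel),
        "diameter_mm": float(diameter_px * mm_per_pixel),
    }
```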
In this case, the server 1 may input not only the object information (text data) but also the original medical image input to the first model 51 into the second model 52. Alternatively, as in the first embodiment, the server 1 may also input the second medical image in which the object regions have been processed into the second model 52.
As described above, in this modification, image analysis is performed as preprocessing for the second model 52, and the object information is identified and input to the second model 52. As a result, data such as the type and dimensions of each object can be given to the second model 52, improving the estimation accuracy.
Since this modification is the same as the first embodiment except that image analysis is performed as preprocessing for the second model 52, a flowchart and other detailed descriptions are omitted here.
(Embodiment 2)
This embodiment relates to endovascular treatment and, in particular, describes a mode for supporting the placement and dilation of a stent. Content overlapping with the first embodiment is given the same reference numerals, and its description is omitted.
FIG. 11 is an explanatory diagram of the learning model 50 according to the second embodiment. As in the first embodiment, the learning model 50 according to this embodiment includes the first model 51 and the second model 52, and generates a second medical image in which the object regions are processed as well as other treatment information. As shown in the upper right of FIG. 11, the treatment information includes stent information regarding the stent to be placed in the blood vessel of the patient, and the type, shape (diameter, length), and the like of the stent to be used are output.
In this embodiment, the learning model 50 further includes a third model 53. The third model 53 is a machine learning model that, when a pre-treatment medical image is input, estimates the target position in the blood vessel at which the stent should be placed and the target dilation diameter. Specifically, the third model 53 takes a medical image as input and detects the image region in which the stent should be placed and dilated. In addition to the stent information generated by the second model 52, the server 1 uses the third model 53 to detect the image region in which the stent should be placed and dilated, generates a second medical image indicating the detected region, and displays it on the display device 23 as part of the stent information.
In the following description, for convenience, the region in which the stent should be placed and dilated is referred to as the "target region".
For example, the server 1 uses Mask R-CNN as the third model 53. As described above, Mask R-CNN is a CNN that detects a target image region in an input image and can identify the target region on a pixel-by-pixel basis. For example, the server 1 generates the third model 53 using, as training data, fluoroscopic images of patients who have already been treated, to which labels indicating the coordinate range of the stent placed in each patient's blood vessel have been assigned.
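Where Mask R-CNN is chosen for the third model 53, an off-the-shelf implementation could be configured as follows; the torchvision backbone, the two-class setup, and the highest-score selection policy are assumptions made for illustration and are not prescribed by this disclosure.

```python
import torch
import torchvision

def build_third_model(num_classes: int = 2):
    """A possible realization of the third model 53: a torchvision Mask R-CNN
    configured for two classes (background / target region)."""
    return torchvision.models.detection.maskrcnn_resnet50_fpn(
        weights=None, num_classes=num_classes
    )

def detect_target_region(model, second_image: torch.Tensor):
    """Run detection on one processed fluoroscopic frame (float tensor of shape
    (3, H, W)) and return the highest-scoring box as the target region."""
    model.eval()
    with torch.no_grad():
        output = model([second_image])[0]          # list of one image -> one result dict
    if len(output["boxes"]) == 0:
        return None
    best = output["scores"].argmax()
    return output["boxes"][best]                   # [x_min, y_min, x_max, y_max]
```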
The third model 53 is not limited to Mask R-CNN, and may be another CNN such as U-net, or another machine learning model such as a GAN.
For example, as shown in FIG. 11, the third model 53 detects the target region as a rectangular bounding box. The medical image to be processed may be a tomographic image as well as a fluoroscopic image. Further, the target region only needs to correctly indicate the region in which the stent should be placed and dilated with a balloon, and its shape is not limited to a rectangle.
The server 1 may input the fluoroscopic image acquired from the fluoroscopic imaging apparatus 22 into the third model 53 as it is, but as shown in FIG. 11, it is preferable to input a second medical image in which object regions such as the lesion have been processed by the first model 51. This allows the third model 53 to determine the target region in consideration of the position, shape, and the like of the lesion to be treated.
As in Modification 2, the type and dimensions of an object such as the lesion may be identified from the second medical image, and the identified object information may be given to the third model 53.
For example, the server 1 superimposes a bounding box indicating the target region detected by the third model 53 on the second medical image in which the object region corresponding to the lesion has been processed, thereby generating a second medical image that simultaneously shows the lesion and the range to be dilated by the stent. The server 1 may also generate, as the second medical image, an image in which only the target region is enclosed. The server 1 outputs the generated second medical image to the display device 23 as the fluoroscopic image 74.
In this embodiment, the first model 51 for detecting objects such as the lesion and the third model 53 for detecting the target region in which the stent should be placed and dilated are prepared separately, but the two may be the same model, and object detection and target-region detection may be performed simultaneously.
FIGS. 12 and 13 are explanatory diagrams showing display examples of the second medical image according to the second embodiment. In this embodiment, the display device 23 displays a second medical image showing the target region in which the stent should be placed and dilated, and also displays the stent currently inserted in the blood vessel so that it can be identified in the second medical image, thereby supporting stent placement and dilation. FIGS. 12A to 13B illustrate, in time series, how the stent is inserted into the blood vessel, reaches the target position, and is dilated.
FIGS. 12A and 12B show how the stent is inserted as far as the target region. In the fluoroscopic image 74, the display device 23 displays the object region 741 and the target region 742 so that they can be identified, for example by color coding. As described above, the object region 741 corresponds to the lesion, and the target region 742 corresponds to the region in the blood vessel where the stent 743 should be placed.
The display device 23 further displays the stent 743 inserted in the blood vessel and a rectangular stent region 744 representing the current position of the stent 743. The server 1 detects at least the current position and the current dilation diameter of the stent (hereinafter referred to as the "current diameter") from the fluoroscopic image acquired from the fluoroscopic imaging apparatus 22, and displays them so that they can be identified, for example by color coding.
The stent 743 may be detected using the first model 51, or by image recognition based on pattern matching.
The server 1 further calculates the difference between the detected current position of the stent 743 and the target position of the stent 743 indicated by the target region 742, and displays it at the upper left of the fluoroscopic image 74. The current position of the stent 743 may be, for example, the midpoint of the elongated stent 743 or the tip of the stent 743; any point on the stent 743 can be detected as the current position. The target position of the stent 743 may be, for example, the center of gravity of the rectangular target region 742, or the midpoint of the short side of the target region 742 on the side opposite the stent 743; any point in the target region 742 can be set as the target position. The server 1 sequentially detects the stent 743 from the fluoroscopic images, calculates the difference between the current position and the target position, and displays it on the display device 23.
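Under the choices named above (stent midpoint as current position, target-region centroid as target position), the difference value could be computed roughly as follows; the pixel spacing is an assumed calibration input.

```python
import numpy as np

def position_difference(stent_mask: np.ndarray, target_box, mm_per_pixel: float) -> float:
    """Hypothetical distance readout: midpoint of the stent pixels vs. centroid
    of the target bounding box, reported in millimeters."""
    ys, xs = np.nonzero(stent_mask)
    current = np.array([xs.mean(), ys.mean()])                      # midpoint of the stent pixels
    x_min, y_min, x_max, y_max = target_box
    target = np.array([(x_min + x_max) / 2, (y_min + y_max) / 2])   # centroid of the target region
    return float(np.linalg.norm(current - target) * mm_per_pixel)
```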
Based on the difference between the current position and the target position, the server 1 determines whether the stent 743 has reached the target position. When it determines that the target position has been reached, the server 1 calculates an expansion rate by dividing the current diameter of the stent 743 by the target dilation diameter indicated by the target region 742, and displays it at the upper left of the fluoroscopic image 74. The target dilation diameter is the width of the target region 742 in the direction orthogonal to the longitudinal direction of the blood vessel, that is, the length of the short side of the rectangular target region 742.
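The expansion rate described here is a simple ratio; a sketch, assuming the rectangular target region's short side approximates the target dilation diameter:

```python
def expansion_rate(current_diameter_mm: float, target_box, mm_per_pixel: float) -> float:
    """Expansion rate as described: current diameter divided by the target
    dilation diameter, the latter taken as the short side of the rectangular
    target region (an assumed approximation of the cross-vessel width)."""
    x_min, y_min, x_max, y_max = target_box
    target_diameter_mm = min(x_max - x_min, y_max - y_min) * mm_per_pixel
    return current_diameter_mm / target_diameter_mm
```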
FIGS. 13A and 13B show the state from when the stent 743 reaches the target position until the dilation is completed. When the stent 743 reaches the target position, the display device 23 switches the display at the upper left of the fluoroscopic image 74 to the expansion rate. The server 1 sequentially detects the current diameter of the stent 743, calculates the expansion rate, and displays it on the display device 23.
FIG. 14 is a flowchart showing the procedure for generating the learning model 50 according to the second embodiment.
The control unit 11 of the server 1 acquires training data for generating the learning model 50 (step S201). The training data according to this embodiment includes medical images of patients who have already been treated, label data indicating object regions such as lesions, and label data indicating the image region (target region) of the stent placed in the blood vessel. The control unit 11 then moves the processing to step S12.
After executing the processing of step S13, the control unit 11 generates the third model 53, which detects the target region in which the stent should be placed and dilated when a medical image is input (step S202). Specifically, as described above, the control unit 11 generates a Mask R-CNN as the third model 53. For example, the control unit 11 inputs, into the third model 53, a second medical image in which the object regions of a training medical image have been processed according to the label data, and obtains the detection result for the target region as output. The control unit 11 compares the obtained target region with the correct label data and optimizes parameters such as the weights between neurons so that the two approximate each other, thereby generating the third model 53. The control unit 11 then ends the series of processes.
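If the third model 53 is realized as a torchvision-style Mask R-CNN, step S202 could take the usual detection training form shown below; the data loader yielding image/target pairs and the optimizer settings are assumptions, not part of this disclosure.

```python
import torch

def train_third_model(model, loader, epochs: int = 10, lr: float = 5e-4):
    """Hypothetical training loop for step S202: in training mode a torchvision
    Mask R-CNN returns a dict of losses given the ground-truth boxes and masks
    of the stent (target region)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, targets in loader:     # targets: list of dicts with "boxes", "labels", "masks"
            loss_dict = model(images, targets)
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```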
FIG. 15 is a flowchart showing the procedure for outputting treatment information according to the second embodiment. After generating the treatment information using the second model 52 (step S33), the server 1 executes the following processing.
The control unit 11 inputs the pre-treatment medical image of the patient into the third model 53 to detect the target region in which the stent should be placed (step S221). Specifically, as described above, the control unit 11 inputs the second medical image (fluoroscopic image) generated using the first model 51 into the third model 53 and detects the target region.
The control unit 11 detects the stent inserted in the blood vessel of the patient from the medical image (step S222). The control unit 11 generates a second medical image showing the detected stent and the target region, and outputs it to the display device 23 together with the treatment information generated in step S33 (step S223).
The control unit 11 calculates the difference between the current position of the stent and the target position indicated by the target region, and outputs it to the display device 23 (step S224). Based on this difference, the control unit 11 determines whether the stent has reached the target position (step S225). If it determines that the target position has not been reached (S225: NO), the control unit 11 returns the processing to step S224. If it determines that the target position has been reached (S225: YES), the control unit 11 calculates the expansion rate from the current diameter of the stent and the target dilation diameter, and outputs it to the display device 23 (step S226).
The control unit 11 determines whether dilation to the target dilation diameter has been completed (step S227). If it determines that dilation is not complete (S227: NO), the control unit 11 returns the processing to step S226. If it determines that dilation is complete (S227: YES), the control unit 11 ends the series of processes.
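Steps S224 to S227 can be read as a per-frame guidance loop; the sketch below reuses the hypothetical helpers introduced earlier, and the frame source, stent-detection routine, and arrival/completion thresholds are illustrative assumptions.

```python
def guide_stent(frames, detect_stent, target_box, mm_per_pixel, tol: float = 0.95):
    """Illustrative loop over fluoroscopic frames: report the position gap until
    the stent reaches the target, then report the expansion rate until it exceeds
    an assumed completion threshold (steps S224-S227)."""
    reached = False
    for frame in frames:                                   # frames arrive in time series
        stent_mask, current_diameter_mm = detect_stent(frame)   # placeholder detection routine
        if not reached:
            gap = position_difference(stent_mask, target_box, mm_per_pixel)       # step S224
            print(f"distance to target: {gap:.1f} mm")
            reached = gap < 1.0                            # assumed arrival threshold (S225)
        else:
            rate = expansion_rate(current_diameter_mm, target_box, mm_per_pixel)  # step S226
            print(f"expansion rate: {rate:.0%}")
            if rate >= tol:                                # assumed completion criterion (S227)
                break
```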
In the above description, the learning model 50 automatically predicts the type, shape, and the like of the stent to be used and then predicts its target position, dilation conditions, and so on; however, this embodiment is not limited to this. For example, the server 1 may receive from the user a designation input specifying the type and shape of the stent to be used for endovascular treatment, input information on the designated stent (first stent information) into the learning model 50, and predict information such as the target position and dilation conditions for that stent (second stent information). For example, the server 1 inputs the first stent information designated by the user into the third model 53 together with the medical image, and detects the target region for the case in which the designated stent is used. This makes it possible to support stent placement and dilation in accordance with the user's wishes.
As described above, according to the second embodiment, stent placement and dilation during endovascular treatment can be suitably supported.
(Modification 3)
In Modification 1, a mode was described in which a correction input is received after the treatment information is output and re-learning is performed. Similarly, in the second embodiment, a correction input may be received after the stent information is output and re-learning may be performed.
FIG. 16 is a flowchart showing the procedure for outputting treatment information according to Modification 3. After outputting the treatment information including the stent information (step S223), the server 1 executes the following processing.
The control unit 11 of the server 1 receives a correction input for the output stent information (step S241). For example, the control unit 11 receives an input for correcting the type, shape, and the like of the stent displayed as the stent information. The control unit 11 also receives an input for correcting the coordinate range of the target region in the second medical image. The control unit 11 then moves the processing to step S224.
When it determines that dilation of the stent is complete (S227: YES), the control unit 11 performs re-learning using, as training data, the original medical image input to the learning model 50 together with the corrected stent information, and updates the learning model 50 (step S242). That is, the control unit 11 optimizes parameters such as the weights between neurons so that the stent information output from the learning model 50 approximates the corrected stent information, and regenerates the learning model 50. The control unit 11 then ends the series of processes.
As described above, according to Modification 3, the learning model 50 can be optimized through operation of the present system.
The embodiments disclosed here should be considered illustrative in all respects and not restrictive. The scope of the present invention is indicated not by the above description but by the claims, and is intended to include all modifications within the meaning and scope equivalent to the claims.
1 Server (information processing device)
11 Control unit
12 Main storage unit
13 Communication unit
14 Auxiliary storage unit
P Program
141 Medical treatment DB
50 Learning model
51 First model
52 Second model
53 Third model
2 Diagnostic imaging system
21 Intravascular diagnostic imaging apparatus
22 Fluoroscopic imaging apparatus
23 Display device
Claims (19)
- A program causing a computer to execute processing of:
acquiring patient information about a patient to be treated and a medical image in which a luminal organ of the patient is imaged; and
inputting the acquired patient information and medical image into a model trained to output, when the patient information and a medical image are input, treatment information for supporting the treatment to be performed on the patient, and outputting the treatment information.
- The program according to claim 1, which outputs the treatment information regarding a lesion in the luminal organ.
- The program according to claim 1 or 2, which outputs the treatment information regarding a therapeutic device to be inserted into the luminal organ.
- The program according to any one of claims 1 to 3, wherein the treatment is endovascular treatment, and the program outputs the treatment information regarding a stent to be placed in the blood vessel.
- The program according to claim 4, which outputs the treatment information regarding a procedure to be performed before or after placement of the stent.
- The program according to any one of claims 1 to 5, wherein the treatment is endovascular treatment, and the program outputs the treatment information regarding a balloon that dilates the blood vessel.
- The program according to any one of claims 1 to 6, wherein the medical image includes an X-ray fluoroscopic image of the luminal organ, and the program outputs the treatment information regarding imaging conditions of the fluoroscopic image.
- The program according to any one of claims 1 to 7, which outputs, as the treatment information, progress information in which the post-treatment progress of the patient is estimated.
- The program according to claim 8, which outputs the progress information regarding complications that may develop after the treatment.
- The program according to any one of claims 1 to 9, which:
inputs the acquired medical image into a first model trained to detect a predetermined object in the medical image when the medical image is input, and detects the object; and
inputs the detection result of the object by the first model and the acquired patient information into a second model trained to output the treatment information when the detection result of the object and the patient information are input, and outputs the treatment information.
- The program according to claim 10, which:
generates, based on the detection result of the object by the first model, a second medical image in which the image region corresponding to the object in the medical image is displayed in a display mode different from other image regions; and
inputs the generated second medical image into the second model and outputs the treatment information.
- The program according to claim 10, which:
identifies, based on the detection result of the object by the first model, object information indicating the type and dimensions of the object; and
inputs the object information into the second model and outputs the treatment information.
- The program according to any one of claims 1 to 12, which acquires a plurality of the medical images generated along the longitudinal direction of the luminal organ, inputs the plurality of medical images into the model, and outputs the treatment information.
- The program according to any one of claims 1 to 13, which acquires a tomographic image generated based on a signal detected by a catheter inserted into the luminal organ and a fluoroscopic image of the luminal organ, inputs the tomographic image and the fluoroscopic image into the model, and outputs the treatment information.
- The program according to any one of claims 1 to 14, wherein the medical image includes at least one of an ultrasound tomographic image, an optical coherence tomographic image, and an X-ray fluoroscopic image of the luminal organ.
- The program according to any one of claims 1 to 15, which receives a correction input for correcting the output treatment information, performs re-learning based on the patient information and medical image input to the model and the corrected treatment information, and updates the model.
- An information processing method causing a computer to execute processing of:
acquiring patient information about a patient to be treated and a medical image in which a luminal organ of the patient is imaged; and
inputting the acquired patient information and medical image into a model trained to output, when the patient information and a medical image are input, treatment information for supporting the treatment to be performed on the patient, and outputting the treatment information.
- An information processing device comprising:
an acquisition unit that acquires patient information about a patient to be treated and a medical image in which a luminal organ of the patient is imaged; and
an output unit that inputs the acquired patient information and medical image into a model trained to output, when the patient information and a medical image are input, treatment information for supporting the treatment to be performed on the patient, and outputs the treatment information.
- A model generation method in which a computer executes processing of:
acquiring training data in which treatment information regarding treatment already performed on a patient is assigned to patient information about the patient and a medical image in which a luminal organ of the patient is imaged; and
generating, based on the training data, a trained model that outputs the treatment information when the patient information and the medical image are input.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022509533A JPWO2021193018A1 (en) | 2020-03-27 | 2021-03-09 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020058993 | 2020-03-27 | ||
JP2020-058993 | 2020-03-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021193018A1 true WO2021193018A1 (en) | 2021-09-30 |
Family
ID=77891481
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/009296 WO2021193018A1 (en) | 2020-03-27 | 2021-03-09 | Program, information processing method, information processing device, and model generation method |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2021193018A1 (en) |
WO (1) | WO2021193018A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016516466A (en) * | 2013-03-12 | 2016-06-09 | ライトラボ・イメージング・インコーポレーテッド | Blood vessel data processing and image registration system, method and device |
JP2017503561A (en) * | 2013-12-18 | 2017-02-02 | ハートフロー, インコーポレイテッド | System and method for predicting coronary plaque vulnerability from patient-specific anatomical image data |
WO2019063575A1 (en) * | 2017-09-28 | 2019-04-04 | Koninklijke Philips N.V. | Guiding an intravascular us catheter |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016516466A (en) * | 2013-03-12 | 2016-06-09 | ライトラボ・イメージング・インコーポレーテッド | Blood vessel data processing and image registration system, method and device |
JP2017503561A (en) * | 2013-12-18 | 2017-02-02 | ハートフロー, インコーポレイテッド | System and method for predicting coronary plaque vulnerability from patient-specific anatomical image data |
WO2019063575A1 (en) * | 2017-09-28 | 2019-04-04 | Koninklijke Philips N.V. | Guiding an intravascular us catheter |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115018768A (en) * | 2022-05-16 | 2022-09-06 | 中国人民解放军空军军医大学 | Automatic evaluation system for stent neointimal coverage based on OCT platform |
Also Published As
Publication number | Publication date |
---|---|
JPWO2021193018A1 (en) | 2021-09-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11741613B2 (en) | Systems and methods for classification of arterial image regions and features thereof | |
CN114126491B (en) | Assessment of coronary artery calcification in angiographic images | |
WO2021193019A1 (en) | Program, information processing method, information processing device, and model generation method | |
WO2022071121A1 (en) | Information processing device, information processing method, and program | |
WO2022071181A1 (en) | Information processing device, information processing method, program, and model generation method | |
WO2021193018A1 (en) | Program, information processing method, information processing device, and model generation method | |
WO2021193024A1 (en) | Program, information processing method, information processing device and model generating method | |
US20230012527A1 (en) | Program, information processing method, information processing apparatus, and model generation method | |
CN116309346A (en) | Medical image detection method, device, equipment, storage medium and program product | |
WO2022209657A1 (en) | Computer program, information processing method, and information processing device | |
WO2021193022A1 (en) | Information processing device, information processing method, and program | |
WO2021193021A1 (en) | Program, information processing method, information processing device, and model generation method | |
WO2021199967A1 (en) | Program, information processing method, learning model generation method, learning model relearning method, and information processing system | |
WO2021199966A1 (en) | Program, information processing method, training model generation method, retraining method for training model, and information processing system | |
JP7561833B2 (en) | COMPUTER PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS | |
US20220028079A1 (en) | Diagnosis support device, diagnosis support system, and diagnosis support method | |
US20240005459A1 (en) | Program, image processing method, and image processing device | |
WO2021199960A1 (en) | Program, information processing method, and information processing system | |
WO2023100838A1 (en) | Computer program, information processing device, information processing method, and training model generation method | |
WO2024071322A1 (en) | Information processing method, learning model generation method, computer program, and information processing device | |
JP2023130134A (en) | Program, information processing method, and information processing device | |
JP2022142607A (en) | Program, image processing method, image processing device, and model generation method | |
JP2024025980A (en) | Program, information processing device, information processing method, information processing system and generation method of learning model | |
JP2023112551A (en) | Program, information processing method, information processing device, and catheter system | |
JP2024142137A (en) | PROGRAM, IMAGE PROCESSING METHOD AND IMAGE PROCESSING APPARATUS |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21776228 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022509533 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21776228 Country of ref document: EP Kind code of ref document: A1 |