WO2021193021A1 - Program, information processing method, information processing device, and model generation method - Google Patents


Info

Publication number
WO2021193021A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
treatment
image
input
device information
Prior art date
Application number
PCT/JP2021/009301
Other languages
French (fr)
Japanese (ja)
Inventor
陽 井口
悠介 関
雄紀 坂口
Original Assignee
テルモ株式会社 (Terumo Corporation)
Priority date
Filing date
Publication date
Application filed by テルモ株式会社 (Terumo Corporation)
Priority to JP2022509536A (JPWO2021193021A1)
Publication of WO2021193021A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/12: Devices for detecting or locating foreign bodies
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • the present invention relates to a program, an information processing method, an information processing device, and a model generation method.
  • Treatment is performed based on medical images that visualize the inside of the human body, such as ultrasonic images, optical coherence tomography (OCT) images, and X-ray images. Various techniques for processing medical images have been proposed so that observers can preferably observe them.
  • Patent Document 1 discloses an ultrasonic diagnostic apparatus that displays an ultrasonic image of a lesion portion of a subject, generates an image in which an ultrasonic image before treatment and an ultrasonic image after treatment are superimposed, and displays the treated area and the untreated area of the lesion portion in different colors.
  • However, Patent Document 1 merely superimposes the image acquired before the treatment and the image acquired after the treatment, and does not generate a predicted image of the state after the treatment.
  • One aspect of the present disclosure is to provide a program or the like that can obtain a medical image predicting the state after treatment from a medical image taken before treatment.
  • In one aspect, a program causes a computer to acquire a first medical image of a patient's luminal organ before treatment and device information relating to a therapeutic device used for the treatment of the luminal organ, and to input the acquired first medical image and device information into a model trained to generate a second medical image predicting the state of the luminal organ after the treatment.
  • FIG. 1 is an explanatory diagram showing a configuration example of the treatment support system. FIG. 2 is a block diagram showing a configuration example of the server. FIG. 3 is an explanatory diagram showing an outline of Embodiment 1. FIG. 4 is an explanatory diagram of the generative model. FIG. 5 is an explanatory diagram showing an example of a display screen of a medical image. FIG. 6 is an explanatory diagram of the treatment stages of endovascular treatment. FIG. 7 is a flowchart showing the procedure of the process that generates the generation model. FIG. 8 is a flowchart showing the procedure of the second medical image generation process. FIG. 9 is an explanatory diagram of the generative model according to Embodiment 2. A further flowchart shows the procedure of the second medical image generation process according to Embodiment 3.
  • FIG. 1 is an explanatory diagram showing a configuration example of a treatment support system.
  • In the following, a medical image of the blood vessel acquired before treatment is referred to as a first medical image, and a medical image that predicts the state of the blood vessel after treatment is referred to as a second medical image.
  • the treatment support system includes an information processing device 1 and a diagnostic imaging system 2.
  • The information processing device 1 and the diagnostic imaging system 2 are connected for communication via a network N such as a LAN (Local Area Network) or the Internet.
  • The target luminal organ is not limited to a blood vessel, and may be another luminal organ such as a bile duct, pancreatic duct, bronchus, or intestine.
  • the diagnostic imaging system 2 includes an intravascular diagnostic imaging device 21, a fluoroscopic imaging device 22, and a display device 23.
  • The intravascular diagnostic imaging device 21 is a device for imaging intravascular tomographic images of a patient, for example, an IVUS (Intravascular Ultrasound) device that performs an ultrasonic examination using a catheter 211. The catheter 211 is a medical device that is inserted into the patient's blood vessel and comprises an imaging core that transmits ultrasonic waves and receives the reflected waves from within the blood vessel.
  • the intravascular diagnostic imaging apparatus 21 generates an ultrasonic tomographic image based on the signal of the reflected wave received by the catheter 211 and displays it on the display device 23.
  • In the present embodiment, the intravascular diagnostic imaging apparatus 21 generates an ultrasonic tomographic image, but an optical coherence tomographic image may instead be captured by an optical method such as OCT or OFDI (Optical Frequency Domain Imaging).
  • The fluoroscopic image capturing device 22 is a device for capturing a fluoroscopic image of the inside of the patient, for example, an angiography device that performs angiography examinations.
  • the fluoroscopic image capturing device 22 includes an X-ray source 221 and an X-ray sensor 222, and the X-ray sensor 222 receives the X-rays emitted from the X-ray source 221 to capture an X-ray fluoroscopic image of the patient.
  • In the present embodiment, ultrasonic tomography, optical coherence tomography, and angiography are given as examples of medical images, but the medical image may also be a computed tomography (CT) image, a magnetic resonance imaging (MRI) image, or the like.
  • The information processing device 1 is an information processing device capable of various kinds of information processing and of transmitting and receiving information, such as a server computer or a personal computer. In the present embodiment, the information processing device 1 is a server computer, and is hereinafter referred to as the server 1 for brevity.
  • The server 1 may be a local server installed in the same facility (hospital or the like) as the diagnostic imaging system 2, or may be a cloud server connected via the Internet or the like.
  • The server 1 functions as a generator that generates a second medical image from the first medical image (an ultrasonic tomographic image and an X-ray fluoroscopic image) generated by the diagnostic imaging system 2, and outputs the generated second medical image to the diagnostic imaging system 2.
  • Specifically, the server 1 performs machine learning on predetermined training data in advance, and prepares a generation model 50 that generates a second medical image when the first medical image and device information are input (see FIG. 3, etc.).
  • the device information will be described later.
  • the server 1 acquires the first medical image and device information from the diagnostic imaging system 2 and inputs them to the generation model 50 to generate the second medical image and display it on the display device 23.
  • In the present embodiment, the second medical image is generated on the server 1, which is separate from the diagnostic imaging system 2; however, the generation model 50 produced by the server 1 through machine learning may be installed in the diagnostic imaging system 2 so that the diagnostic imaging system 2 itself can generate the second medical image.
  • FIG. 2 is a block diagram showing a configuration example of the server 1.
  • the server 1 includes a control unit 11, a main storage unit 12, a communication unit 13, and an auxiliary storage unit 14.
  • The control unit 11 has one or more arithmetic processing units such as CPUs (Central Processing Units), MPUs (Micro-Processing Units), or GPUs (Graphics Processing Units), and performs various information processing, control processing, and the like by reading and executing the program P stored in the auxiliary storage unit 14.
  • The main storage unit 12 is a temporary storage area such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory, and temporarily stores data necessary for the control unit 11 to execute arithmetic processing.
  • the communication unit 13 is a communication module for performing processing related to communication, and transmits / receives information to / from the outside.
  • the auxiliary storage unit 14 is a non-volatile storage area such as a large-capacity memory or a hard disk, and stores a program P and other data necessary for the control unit 11 to execute processing. Further, the auxiliary storage unit 14 stores the generation model 50.
  • The generative model 50 is a machine learning model trained on training data as described above, and is a model that generates a second medical image when a first medical image and device information are input.
  • the generation model 50 is expected to be used as a program module constituting artificial intelligence software.
  • the auxiliary storage unit 14 may be an external storage device connected to the server 1. Further, the server 1 may be a multi-computer composed of a plurality of computers, or may be a virtual machine virtually constructed by software.
  • The server 1 is not limited to the above configuration, and may include, for example, an input unit that accepts operation input, a display unit that displays images, and the like. Further, the server 1 may be provided with a reading unit for reading a portable storage medium 1a such as a CD (Compact Disc), DVD (Digital Versatile Disc), or USB (Universal Serial Bus) memory, and may read and execute the program P from the portable storage medium 1a. Alternatively, the server 1 may read the program P from a semiconductor memory 1b.
  • FIG. 3 is an explanatory diagram showing an outline of the first embodiment.
  • FIG. 3 conceptually illustrates how a second medical image is generated from the first medical image and the device information (text information) using the generative model 50. An outline of the present embodiment will be described with reference to FIG. 3.
  • The generation model 50 is a machine learning model that generates a second medical image predicting the state after treatment when the first medical image, acquired by the diagnostic imaging system 2 before the patient's treatment, and the device information about the therapeutic device used for the treatment are input.
  • The first medical image is an ultrasonic tomographic image and an X-ray fluoroscopic image acquired by the intravascular diagnostic imaging apparatus 21 and the fluoroscopic image capturing device 22, respectively, in which the patient's blood vessel is imaged before treatment.
  • the device information is information related to a device used for endovascular treatment, for example, information related to a stent, a balloon, etc. used for PCI (Percutaneous Coronary Intervention).
  • For example, the device information includes the length, diameter, type, and placement position of the stent to be placed in the patient's blood vessel, as well as the balloon type, the diameter after expansion by the balloon (hereinafter referred to as the "expansion diameter"), the expansion pressure, and the expansion time.
  • the therapeutic device is not limited to the stent and the balloon, and may be another device such as a rotablator.
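As a concrete sketch, the device-information items listed above could be held in a simple record structure; the field names and units below are illustrative assumptions, not the patent's actual data format:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: field names and units are assumptions.
@dataclass
class DeviceInfo:
    stent_length_mm: float
    stent_diameter_mm: float
    stent_type: str
    placement_position: str          # e.g. the lesion site specified on the image
    balloon_type: Optional[str] = None
    expansion_diameter_mm: Optional[float] = None
    expansion_pressure_atm: Optional[float] = None
    expansion_time_s: Optional[float] = None

info = DeviceInfo(
    stent_length_mm=18.0,
    stent_diameter_mm=3.0,
    stent_type="drug-eluting",
    placement_position="lesion segment",
    balloon_type="non-compliant",
    expansion_diameter_mm=3.25,
)
```

Optional items (e.g. balloon parameters when balloon expansion is omitted) default to absent, matching the point that not every device is used in every procedure.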
  • the second medical image is an image that predicts the state in the blood vessel after the treatment using the therapeutic device, and is an ultrasonic tomographic image and an X-ray fluoroscopic image after the treatment.
  • the server 1 may generate only one of the ultrasonic tomographic image and the X-ray perspective image as the second medical image, or may generate three or more types of images.
  • the server 1 learns the training data including the first medical image and device information for training and the second medical image, and generates the generation model 50.
  • The training data are, for example, data of patients who have undergone endovascular treatment, and include the pre-treatment ultrasonic tomographic image and X-ray fluoroscopic image (first medical image), the device information about the therapeutic device used to treat the patient, and the post-treatment ultrasonic tomographic image and X-ray fluoroscopic image.
  • the server 1 performs machine learning using these data as training data, and generates a generation model 50 in advance.
  • The training data are not limited to actual patient data, and may include virtual data augmented by a data generation means such as a GAN (Generative Adversarial Network).
  • At the time of treating a patient, the server 1 acquires the first medical image and device information from the diagnostic imaging system 2, generates the post-treatment second medical image, and displays it on the display device 23. Specifically, during the ultrasonic examination using the catheter 211, the server 1 sequentially acquires, from the intravascular diagnostic imaging apparatus 21, a plurality of ultrasonic tomographic images (frame images) captured continuously along the longitudinal direction of the blood vessel. Further, the server 1 acquires, from the fluoroscopic image capturing device 22, an X-ray fluoroscopic image taken at the same time as the ultrasonic examination.
  • The server 1 sequentially inputs the plurality of continuous ultrasonic tomographic images, together with the X-ray fluoroscopic image captured at the time each tomographic image was acquired and the device information, into the generation model 50, and sequentially generates ultrasonic tomographic images and X-ray fluoroscopic images that predict the state after treatment.
  • an ultrasonic tomographic image and an X-ray fluoroscopic image of a blood vessel after a stent is placed and expanded by a balloon are generated.
  • the portion corresponding to the stent is shown by hatching.
  • the server 1 predicts the state of the blood vessel after the treatment using the generative model 50 and presents it as a second medical image.
  • In the present embodiment, the second medical image is generated at the time of treatment and output to the diagnostic imaging system 2; however, a recorded first medical image may be input to the generation model 50 after the fact to generate the second medical image.
  • FIG. 4 is an explanatory diagram of the generative model 50.
  • GAN is used as the generative model 50.
  • The GAN includes a generator that generates output data from input data and a discriminator that discriminates the authenticity of the data generated by the generator, and builds a network through competitive learning between the generator and the discriminator.
  • the generator related to GAN accepts random noise (latent variable) input and generates output data.
  • The discriminator learns to discriminate authenticity by using the true data given for learning and the data generated by the generator.
  • the network is constructed so that the loss function of the generator is finally minimized and the loss function of the discriminator is maximized.
  • The generator of the generation model 50 includes encoders that convert the input data into a latent variable and decoders that generate output data from the latent variable, and generates the second medical image from the first medical image and the device information.
  • the generator of the generative model 50 includes three encoders that accept inputs of three types of input data: an ultrasonic tomographic image before treatment, an X-ray fluoroscopic image, and device information.
  • the generator also comprises two decoders that generate a post-treatment ultrasound tomographic image and a fluoroscopic image.
  • The generative model 50 further includes two discriminators for discriminating the authenticity of the data output from each decoder.
  • The generator extracts the features of the input data with each encoder, inputs the latent variable combining the features of each input into each decoder, and generates an ultrasonic tomographic image and an X-ray fluoroscopic image that predict the state after treatment.
  • The two discriminators discriminate the authenticity of the ultrasonic tomographic image and the X-ray fluoroscopic image, respectively.
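The encoder/decoder arrangement above can be sketched with toy linear maps; all dimensions, weights, and function names are illustrative assumptions, not the patent's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the three encoders and two decoders.
# All sizes are illustrative.
D_US, D_XRAY, D_DEV, D_LAT = 16, 16, 4, 6

enc_us   = rng.normal(0, 0.1, (D_LAT, D_US))    # encoder: pre-treatment ultrasound
enc_xray = rng.normal(0, 0.1, (D_LAT, D_XRAY))  # encoder: pre-treatment X-ray
enc_dev  = rng.normal(0, 0.1, (D_LAT, D_DEV))   # encoder: device information
dec_us   = rng.normal(0, 0.1, (D_US, 3 * D_LAT))    # decoder: post-treatment ultrasound
dec_xray = rng.normal(0, 0.1, (D_XRAY, 3 * D_LAT))  # decoder: post-treatment X-ray

def generate(us_pre, xray_pre, device_vec):
    """Extract features with each encoder, combine them into one latent
    variable, and decode the two predicted post-treatment images."""
    z = np.concatenate([enc_us @ us_pre, enc_xray @ xray_pre, enc_dev @ device_vec])
    return dec_us @ z, dec_xray @ z

us_post, xray_post = generate(np.ones(D_US), np.ones(D_XRAY), np.ones(D_DEV))
```

The key structural point is the concatenation: each input contributes a feature block, and both decoders read the same combined latent variable.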
  • The server 1 performs learning using the first medical image and device information given for training together with the second medical image, and generates the generation model 50. For example, the server 1 first fixes the parameters (weights and the like) of the generator, inputs the first medical image and device information for training into the generator, and generates a second medical image. The server 1 then treats the second medical image generated by the generator as fake data, gives the second medical image for training to the discriminator as true data, and optimizes the parameters of the discriminator. Next, the server 1 fixes the parameters of the discriminator at their optimized values and trains the generator. The server 1 optimizes the parameters of the generator so that, when the second medical image generated by the generator is input to the discriminator, the estimated probability of authenticity approaches 50%. The server 1 thereby generates the generative model 50. When actually generating a second medical image from a first medical image, only the generator is used.
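The alternating procedure above (discriminator step with the generator fixed, then generator step with the discriminator fixed) can be illustrated with minimal numeric stand-ins; this is a sketch under assumed toy models, not the disclosed implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-ins: a linear "generator" mapping (pre-treatment image +
# device vector) to a predicted image, and a logistic "discriminator".
D_IMG, D_DEV = 8, 3
Wg = rng.normal(0, 0.1, (D_IMG, D_IMG + D_DEV))   # generator weights
wd = rng.normal(0, 0.1, D_IMG)                    # discriminator weights
lr = 0.01

def generator(x_img, x_dev):
    return Wg @ np.concatenate([x_img, x_dev])

def discriminator(img):
    return sigmoid(wd @ img)   # estimated probability the image is real

for step in range(200):
    x_img = rng.normal(size=D_IMG)      # pre-treatment image (toy)
    x_dev = rng.normal(size=D_DEV)      # device information (toy)
    real = x_img + 0.5                  # stand-in for the true post-treatment image

    # Step 1: generator fixed; train the discriminator to score the true
    # post-treatment image as real (1) and the generated one as fake (0).
    fake = generator(x_img, x_dev)
    for img, label in ((real, 1.0), (fake, 0.0)):
        p = discriminator(img)
        wd += lr * (label - p) * img    # logistic-regression gradient step

    # Step 2: discriminator fixed; train the generator so the discriminator's
    # authenticity estimate for the generated image increases.
    fake = generator(x_img, x_dev)
    p = discriminator(fake)
    Wg += lr * np.outer((1.0 - p) * wd, np.concatenate([x_img, x_dev]))
```

At equilibrium the discriminator can no longer separate real from generated data, which is the 50% criterion described above; at inference time only `generator` is called.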
  • The generative model 50 is not limited to a GAN, and may be a model based on another neural network such as a VAE (Variational Autoencoder) or a CNN (for example, U-Net), or on another learning algorithm.
  • the network structure of the generation model 50 shown in FIG. 4 is an example, and the present embodiment is not limited to this.
  • For example, the ultrasonic tomographic image and the X-ray fluoroscopic image may be combined and treated as a single image, so that both images are processed simultaneously by one encoder and one decoder.
  • Alternatively, instead of preparing an encoder for extracting the features of the device information, the device information (text data) may be encoded by a means such as One-hot encoding and mapped (combined) directly into the latent space.
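The One-hot alternative can be sketched as follows; the category list and dimensions are illustrative assumptions:

```python
import numpy as np

# Sketch: the categorical part of the device information is one-hot encoded
# and concatenated with the image latent variable, with no device encoder.
STENT_TYPES = ["bare-metal", "drug-eluting", "bioresorbable"]  # illustrative

def one_hot(value, categories):
    vec = np.zeros(len(categories))
    vec[categories.index(value)] = 1.0
    return vec

image_latent = np.zeros(4)                       # features from the image encoders (toy)
device_code = one_hot("drug-eluting", STENT_TYPES)
z = np.concatenate([image_latent, device_code])  # device info mapped straight into latent space
```

The decoder then consumes `z` exactly as it would a fully learned latent variable.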
  • various changes can be considered in the network structure of the generative model 50.
  • Further, as illustrated in FIG. 3, the server 1 preferably generates a second medical image in which an object of interest to the user, such as a stent, is displayed in a display mode different from that of other image areas. Specifically, the server 1 uses images in which the image area corresponding to the object, such as a stent, is labeled (processed) by color coding, edge enhancement, or the like. This allows the server 1 to generate a second medical image in which the user can identify the object of interest.
  • the labeled object is not limited to a therapeutic device such as a stent, and may be another object such as a lesion (plaque or the like) in a blood vessel or a lumen boundary.
  • Further, the server 1 may input lesion information regarding the lesion portion in the blood vessel into the generation model 50.
  • the lesion information includes, for example, the type of lesion (plaque, calcified tissue, vascular stenosis, etc.), position, and properties (hardness of lesion, etc.).
  • For example, the server 1 also inputs the lesion information into the encoder corresponding to the device information and converts it into a latent variable.
  • the generative model 50 can generate a second medical image while also referring to the nature and state of the lesion.
  • FIG. 5 is an explanatory diagram showing an example of a display screen of a medical image.
  • FIG. 5 is an example of a screen displayed by the display device 23, and shows an example of a screen displaying a first medical image and a second medical image. The processing flow executed at the time of endovascular treatment will be described with reference to FIG.
  • The display device 23 displays the ultrasonic tomographic image and X-ray fluoroscopic image of the current blood vessel, corresponding to the first medical image, as the current tomographic image 501 and the current fluoroscopic image 502. Further, in order to accept designated input of the device information, the display device 23 displays designation fields 503, 503, 503... in the menu bar on the left side of the screen. In addition, the display device 23 displays a status column 504 on the menu bar for accepting designated input of the current treatment stage.
  • the display device 23 accepts a designated input of device information on the screen. Specifically, the display device 23 accepts the operation input for drawing a rectangular frame on the current fluoroscopic image 502, thereby accepting the designated input of the site (position of the lesion portion) in the blood vessel to be treated. Further, the display device 23 accepts designation inputs such as the length and diameter of the stent and the expansion diameter by the balloon in each designation field 503.
  • In FIG. 5, the device information for each item is input individually in the plurality of designation fields 503, but the items may also be specified all at once.
  • the display device 23 accepts the designated input of the treatment stage of the current endovascular treatment via the status column 504.
  • the server 1 generates a second medical image according to the treatment stage specified in the status column 504 and outputs it to the display device 23.
  • FIG. 6 is an explanatory diagram regarding the treatment stage of endovascular treatment.
  • FIG. 6 conceptually illustrates a general treatment flow when performing PCI.
  • PCI is carried out by the following procedure.
  • a guide wire for guiding the catheter 211 is inserted into the patient's blood vessel.
  • the catheter 211 is inserted to obtain an ultrasonic tomographic image, and the condition of the lesion is confirmed together with the fluoroscopic image.
  • Next, if necessary, the blood vessel is dilated with a balloon before placement so that the stent can be placed. Balloon dilation before stent placement may be omitted.
  • a stent is placed in the blood vessel and the lesion is expanded with a balloon.
  • the ultrasonic image is acquired again, and the condition of the lesion after expansion is confirmed together with the fluoroscopic image.
  • the blood vessel is dilated again by the balloon. Balloon expansion after stent placement may be omitted.
  • the condition of the lesion is confirmed on each image, and a series of treatments is completed.
  • In the following, balloon dilation before stent placement is referred to as "pre-dilation", and balloon dilation after stent placement is referred to as "post-dilation".
  • In accordance with the pre-dilation, stent expansion, and post-dilation described above, the server 1 uses the generation model 50 to generate, at each treatment stage, a post-treatment second medical image from the pre-treatment first medical image.
  • Specifically, the server 1 configures the generative model 50 as a model into which a class label of the input data can be input, such as a Conditional GAN.
  • As the label data indicating the class of the first medical image, a value is input that indicates whether the image was taken before pre-dilation, before stent expansion, or before post-dilation.
  • the server 1 generates a second medical image after any of the treatments according to the input label data. As a result, the user can confirm the second medical image that predicts the state after the treatment to be performed step by step.
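The stage conditioning described above can be sketched in the spirit of a Conditional GAN, with the class label appended to the generator input; stage names and sizes are illustrative assumptions:

```python
import numpy as np

# Sketch: a one-hot treatment-stage label conditions the generator, so one
# model can predict the state after any of the three treatment steps.
STAGES = ["before_pre_dilation", "before_stent_expansion", "before_post_dilation"]

def stage_label(stage):
    vec = np.zeros(len(STAGES))
    vec[STAGES.index(stage)] = 1.0
    return vec

def conditioned_input(first_image, stage):
    """Concatenate the (flattened, toy) first medical image with the label."""
    return np.concatenate([first_image, stage_label(stage)])

x = conditioned_input(np.zeros(8), "before_stent_expansion")
```

During training the label reflects the stage of each training pair; at inference the user's selection in the status column would pick the label.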
  • FIG. 5 illustrates an example of a display screen when the stent is expanded.
  • The server 1 generates a second medical image after stent expansion according to the treatment stage specified in the status column 504. That is, the server 1 inputs the ultrasonic tomographic image generated by the intravascular diagnostic imaging apparatus 21 before stent expansion, the X-ray fluoroscopic image captured by the fluoroscopic image capturing device 22, and the device information specified by the user into the generation model 50, and generates an ultrasonic tomographic image and an X-ray fluoroscopic image that predict the state after stent expansion. In this case, for example, the server 1 generates, as the second medical image, an ultrasonic tomographic image and an X-ray fluoroscopic image in which the image region corresponding to the stent is shown in a display mode different from that of other regions.
  • the server 1 outputs the generated second medical image and displays it on the display device 23.
  • the display device 23 displays the ultrasonic tomographic image and the X-ray perspective image corresponding to the second medical image as the predicted tomographic image 505 and the predicted perspective image 506.
  • FIG. 5 shows two-dimensional ultrasonic tomographic and X-ray fluoroscopic images as the second medical image, but the server 1 may reconstruct the two-dimensional ultrasonic tomographic images and the X-ray fluoroscopic image into a three-dimensional blood vessel image and display it on the display device 23.
  • Specifically, the server 1 sequentially inputs the plurality of ultrasonic tomographic images (cross-sectional images) sequentially acquired from the intravascular diagnostic imaging apparatus 21 and the X-ray fluoroscopic image into the generation model 50, and generates, as the second medical image, a plurality of ultrasonic tomographic images and an X-ray fluoroscopic image that predict the state after treatment.
  • Then, the server 1 aligns each ultrasonic tomographic image with the X-ray fluoroscopic image according to the position of the catheter 211 detected by means such as a radiopaque marker during the ultrasonic examination, and converts them into a three-dimensional image by a known method. Thereby, the state of the blood vessel after treatment can be expressed in three dimensions and presented to the user.
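The stacking step of that three-dimensional conversion can be sketched as follows; frame sizes and positions are illustrative, and the marker-based alignment is abstracted into a per-frame axial position:

```python
import numpy as np

# Sketch: predicted cross-sectional frames are ordered by the catheter
# position observed during pullback and stacked along the vessel axis.
frames = [np.full((64, 64), float(i)) for i in range(10)]  # predicted tomograms (toy)
positions_mm = np.array([4.5, 0.0, 1.0, 3.5, 2.0, 2.5, 1.5, 3.0, 0.5, 4.0])

order = np.argsort(positions_mm)                 # align frames along the axis
volume = np.stack([frames[i] for i in order], axis=0)
```

A renderer could then display `volume` as a three-dimensional blood vessel image; real systems would additionally interpolate between slices and correct for catheter motion.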
  • In the above description, one pattern of designated input of device information is accepted and one pattern of second medical image is displayed; however, designated inputs of a plurality of patterns of device information may be accepted, and a plurality of second medical images corresponding to those patterns may be displayed.
  • the server 1 accepts designated inputs of a plurality of stents as candidates for stents used for treatment, separately generates a second medical image when each stent is used, and displays the preview on the display device 23. Thereby, the convenience of the user can be improved.
  • FIG. 7 is a flowchart showing the procedure of the generation process of the generation model 50. Based on FIG. 7, the processing content when the generation model 50 is generated by machine learning will be described.
  • The control unit 11 of the server 1 acquires training data on patients who have undergone treatment, including the first medical image, which is the medical image before treatment, the device information about the therapeutic device used for the treatment, and the second medical image, which is the medical image after treatment (step S11). Specifically, for patients who have undergone endovascular treatment, the control unit 11 acquires the pre-treatment ultrasonic tomographic image and X-ray fluoroscopic image as the first medical image. Further, the control unit 11 acquires information on the stent, balloon, and the like used for the endovascular treatment as the device information, and acquires the post-treatment ultrasonic tomographic image and X-ray fluoroscopic image as the second medical image.
  • Based on the training data, the control unit 11 generates a generation model 50 that generates a second medical image when the first medical image and device information are input (step S12). Specifically, the control unit 11 generates a GAN that takes the pre-treatment ultrasonic tomographic image, the X-ray fluoroscopic image, and the device information as input and outputs the post-treatment ultrasonic tomographic image and X-ray fluoroscopic image. The control unit 11 then ends the series of processes.
  • FIG. 8 is a flowchart showing a procedure for generating a second medical image. Based on FIG. 8, the processing content when generating the second medical image will be described.
  • the control unit 11 of the server 1 acquires device information regarding a therapeutic device used for treating a patient's luminal organ from the diagnostic imaging system 2 (step S31). Further, the control unit 11 acquires the first medical image from the diagnostic imaging system 2 (step S32). Specifically, as described above, the control unit 11 acquires an ultrasonic tomographic image of the patient to be treated and an X-ray fluoroscopic image taken at the same time as the ultrasonic examination.
  • The control unit 11 determines whether or not pre-dilation before stent placement is necessary in response to an operation input from the user (step S33). When it is determined that pre-dilation is not necessary (S33: NO), the control unit 11 shifts the process to step S36. When it is determined that pre-dilation is necessary (S33: YES), the control unit 11 inputs the first medical image acquired in step S32 into the generation model 50, generates the second medical image after pre-dilation, and outputs it to the display device 23 (step S34).
  • The control unit 11 determines whether or not pre-dilation is completed in response to an operation input from the user (step S35). When it is determined that pre-dilation is not completed (S35: NO), the control unit 11 waits. When it is determined that pre-dilation is completed (S35: YES), the control unit 11 generates the second medical image after stent expansion and outputs it to the display device 23 (step S36).
  • The control unit 11 determines whether or not post-dilation is necessary in response to an operation input from the user (step S37). When it is determined that post-dilation is not necessary (S37: NO), the control unit 11 ends the series of processes. When it is determined that post-dilation is necessary (S37: YES), the control unit 11 reacquires the first medical image (step S38). The control unit 11 inputs the acquired first medical image into the generation model 50, generates the second medical image after post-dilation, and outputs it to the display device 23 (step S39). The control unit 11 then ends the series of processes.
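The branching of FIG. 8 (steps S31 to S39) can be summarized as a control-flow sketch; the function names are illustrative, and `generate` stands in for inference with the generation model 50:

```python
# A control-flow sketch of the FIG. 8 procedure; names are assumptions.
def treatment_support(get_first_image, device_info, generate,
                      need_pre_dilation, need_post_dilation):
    outputs = []
    image = get_first_image()                                           # S31-S32
    if need_pre_dilation:                                               # S33
        outputs.append(generate(image, device_info, "pre_dilation"))    # S34
    # S35: in the real flow, wait here until pre-dilation is completed
    outputs.append(generate(image, device_info, "stent_expansion"))     # S36
    if need_post_dilation:                                              # S37
        image = get_first_image()                                       # S38: reacquire
        outputs.append(generate(image, device_info, "post_dilation"))   # S39
    return outputs

stages = treatment_support(lambda: "first_image", {"stent": "A"},
                           lambda img, dev, stage: stage, True, True)
```

With both dilation steps requested, the sketch yields predictions for pre-dilation, stent expansion, and post-dilation in order, mirroring the flowchart.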
  • According to the first embodiment, it is possible to generate a second medical image predicting the post-treatment state from the pre-treatment first medical image and present it to the user.
  • According to the first embodiment, it is possible to generate and present to the user a second medical image after each of pre-expansion, stent expansion, and post-expansion in accordance with the course of the endovascular treatment.
  • The device information can preferably be specified by accepting a designation input of the target portion on the fluoroscopic image.
  • It is also possible to provide a three-dimensional representation of the post-treatment state reconstructed from a plurality of ultrasonic tomographic images (cross-sectional images) sequentially generated using the generative model 50 and an X-ray fluoroscopic image.
  • The generation accuracy of the second medical image can be improved by using lesion information in addition to the device information as input to the generative model 50.
  • FIG. 9 is an explanatory diagram of the generative model 50 according to the second embodiment.
  • The generative model 50 according to the present embodiment does not include an encoder and decoder for processing the X-ray fluoroscopic image; it has only two encoders that accept input of the pre-treatment ultrasonic tomographic image and the device information, respectively, a decoder that generates the post-treatment ultrasonic tomographic image, and a discriminator that judges the authenticity of the ultrasonic tomographic image generated by the decoder.
  • The server 1 generates the generative model 50 shown in FIG. 9 using, as the first and second medical images for training, the pre- and post-treatment ultrasonic tomographic images of patients who have undergone endovascular treatment.
  • FIG. 10 is an explanatory diagram showing an example of a display screen of a medical image according to the second embodiment.
  • The display device 23 displays the current tomographic image 501 corresponding to the first medical image, as in the first embodiment.
  • The display device 23 further displays a longitudinal tomographic image 521 reconstructed from a plurality of ultrasonic tomographic images (cross-sectional images).
  • The longitudinal tomographic image 521 is a longitudinal cross-sectional image obtained by reconstructing, along the longitudinal direction of the blood vessel, a plurality of cross-sectional images continuously captured by the intravascular image diagnostic apparatus 21.
  • The server 1 reconstructs the longitudinal tomographic image 521 from a plurality of cross-sectional images (frame images) continuously captured along the longitudinal direction of the blood vessel as the catheter 211 is scanned, and displays it on the display device 23.
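  • A minimal sketch of this reconstruction, under the illustrative assumption that the frames are stacked as a NumPy array and the longitudinal section is taken through the middle row of each frame (in practice the cut would pass through the catheter center line):

```python
import numpy as np

def longitudinal_view(frames: np.ndarray) -> np.ndarray:
    """frames: (n_frames, height, width) cross-sectional images stacked
    along the vessel axis. Returns an (n_frames, width) longitudinal
    section taken through the middle row of each frame."""
    mid = frames.shape[1] // 2
    return frames[:, mid, :]  # one pixel row per frame
```

Taking one such slice per angular position would yield a rotatable longitudinal view; the simplification here is labeled as such and is not the source's implementation.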
  • The display device 23 accepts a drawing input of a rectangular frame on the longitudinal tomographic image 521, thereby accepting a designation input of the portion of the blood vessel to be treated.
  • The display device 23 accepts designation input of the other device information in the designation field 503, as in the first embodiment.
  • The server 1 inputs the ultrasonic tomographic image corresponding to the first medical image, together with the device information, into the generative model 50 and generates an ultrasonic tomographic image corresponding to the second medical image. The server 1 then displays the generated ultrasonic tomographic image as the predicted tomographic image 505. Naturally, not only the cross-sectional image but also a longitudinal tomographic image may be displayed as the predicted tomographic image 505.
  • A plurality of ultrasonic tomographic images sequentially generated by the generative model 50 may also be reconstructed into a three-dimensional image and displayed on the display device 23.
  • The display device 23 accepts, on the display screen illustrated in FIG., user input of correction information for correcting the second medical image generated by the generative model 50. Specifically, the display device 23 accepts correction input for correcting the position, length, diameter (width), and so on of the stent (therapeutic device) displayed in color. For example, the display device 23 accepts a drawing input that newly draws the boundary (edge) of the image area corresponding to the stent on the predicted tomographic image 505 or the predicted fluoroscopic image 506. The display device 23 transmits the second medical image on which the user has drawn the stent image area to the server 1 as correction information and causes the server 1 to perform re-learning.
  • In the above, input of correction information for the second medical image after stent expansion is accepted, but input of correction information for the second medical image after pre-expansion or post-expansion may also be accepted.
  • The server 1 performs re-learning based on the correction information acquired from the display device 23 and updates the generative model 50. Specifically, the server 1 gives the generative model 50, as training data for re-learning, the first medical image and device information that were input when the second medical image was generated, together with the second medical image on which the stent image area has been drawn, and updates the parameters of the generator and the discriminator.
  • FIG. 11 is a flowchart showing a procedure for generating a second medical image according to the third embodiment.
  • The server 1 executes the following processing.
  • The control unit 11 of the server 1 accepts input of correction information for the output second medical image (step S201). For example, the control unit 11 accepts correction input for correcting the position, length, diameter, and so on of the stent on the second medical image.
  • The control unit 11 moves the process to step S37.
  • After a NO determination in step S37 or after executing the process of step S39, the control unit 11 updates the generative model 50 based on the correction information input in step S201 (step S202). Specifically, the control unit 11 performs re-learning using, as training input data, the first medical image and device information input when the second medical image was generated in step S36, and, as training output data, the corrected version of the second medical image generated in step S36, and updates the parameters of the generator and the discriminator. The control unit 11 then ends the series of processes.
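  • As an illustrative sketch of the bookkeeping behind steps S201 and S202 (the class and method names are assumptions, not from the source): each user-corrected image is stored with the inputs that produced it and later replayed as a training target.

```python
# Hypothetical re-learning buffer for corrected second medical images.
class RelearningBuffer:
    def __init__(self):
        self.samples = []

    def add_correction(self, first_image, device_info, corrected_image):
        # S201: store the user-corrected output as a new ground-truth target
        self.samples.append((first_image, device_info, corrected_image))

    def make_training_batch(self):
        # S202: inputs are (first medical image, device info); the target is
        # the corrected image used to update generator/discriminator params
        inputs = [(img, dev) for img, dev, _ in self.samples]
        targets = [corr for _, _, corr in self.samples]
        return inputs, targets
```

The actual parameter update would then run the same adversarial training step as in the first embodiment, with these pairs as the training data.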
  • According to the third embodiment, the accuracy of generating the second medical image can be improved through the operation of this system.

Abstract

This program causes a computer to execute processing for acquiring a first medical image in which a hollow organ of a patient is imaged before treatment, together with device information associated with a treatment device used for treating the hollow organ, and for generating a second medical image showing the state after treatment by inputting the acquired first medical image and device information into a model trained to generate such a second medical image when given these inputs.

Description

Program, information processing method, information processing device, and model generation method
 The present invention relates to a program, an information processing method, an information processing device, and a model generation method.
 Treatment is performed based on medical images that visualize the inside of the human body, such as ultrasonic images, optical coherence tomography (OCT) images, and X-ray images. Accordingly, various techniques for processing medical images have been proposed so that observers can examine them effectively.
 For example, Patent Document 1 discloses an ultrasonic diagnostic apparatus that displays an ultrasonic image of a lesion of a subject, generates an image in which an ultrasonic image before treatment and an ultrasonic image after treatment are superimposed, and displays the treated and untreated areas of the lesion in different colors.
Japanese Unexamined Patent Publication No. 2008-119071
 However, the invention of Patent Document 1 merely superimposes an image acquired before treatment on an image acquired after treatment; it does not generate a post-treatment image.
 One aspect aims to provide a program or the like capable of obtaining a medical image that predicts the post-treatment state from a pre-treatment medical image.
 A program according to one aspect causes a computer to execute processing for acquiring a first medical image imaging a patient's luminal organ before treatment and device information related to a therapeutic device used for treating the luminal organ, and for generating a second medical image by inputting the acquired first medical image and device information into a model trained to generate a post-treatment second medical image when the first medical image and device information are input.
 In one aspect, it is possible to obtain a medical image that predicts the post-treatment state from the pre-treatment medical image.
FIG. 1 is an explanatory diagram showing a configuration example of the treatment support system. FIG. 2 is a block diagram showing a configuration example of the server. FIG. 3 is an explanatory diagram showing an outline of Embodiment 1. FIG. 4 is an explanatory diagram of the generative model. FIG. 5 is an explanatory diagram showing an example of a medical image display screen. FIG. 6 is an explanatory diagram of the treatment stages of endovascular treatment. FIG. 7 is a flowchart showing the procedure for generating the generative model. FIG. 8 is a flowchart showing the procedure for generating the second medical image. FIG. 9 is an explanatory diagram of the generative model according to Embodiment 2. FIG. 10 is an explanatory diagram showing an example of a medical image display screen according to Embodiment 2. FIG. 11 is a flowchart showing the procedure for generating the second medical image according to Embodiment 3.
Hereinafter, the present invention will be described in detail with reference to the drawings showing embodiments thereof.
(Embodiment 1)
FIG. 1 is an explanatory diagram showing a configuration example of a treatment support system. This embodiment describes a treatment support system that, based on a medical image of a patient's blood vessel captured before endovascular treatment (hereinafter, "first medical image") and device information on the therapeutic device used for the treatment, generates a medical image predicting the post-treatment state of the blood vessel (hereinafter, "second medical image"). The treatment support system includes an information processing apparatus 1 and a diagnostic imaging system 2, which are communicably connected to a network N such as a LAN (Local Area Network) or the Internet.
 Although endovascular treatment is described as an example in this embodiment, the target luminal organ is not limited to blood vessels and may be another luminal organ such as the bile duct, pancreatic duct, bronchus, or intestine.
 The diagnostic imaging system 2 includes an intravascular image diagnostic apparatus 21, a fluoroscopic imaging apparatus 22, and a display device 23. The intravascular image diagnostic apparatus 21 is an apparatus for imaging intravascular tomographic images of a patient, for example an IVUS (Intravascular Ultrasound) apparatus that performs ultrasonic examination using a catheter 211. The catheter 211 is a medical instrument inserted into the patient's blood vessel and includes an imaging core that transmits ultrasonic waves and receives the reflected waves from inside the vessel. The intravascular image diagnostic apparatus 21 generates an ultrasonic tomographic image based on the reflected-wave signal received by the catheter 211 and displays it on the display device 23.
 In this embodiment the intravascular image diagnostic apparatus 21 generates ultrasonic tomographic images, but optical coherence tomographic images may instead be captured by an optical method such as OCT or OFDI (Optical Frequency Domain Imaging).
 The fluoroscopic imaging apparatus 22 is an apparatus unit for capturing fluoroscopic images of the inside of the patient, for example an angiography apparatus for angiographic examination. The fluoroscopic imaging apparatus 22 includes an X-ray source 221 and an X-ray sensor 222; the X-ray sensor 222 receives the X-rays emitted from the X-ray source 221 to capture an X-ray fluoroscopic image of the patient.
 Although ultrasonic tomographic images, optical coherence tomographic images, and angiographic images are given above as examples of medical images, the medical images may also be computed tomography (CT) images, magnetic resonance imaging (MRI) images, or the like.
 The information processing apparatus 1 is an apparatus capable of various kinds of information processing and of transmitting and receiving information, for example a server computer or a personal computer. In this embodiment the information processing apparatus 1 is assumed to be a server computer and is hereinafter referred to as the server 1 for brevity. The server 1 may be a local server installed in the same facility (such as a hospital) as the diagnostic imaging system 2, or a cloud server connected via the Internet or the like. The server 1 functions as a generation apparatus that generates the second medical image from the first medical images (ultrasonic tomographic image and X-ray fluoroscopic image) produced by the diagnostic imaging system 2, and outputs the generated second medical image to the diagnostic imaging system 2.
 Specifically, as described later, the server 1 performs machine learning on predetermined training data in advance and has prepared a generative model 50 (see FIG. 3 and elsewhere) that takes the first medical image and device information as input and generates the second medical image. The device information is described later. The server 1 acquires the first medical image and device information from the diagnostic imaging system 2, inputs them into the generative model 50, generates the second medical image, and displays it on the display device 23.
 In this embodiment the second medical image is generated on the server 1, which is separate from the diagnostic imaging system 2; however, the generative model 50 produced by the server 1 through machine learning may be installed in the diagnostic imaging system 2 so that the second medical image can be generated there.
FIG. 2 is a block diagram showing a configuration example of the server 1. The server 1 includes a control unit 11, a main storage unit 12, a communication unit 13, and an auxiliary storage unit 14.
The control unit 11 has one or more arithmetic processing units such as CPUs (Central Processing Units), MPUs (Micro-Processing Units), or GPUs (Graphics Processing Units), and performs various kinds of information processing and control processing by reading and executing the program P stored in the auxiliary storage unit 14. The main storage unit 12 is a temporary storage area such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory, and temporarily stores the data the control unit 11 needs to execute its processing. The communication unit 13 is a communication module for processing related to communication and exchanges information with the outside.
 The auxiliary storage unit 14 is a non-volatile storage area such as a large-capacity memory or a hard disk and stores the program P and other data the control unit 11 needs to execute its processing. The auxiliary storage unit 14 also stores the generative model 50. The generative model 50 is a machine learning model trained on the training data as described above; it takes the first medical image and device information as input and generates the second medical image. The generative model 50 is expected to be used as a program module constituting artificial intelligence software.
 The auxiliary storage unit 14 may be an external storage device connected to the server 1. The server 1 may also be a multi-computer composed of a plurality of computers, or a virtual machine constructed by software.
 In this embodiment the server 1 is not limited to the above configuration and may include, for example, an input unit that accepts operation input and a display unit that displays images. The server 1 may also include a reading unit that reads a portable storage medium 1a such as a CD (Compact Disc), DVD (Digital Versatile Disc), or USB (Universal Serial Bus) memory, and may read and execute the program P from the portable storage medium 1a. Alternatively, the server 1 may read the program P from a semiconductor memory 1b.
 FIG. 3 is an explanatory diagram showing an outline of Embodiment 1. FIG. 3 conceptually illustrates how the second medical image is generated from the first medical image and text information using the generative model 50. The outline of this embodiment is described with reference to FIG. 3.
 The generative model 50 is a machine learning model that takes as input the first medical image acquired by the diagnostic imaging system 2 before the patient's treatment and device information on the therapeutic device used for the treatment, and generates a second medical image predicting the post-treatment state. The first medical images are the ultrasonic tomographic image and the X-ray fluoroscopic image acquired by the intravascular image diagnostic apparatus 21 and the fluoroscopic imaging apparatus 22, respectively, obtained by examining the patient's blood vessel before treatment.
 The device information is information related to the device used for endovascular treatment, for example information related to a stent, balloon, or the like used in PCI (Percutaneous Coronary Intervention). For example, the device information includes the length, diameter, type, and placement position of the stent to be placed in the patient's blood vessel, the balloon type, the diameter after balloon expansion (hereinafter, "expansion diameter"), the expansion pressure, and the expansion time. The therapeutic device is not limited to stents and balloons and may be another device such as a rotablator.
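 As an illustrative sketch, the device information enumerated above could be held in a simple structure such as the following; the field names are assumptions chosen for clarity and do not appear in the source.

```python
from dataclasses import dataclass

# Hypothetical container for the device information described above.
@dataclass
class DeviceInfo:
    stent_length_mm: float        # length of the stent to be placed
    stent_diameter_mm: float      # stent diameter
    stent_type: str               # type of stent
    placement_position: str       # where the stent is placed
    balloon_type: str             # type of balloon
    expansion_diameter_mm: float  # diameter after balloon expansion
    expansion_pressure_atm: float # expansion pressure
    expansion_time_s: float       # expansion time
```

Such a structure would be serialized (for example, as text) before being fed to the model's device-information input.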
 The second medical image is an image predicting the intravascular state after treatment with the therapeutic device, namely a post-treatment ultrasonic tomographic image and X-ray fluoroscopic image. The server 1 may generate only one of the ultrasonic tomographic image and the X-ray fluoroscopic image as the second medical image, or may generate three or more types of images.
 The server 1 learns training data including first medical images and device information for training together with second medical images, and generates the generative model 50. The training data are, for example, data of patients who have already undergone endovascular treatment, including pre-treatment ultrasonic tomographic and X-ray fluoroscopic images (first medical images), device information on the therapeutic devices used to treat those patients, and post-treatment ultrasonic tomographic and X-ray fluoroscopic images. The server 1 performs machine learning using these data as training data and generates the generative model 50 in advance.
 The training data are not limited to actual patient data and may be virtual data augmented by a data generation means such as a GAN (Generative Adversarial Network).
 For example, the server 1 acquires the first medical image and device information from the diagnostic imaging system 2 during the patient's treatment, generates the post-treatment second medical image, and displays it on the display device 23. Specifically, during ultrasonic examination using the catheter 211, the server 1 sequentially acquires from the intravascular image diagnostic apparatus 21 a plurality of ultrasonic tomographic images (frame images) captured continuously along the longitudinal direction of the blood vessel. The server 1 also acquires from the fluoroscopic imaging apparatus 22 the X-ray fluoroscopic images captured simultaneously with the ultrasonic examination. The server 1 sequentially inputs the series of ultrasonic tomographic images, together with the X-ray fluoroscopic image at the time each tomographic image was acquired and the device information, into the generative model 50, and sequentially generates ultrasonic tomographic and X-ray fluoroscopic images predicting the post-treatment state.
 The example of FIG. 3 illustrates generating, as the second medical images after treatment, an ultrasonic tomographic image and an X-ray fluoroscopic image of the blood vessel after a stent has been placed and expanded by a balloon. In FIG. 3, the portion corresponding to the stent is shown by hatching. The server 1 predicts the post-treatment state of the blood vessel using the generative model 50 and presents it as the second medical image.
 In this embodiment the second medical image is generated at the time of treatment and output to the diagnostic imaging system 2, but a recorded first medical image may of course be input into the generative model 50 afterwards to generate the second medical image.
 FIG. 4 is an explanatory diagram of the generative model 50. In this embodiment a GAN is used as the generative model 50. A GAN comprises a generator that produces output data from input data and a discriminator that judges the authenticity of the data produced by the generator; the network is built by training the generator and the discriminator adversarially.
 The generator of a GAN accepts random noise (latent variables) as input and produces output data. The discriminator learns to judge the authenticity of the data given by the generator, using true data given for training and data given by the generator. In a GAN the network is built so that the generator's loss function is ultimately minimized while the discriminator's loss function is maximized.
 The generator of the generative model 50 in this embodiment includes encoders that convert the input data into latent variables and decoders that generate output data from the latent variables, and generates the second medical image from the first medical image and device information. As shown in FIG. 4, the generator includes three encoders that accept the three types of input data: the pre-treatment ultrasonic tomographic image, the X-ray fluoroscopic image, and the device information. The generator also includes two decoders that generate the post-treatment ultrasonic tomographic image and X-ray fluoroscopic image. The generative model 50 further includes two discriminators that judge the authenticity of the data output by each decoder. The generator extracts feature values of the input data in each encoder, inputs the latent variable combining those feature values into each decoder, and generates an ultrasonic tomographic image and an X-ray fluoroscopic image predicting the post-treatment state. The two discriminators judge the authenticity of the ultrasonic tomographic image and the X-ray fluoroscopic image, respectively.
 The server 1 trains on the first medical images and device information given for training together with the second medical images, and generates the generative model 50. For example, the server 1 first fixes the generator's parameters (weights, etc.), inputs the training first medical image and device information into the generator, and generates a second medical image. The server 1 then gives the discriminator the generated second medical image as fake data and the training second medical image as true data, and optimizes the discriminator's parameters. Next, the server 1 fixes the discriminator's parameters at their optimum values and trains the generator, optimizing the generator's parameters so that when a generated second medical image is input to the discriminator, the probability of it being judged genuine approaches 50%. The server 1 thereby generates the generative model 50. When actually generating the second medical image from the first medical image, only the generator is used.
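 The alternating optimization just described (fix the generator and update the discriminator, then fix the discriminator and update the generator) can be sketched schematically as follows; the step functions stand in for the actual gradient updates and are illustrative assumptions, not the source's implementation.

```python
# Schematic of alternating GAN training over (first image, device info,
# real second image) triples. The step callables are placeholders.
def train_gan(generator_step, discriminator_step, batches, epochs=1):
    """Alternate discriminator and generator updates over the data."""
    history = []
    for _ in range(epochs):
        for first_image, device_info, real_second_image in batches:
            # 1) generator fixed: train discriminator on real vs. generated
            d_loss = discriminator_step(first_image, device_info,
                                        real_second_image)
            # 2) discriminator fixed: train generator so its output is
            #    judged genuine with probability near 50%
            g_loss = generator_step(first_image, device_info)
            history.append((d_loss, g_loss))
    return history
```

In a real implementation each step would compute losses and back-propagate through the respective sub-network while the other sub-network's parameters are frozen.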
 The generative model 50 is not limited to a GAN and may be a model based on a neural network such as a VAE (Variational Autoencoder) or a CNN (for example, U-Net), or on another learning algorithm.
 The network structure of the generative model 50 shown in FIG. 4 is an example, and this embodiment is not limited to it. For example, the ultrasonic tomographic image and the X-ray fluoroscopic image may be combined and treated as a single image so that one encoder-decoder pair can process both images simultaneously. Alternatively, instead of preparing an encoder for extracting feature values from the device information, the device information (text data) may be encoded by a means such as one-hot encoding and mapped (concatenated) directly into the latent space. Various modifications of the network structure of the generative model 50 are thus conceivable.
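 The one-hot encoding variant mentioned above can be sketched as follows; the category list, function name, and field layout are illustrative assumptions.

```python
import numpy as np

# Hypothetical stent-type vocabulary for one-hot encoding (assumption).
STENT_TYPES = ["bare-metal", "drug-eluting", "bioresorbable"]

def encode_device_info(stent_type: str, numeric_fields: list) -> np.ndarray:
    """One-hot encode a categorical field and concatenate numeric fields,
    yielding a vector that could be mapped directly into the latent space."""
    one_hot = np.zeros(len(STENT_TYPES))
    one_hot[STENT_TYPES.index(stent_type)] = 1.0
    return np.concatenate([one_hot, np.asarray(numeric_fields, dtype=float)])
```

The resulting vector would be concatenated with the image features extracted by the encoders, replacing the dedicated device-information encoder.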
 As the second medical image for training, the server 1 preferably uses, as illustrated in FIG. 3, a second medical image in which an object of interest to the user, such as a stent, is displayed in a manner different from the other image regions. For example, the server 1 uses an image in which the image region corresponding to an object such as a stent has been labeled (processed) by color coding, edge enhancement, or the like. This enables the server 1 to generate a second medical image in which the user can identify the object of interest.
 The labeled object is not limited to a therapeutic device such as a stent, and may be another object such as a lesion in the blood vessel (plaque or the like) or the lumen boundary.
 Although stent information was given above as input data other than the medical images, other information related to the endovascular treatment may also be input to the generative model 50. For example, the server 1 inputs lesion information about the lesion in the blood vessel into the generative model 50. The lesion information includes, for example, the type of the lesion (plaque, calcified tissue, vascular stenosis, etc.), its position, and its properties (hardness of the lesion, etc.). For example, the server 1 inputs the lesion information, together with the stent information, into the corresponding encoder and converts it into latent variables. This allows the generative model 50 to generate the second medical image while also taking the nature and state of the lesion into account.
 FIG. 5 is an explanatory diagram showing an example of a display screen for medical images. FIG. 5 illustrates an example of a screen displayed by the display device 23, on which the first medical image and the second medical image are displayed. The processing flow executed during endovascular treatment will be described with reference to FIG. 5.
 For example, the display device 23 displays the current ultrasound tomographic image and X-ray fluoroscopic image of the blood vessel, corresponding to the first medical image, as a current tomographic image 501 and a current fluoroscopic image 502. To accept designation input of the device information, the display device 23 also displays designation fields 503, 503, 503... in the menu bar on the left side of the screen. In addition, the display device 23 displays in the menu bar a status field 504 for accepting designation input of the current treatment stage.
 The display device 23 accepts designation input of the device information on this screen. Specifically, by accepting an operation input for drawing a rectangular frame on the current fluoroscopic image 502, the display device 23 accepts designation input of the site in the blood vessel to be treated (the position of the lesion). The display device 23 further accepts, through the designation fields 503, designation input such as the length and diameter of the stent and the expansion diameter of the balloon.
 In the example of FIG. 5, the device information for each item is input individually through the plurality of designation fields 503; however, it may also be possible to designate all items at once, for example by inputting the product name of the stent or balloon.
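The product-name shortcut just described amounts to a lookup from one name to the full set of designation-field values. A minimal sketch follows; the catalogue contents, the `"ExampleStent 3.0x18"` name, and the field keys are entirely hypothetical and do not refer to any real product.

```python
# Hypothetical product catalogue; real names and dimensions would differ.
PRODUCT_CATALOGUE = {
    "ExampleStent 3.0x18": {
        "stent_length_mm": 18.0,
        "stent_diameter_mm": 3.0,
        "balloon_expansion_mm": 3.25,
    },
}

def device_info_from_product(product_name):
    """Fill every designation field at once from a single product name."""
    try:
        return dict(PRODUCT_CATALOGUE[product_name])  # copy so callers can edit
    except KeyError:
        raise ValueError(f"unknown product: {product_name}")
```

Individual fields could still be overridden afterwards, preserving the per-field input of FIG. 5.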
 The display device 23 also accepts, via the status field 504, designation input of the current stage of the endovascular treatment. The server 1 generates a second medical image corresponding to the treatment stage designated in the status field 504 and outputs it to the display device 23.
 FIG. 6 is an explanatory diagram of the treatment stages of endovascular treatment. FIG. 6 conceptually illustrates a typical treatment flow when PCI is performed. In general, PCI is carried out in the following steps.
 First, a guide wire for guiding the catheter 211 is inserted into the patient's blood vessel. Next, the catheter 211 is inserted to acquire an ultrasound tomographic image, and the state of the lesion is checked together with the X-ray fluoroscopic image. Thereafter, so that the stent can be placed, the blood vessel is dilated with a balloon before placement as necessary. Balloon dilation before stent placement may be omitted.
 Next, the stent is placed in the blood vessel and the lesion is dilated with a balloon. After the stent is placed, an ultrasound image is acquired again, and the state of the dilated lesion is checked together with the X-ray fluoroscopic image. Thereafter, if the stent is insufficiently expanded, the blood vessel is dilated with a balloon again. Balloon dilation after stent placement may be omitted. Finally, the state of the lesion is checked in each image, completing the series of procedures.
 In the following description, for brevity, balloon dilation before stent placement is referred to as "pre-dilation," and balloon dilation after stent placement is referred to as "post-dilation."
 In the present embodiment, in line with the pre-dilation, stent expansion, and post-dilation described above, the server 1 uses the generative model 50 to generate, from the first medical image taken before each treatment stage, the second medical image after that stage.
 For example, the server 1 configures the generative model 50 as a model that can accept a class label for the input data, like a Conditional GAN. When inputting the first medical image into the generative model 50, the server 1 inputs, as label data indicating the class of the first medical image, a value indicating whether the image was taken before pre-dilation, before stent expansion, or before post-dilation. The server 1 then generates the second medical image after the corresponding procedure according to the input label data. This allows the user to check, stage by stage, second medical images predicting the state after each upcoming procedure.
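The class label described above can be encoded in the usual Conditional GAN manner, as a one-hot vector over the three treatment stages that is supplied alongside the other inputs. A minimal sketch, assuming the stage names used in this description:

```python
TREATMENT_STAGES = ["pre_dilation", "stent_expansion", "post_dilation"]

def stage_label(stage):
    """One-hot class label identifying which procedure the input image precedes."""
    vec = [0.0] * len(TREATMENT_STAGES)
    vec[TREATMENT_STAGES.index(stage)] = 1.0
    return vec

def conditional_input(first_image, device_info, stage):
    # The label conditions the generator, as in a Conditional GAN;
    # the dict layout here is illustrative only.
    return {"image": first_image, "device": device_info,
            "label": stage_label(stage)}
```

Switching the `stage` argument is what selects which post-procedure image the generator produces.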
 FIG. 5 illustrates an example of the display screen at the time of stent expansion. The server 1 generates the second medical image after stent expansion according to the treatment stage designated in the status field 504. That is, the server 1 inputs the ultrasound tomographic image generated by the intravascular diagnostic imaging apparatus 21 before stent expansion, the X-ray fluoroscopic image captured by the fluoroscopic imaging apparatus 22, and the device information designated by the user into the generative model 50, and generates an ultrasound tomographic image and an X-ray fluoroscopic image predicting the state after stent expansion. In this case, for example, the server 1 generates, as the second medical image, an ultrasound tomographic image and an X-ray fluoroscopic image in which the image region corresponding to the stent is shown in a display mode different from the other regions.
 The server 1 outputs the generated second medical image and causes the display device 23 to display it. The display device 23 displays the ultrasound tomographic image and X-ray fluoroscopic image corresponding to the second medical image as a predicted tomographic image 505 and a predicted fluoroscopic image 506.
 Although FIG. 5 illustrates a two-dimensional ultrasound tomographic image and X-ray fluoroscopic image as the second medical image, the server 1 may reconstruct the two-dimensional ultrasound tomographic images and X-ray fluoroscopic image to generate a three-dimensional blood vessel image and cause the display device 23 to display it. As described with reference to FIG. 3, the server 1 sequentially inputs a plurality of ultrasound tomographic images (transverse tomographic images) sequentially acquired from the intravascular diagnostic imaging apparatus 21, together with the X-ray fluoroscopic image, into the generative model 50, and generates, as the second medical image, a plurality of ultrasound tomographic images predicting the post-treatment state together with an X-ray fluoroscopic image. The server 1 aligns each ultrasound tomographic image with the X-ray fluoroscopic image according to the position of the catheter 211 detected during the ultrasound examination by means such as a radiopaque marker, and converts them into a three-dimensional image by a known method. The post-treatment state inside the blood vessel can thereby be represented in three dimensions and presented to the user.
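The core of the three-dimensional reconstruction above is ordering the transverse cross-sections by the catheter position detected via the radiopaque marker and stacking them into a volume. A minimal sketch under that assumption (the actual alignment with the fluoroscopic image and the rendering method are left abstract):

```python
def reconstruct_volume(frames):
    """frames: list of (catheter_position_mm, cross_section) pairs, where
    catheter_position_mm is the position detected via the radiopaque marker.
    Returns the cross-sections stacked in order along the vessel's long axis."""
    ordered = sorted(frames, key=lambda f: f[0])
    return [cross_section for _, cross_section in ordered]
```

A real implementation would interpolate between slices and register them against the fluoroscopic image; this sketch only shows the position-based ordering step.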
 In the example of FIG. 5, designation input of one pattern of device information is accepted and one corresponding second medical image is displayed; however, designation input of a plurality of patterns of device information may be accepted, and a plurality of second medical images corresponding to those patterns may be displayed. For example, the server 1 accepts designation input of a plurality of stents as candidates for use in the treatment, separately generates a second medical image for each stent, and causes the display device 23 to display them as previews. This improves convenience for the user.
 FIG. 7 is a flowchart showing the procedure of the process for generating the generative model 50. The processing performed when generating the generative model 50 by machine learning will be described with reference to FIG. 7.
 The control unit 11 of the server 1 acquires training data on patients who have already undergone treatment, the training data including a first medical image, which is a pre-treatment medical image, device information on the therapeutic device used in the treatment, and a second medical image, which is a post-treatment medical image (step S11). Specifically, for a patient who has undergone endovascular treatment, the control unit 11 acquires the pre-treatment ultrasound tomographic image and X-ray fluoroscopic image as the first medical image. The control unit 11 also acquires information on the stent, balloon, and the like used in the endovascular treatment as the device information. The control unit 11 further acquires the post-treatment ultrasound tomographic image and X-ray fluoroscopic image as the second medical image.
 Based on the training data, the control unit 11 generates the generative model 50, which generates the second medical image when the first medical image and the device information are input (step S12). Specifically, the control unit 11 generates a GAN that takes the pre-treatment ultrasound tomographic image, X-ray fluoroscopic image, and device information as input and outputs the post-treatment ultrasound tomographic image and X-ray fluoroscopic image. The control unit 11 then ends the series of processes.
 FIG. 8 is a flowchart showing the procedure of the process for generating the second medical image. The processing performed when generating the second medical image will be described with reference to FIG. 8.
 The control unit 11 of the server 1 acquires, from the diagnostic imaging system 2, device information on the therapeutic device used for treating the patient's luminal organ (step S31). The control unit 11 also acquires the first medical image from the diagnostic imaging system 2 (step S32). Specifically, as described above, the control unit 11 acquires the ultrasound tomographic image obtained by examining the patient to be treated and the X-ray fluoroscopic image captured at the same time as the ultrasound examination.
 In response to an operation input from the user, the control unit 11 determines whether pre-dilation before stent placement is necessary (step S33). If it determines that pre-dilation is not necessary (S33: NO), the control unit 11 moves the process to step S36. If it determines that pre-dilation is necessary (S33: YES), the control unit 11 inputs the first medical image acquired in step S32 into the generative model 50, generates the second medical image after pre-dilation, and outputs it to the display device 23 (step S34).
 In response to an operation input from the user, the control unit 11 determines whether the pre-dilation has been completed (step S35). If it determines that the pre-dilation has not been completed (S35: NO), the control unit 11 waits.
 If it determines that the pre-dilation has been completed (S35: YES), the control unit 11 generates the second medical image after stent expansion and outputs it to the display device 23 (step S36).
 In response to an operation input from the user, the control unit 11 determines whether post-dilation is necessary (step S37). If it determines that post-dilation is not necessary (S37: NO), the control unit 11 ends the series of processes. If it determines that post-dilation is necessary (S37: YES), the control unit 11 reacquires the first medical image (step S38). The control unit 11 inputs the acquired first medical image into the generative model 50, generates the second medical image after post-dilation, and outputs it to the display device 23 (step S39). The control unit 11 then ends the series of processes.
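The branching of steps S33 through S39 can be sketched as a single control-flow function. The `generate` and `acquire` callables stand in for the generative model 50 and the image acquisition from the diagnostic imaging system 2; their signatures are assumptions made for the sketch.

```python
def treatment_flow(generate, need_pre_dilation, need_post_dilation, acquire):
    """Mirror of the FIG. 8 flowchart: produce a predicted second medical image
    before each procedure the operator indicates is required (steps S33-S39)."""
    shown = []
    image = acquire()                                    # S32: first medical image
    if need_pre_dilation:                                # S33
        shown.append(generate(image, "pre_dilation"))    # S34
    shown.append(generate(image, "stent_expansion"))     # S36
    if need_post_dilation:                               # S37
        image = acquire()                                # S38: re-acquire
        shown.append(generate(image, "post_dilation"))   # S39
    return shown
```

The waiting loop of step S35 is omitted here, since it only blocks until the operator confirms completion.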
 As described above, according to the first embodiment, a second medical image predicting the post-treatment state can be generated from the pre-treatment first medical image and presented to the user.
 Further, according to the first embodiment, a second medical image after each procedure can be presented in accordance with a plurality of treatment stages.
 Further, according to the first embodiment, second medical images after pre-dilation, after stent expansion, and after post-dilation can each be generated and presented to the user in accordance with the endovascular treatment.
 Further, according to the first embodiment, the device information can be suitably designated by accepting designation input of the target site on the X-ray fluoroscopic image.
 Further, according to the first embodiment, an image representing the post-treatment state in three dimensions can also be provided from the plurality of ultrasound tomographic images (transverse tomographic images) sequentially generated using the generative model 50 and the X-ray fluoroscopic image.
 Further, according to the first embodiment, the generation accuracy of the second medical image can be improved by using lesion information, in addition to the device information, as input to the generative model 50.
(Embodiment 2)
 In the present embodiment, a mode in which only ultrasound tomographic images are handled as the first medical image and the second medical image will be described. Content overlapping with the first embodiment is given the same reference numerals, and its description is omitted.
 FIG. 9 is an explanatory diagram of the generative model 50 according to the second embodiment. The generative model 50 according to the present embodiment does not include an encoder and decoder for processing X-ray fluoroscopic images; it has only two encoders that respectively accept input of the pre-treatment ultrasound tomographic image and the device information, a decoder that generates the post-treatment ultrasound tomographic image, and a discriminator that judges the authenticity of the ultrasound tomographic image generated by the decoder. The server 1 generates the generative model 50 shown in FIG. 9 using, as the first and second medical images for training, ultrasound tomographic images taken before and after treatment of patients who have undergone endovascular treatment.
 FIG. 10 is an explanatory diagram showing an example of a display screen for medical images according to the second embodiment. In the present embodiment, as in the first embodiment, the display device 23 displays the current tomographic image 501 corresponding to the first medical image. In the present embodiment, the display device 23 further displays a longitudinal tomographic image 521 reconstructed from a plurality of ultrasound tomographic images (transverse tomographic images).
 The longitudinal tomographic image 521 is a longitudinal cross-sectional image obtained by reconstructing, along the longitudinal direction of the blood vessel, a plurality of transverse tomographic images continuously imaged by the intravascular diagnostic imaging apparatus 21. As the catheter 211 is scanned, the server 1 reconstructs the longitudinal tomographic image 521 from the plurality of transverse tomographic images (frame images) continuously imaged along the longitudinal direction of the blood vessel and causes the display device 23 to display it. In the present embodiment, by accepting a drawing input of a rectangular frame on the longitudinal tomographic image 521, the display device 23 accepts designation input of the site in the blood vessel to be treated. In addition, as in the first embodiment, the display device 23 accepts designation input of the other device information through the designation fields 503.
 The server 1 inputs the ultrasound tomographic image corresponding to the first medical image and the device information into the generative model 50, and generates an ultrasound tomographic image corresponding to the second medical image. The server 1 then displays the generated ultrasound tomographic image as the predicted tomographic image 505. Of course, not only a transverse tomographic image but also a longitudinal tomographic image may be displayed as the predicted tomographic image 505.
 As in the first embodiment, the plurality of ultrasound tomographic images sequentially generated by the generative model 50 may be reconstructed to generate a three-dimensional image, which may then be displayed on the display device 23.
 Since the present embodiment is otherwise the same as the first embodiment, the flowchart and other detailed descriptions are omitted here.
(Embodiment 3)
 In the present embodiment, a mode will be described in which input of correction information for the second medical image generated by the generative model 50 is accepted, and re-training is performed based on the input correction information.
 For example, on the display screen illustrated in FIG. 5, the display device 23 accepts from the user input of correction information for correcting the second medical image generated by the generative model 50. Specifically, the display device 23 accepts correction input for correcting the position, length, diameter (width), and the like of the color-coded stent (therapeutic device). For example, the display device 23 accepts a drawing input for newly drawing the boundary (edge) of the image region corresponding to the stent on the predicted tomographic image 505 or the predicted fluoroscopic image 506. The display device 23 transmits, as correction information, the second medical image on which the user has drawn the image region corresponding to the stent to the server 1, and causes the server 1 to perform re-training.
 In the present embodiment, input of correction information is accepted for the second medical image after stent expansion, but input of correction information may also be accepted for the second medical image after pre-dilation or after post-dilation.
 The server 1 performs re-training based on the correction information acquired from the display device 23, and updates the generative model 50. Specifically, the server 1 gives the generative model 50, as training data for re-training, the first medical image and device information that were input when the second medical image was generated, together with the second medical image on which the image region of the stent has been drawn, and updates the parameters of the generator and the discriminator.
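In effect, each user correction yields a fresh training pair: the original inputs as the training input and the corrected image as the training target. A minimal sketch of collecting such pairs follows; the `CorrectionBuffer` class and its dict layout are illustrative assumptions, not the server's actual data structure.

```python
class CorrectionBuffer:
    """Collects user corrections as (input, target) pairs for re-training."""
    def __init__(self):
        self.samples = []

    def add(self, first_image, device_info, corrected_second_image):
        # The inputs that produced the prediction become the training input;
        # the user-corrected image becomes the training target.
        self.samples.append({"input": (first_image, device_info),
                             "target": corrected_second_image})

    def as_training_data(self):
        return list(self.samples)
```

The pairs returned by `as_training_data` would then be fed back into the GAN training loop to update the generator and discriminator parameters.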
 FIG. 11 is a flowchart showing the procedure of the process for generating the second medical image according to the third embodiment. After generating the second medical image after stent placement and outputting it to the display device 23 (step S36), the server 1 executes the following processing.
 The control unit 11 of the server 1 accepts input of correction information for the output second medical image (step S201). For example, the control unit 11 accepts correction input for correcting the position, length, diameter, and the like of the stent on the second medical image. The control unit 11 then moves the process to step S37.
 After a NO determination in step S37, or after executing the process of step S39, the control unit 11 updates the generative model 50 based on the correction information input in step S201 (step S202). Specifically, the control unit 11 performs re-training using, as training input data, the first medical image and device information that were input when the second medical image was generated in step S36, and, as training output data, the corrected version of the second medical image generated in step S36, and updates the parameters of the generator and the discriminator. The control unit 11 then ends the series of processes.
 As described above, according to the third embodiment, the generation accuracy of the second medical image can be improved through operation of the present system.
 The embodiments disclosed herein should be considered in all respects as illustrative and not restrictive. The scope of the present invention is indicated not by the foregoing description but by the claims, and is intended to include all modifications within the meaning and scope equivalent to the claims.
 1   Server (information processing device)
 1a  Portable storage medium
 1b  Semiconductor memory
 11  Control unit
 12  Main storage unit
 13  Communication unit
 14  Auxiliary storage unit
 N   Network
 P   Program
 2   Diagnostic imaging system
 21  Intravascular diagnostic imaging apparatus
 211 Catheter
 22  Fluoroscopic imaging apparatus
 221 X-ray source
 222 X-ray sensor
 23  Display device
 501 Current tomographic image
 502 Current fluoroscopic image
 503 Designation field
 504 Status field
 505 Predicted tomographic image
 506 Predicted fluoroscopic image
 521 Longitudinal tomographic image

Claims (11)

  1.  A program causing a computer to execute processing of:
     acquiring a first medical image imaging a luminal organ of a patient before treatment, and device information related to a therapeutic device used for treating the luminal organ; and
     inputting the acquired first medical image and device information into a model trained to generate a post-treatment second medical image when the first medical image and device information are input, thereby generating the second medical image.
  2.  The program according to claim 1, wherein
     the first medical image before the procedure of each of a plurality of treatment stages is acquired, and
     the acquired first medical image is input into the model to generate the second medical image after the procedure of each treatment stage.
  3.  The program according to claim 2, wherein
     the treatment is endovascular treatment using a catheter, and
     the first medical image before the procedure of at least two of the treatment stages of balloon dilation before stent placement, expansion of the stent, and balloon dilation after stent placement is acquired to generate the second medical image.
  4.  The program according to any one of claims 1 to 3, wherein
     the first medical image includes a fluoroscopic image of the patient's body,
     the fluoroscopic image is displayed on a display unit,
     designation input designating, on the fluoroscopic image, a site in the luminal organ to be treated and the therapeutic device used for treating the site is accepted, and
     the device information indicating the designated site and therapeutic device is input into the model.
  5.  The program according to any one of claims 1 to 3, wherein
     the first medical image includes a transverse tomographic image of the interior of the luminal organ,
     a longitudinal tomographic image is generated from a plurality of transverse tomographic images continuously imaged along the longitudinal direction of the luminal organ and is displayed on a display unit,
     designation input designating, on the longitudinal tomographic image, a site in the luminal organ to be treated and the therapeutic device used for treating the site is accepted, and
     the device information indicating the designated site and therapeutic device is input into the model.
  6.  The plurality of transverse tomographic images are input to the model to generate a plurality of post-treatment transverse tomographic images, and
     a three-dimensional second medical image is generated from the generated post-treatment transverse tomographic images.
     The program according to claim 5.
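The processing of claim 6 can be sketched as running the model slice by slice and stacking the predicted post-treatment slices into a volume. `predict_slice` is a hypothetical placeholder for the trained model; the identity transform is used only so the sketch runs.

```python
import numpy as np

# Sketch of claim 6: apply the model to each transverse tomographic image,
# then stack the post-treatment slices along the pullback axis to form a
# three-dimensional second medical image.
def predict_slice(slice_2d: np.ndarray) -> np.ndarray:
    return slice_2d  # identity placeholder for the trained model

def build_3d_image(transverse_slices: list) -> np.ndarray:
    post = [predict_slice(s) for s in transverse_slices]
    return np.stack(post, axis=0)  # shape: (num_slices, H, W)

slices = [np.zeros((16, 16)) for _ in range(30)]  # e.g., 30 successive frames
volume = build_3d_image(slices)
print(volume.shape)  # (30, 16, 16)
```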
  7.  An input of lesion information indicating the properties of a lesion in the luminal organ is received, and
     the lesion information is input to the model to generate the second medical image.
     The program according to any one of claims 1 to 6.
  8.  An input of correction information for the generated second medical image is received, and
     the model is updated based on the correction information.
     The program according to any one of claims 1 to 7.
  9.  A first medical image imaging a luminal organ of a patient before treatment and device information related to a therapeutic device used for treating the luminal organ are acquired, and
     the acquired first medical image and device information are input to a model trained to generate a second medical image after treatment when the first medical image and device information are input, thereby generating the second medical image.
     An information processing method in which a computer executes this processing.
  10.  An information processing device comprising:
      an acquisition unit that acquires a first medical image imaging a luminal organ of a patient before treatment and device information related to a therapeutic device used for treating the luminal organ; and
      a generation unit that inputs the acquired first medical image and device information to a model trained to generate a second medical image after treatment when the first medical image and device information are input, thereby generating the second medical image.
  11.  Training data including a first medical image and a second medical image imaging a luminal organ of a patient before and after treatment, respectively, and device information related to the therapeutic device used for treating the luminal organ is acquired, and
      a trained model that generates the second medical image when the first medical image and device information are input is generated based on the training data.
      A model generation method in which a computer executes this processing.
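The model generation method of claim 11 can be sketched in miniature: assemble (pre-treatment image, device information) → post-treatment image pairs and fit a model to them. A linear least-squares map stands in for the image-to-image network the application would actually train; the data, feature sizes, and fitting method are all illustrative assumptions.

```python
import numpy as np

# Miniature sketch of claim 11: build training pairs of
# (first medical image + device information) -> second medical image,
# then fit a "trained model". Everything here is synthetic and illustrative.
rng = np.random.default_rng(0)

def make_example():
    pre = rng.random(64)       # flattened pre-treatment image
    dev = rng.random(4)        # encoded device information
    post = 0.5 * pre + 0.1     # synthetic post-treatment target
    return np.concatenate([pre, dev]), post

X, Y = map(np.array, zip(*[make_example() for _ in range(200)]))
W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # stand-in for model training

def generate_second_image(pre, dev):
    """Apply the fitted map to a new pre-treatment image and device info."""
    return np.concatenate([pre, dev]) @ W

pred = generate_second_image(rng.random(64), rng.random(4))
print(pred.shape)  # (64,)
```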
PCT/JP2021/009301 2020-03-27 2021-03-09 Program, information processing method, information processing device, and model generation method WO2021193021A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022509536A JPWO2021193021A1 (en) 2020-03-27 2021-03-09

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-058996 2020-03-27
JP2020058996 2020-03-27

Publications (1)

Publication Number Publication Date
WO2021193021A1 true WO2021193021A1 (en) 2021-09-30

Family

ID=77891479

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/009301 WO2021193021A1 (en) 2020-03-27 2021-03-09 Program, information processing method, information processing device, and model generation method

Country Status (2)

Country Link
JP (1) JPWO2021193021A1 (en)
WO (1) WO2021193021A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018525074A (en) * 2015-07-08 2018-09-06 アオーティカ コーポレイション Apparatus and method for anatomical mapping for prosthetic implants
WO2019002526A1 (en) * 2017-06-29 2019-01-03 Koninklijke Philips N.V. Device and method for predicting an unfolded state of a foldable implant in biological tissue
JP2019510547A (en) * 2016-02-16 2019-04-18 メンティス アー・ベーMentice AB System and method for routing a catheter-like conduit line in a vessel
JP2020503909A (en) * 2016-09-28 2020-02-06 ライトラボ・イメージング・インコーポレーテッド Method of using a stent planning system and vascular representation


Also Published As

Publication number Publication date
JPWO2021193021A1 (en) 2021-09-30

Similar Documents

Publication Publication Date Title
US11847781B2 (en) Systems and methods for medical acquisition processing and machine learning for anatomical assessment
CN112368781A (en) Method and system for assessing vascular occlusion based on machine learning
US20220198784A1 (en) System and methods for augmenting x-ray images for training of deep neural networks
CN114126491B (en) Assessment of coronary artery calcification in angiographic images
WO2021193019A1 (en) Program, information processing method, information processing device, and model generation method
US20230245307A1 (en) Information processing device, information processing method, and program
WO2022071181A1 (en) Information processing device, information processing method, program, and model generation method
CN112309574A (en) Method and apparatus for deformation simulation
US20240013385A1 (en) Medical system, method for processing medical image, and medical image processing apparatus
WO2021193015A1 (en) Program, information processing method, information processing device, and model generation method
WO2021193021A1 (en) Program, information processing method, information processing device, and model generation method
WO2021193024A1 (en) Program, information processing method, information processing device and model generating method
WO2021193026A1 (en) Program, information processing method, information processing device, and model generation method
WO2021193018A1 (en) Program, information processing method, information processing device, and model generation method
Breininger Machine learning and deformation modeling for workflow-compliant image fusion during endovascular aortic repair
CN114648536A (en) Method and device for extracting vascular wall
JP2022142607A (en) Program, image processing method, image processing device, and model generation method
WO2021193022A1 (en) Information processing device, information processing method, and program
WO2021199967A1 (en) Program, information processing method, learning model generation method, learning model relearning method, and information processing system
WO2021199966A1 (en) Program, information processing method, training model generation method, retraining method for training model, and information processing system
WO2021193020A1 (en) Program, information processing method, information processing device, and model generating method
US20240013386A1 (en) Medical system, method for processing medical image, and medical image processing apparatus
WO2023100838A1 (en) Computer program, information processing device, information processing method, and training model generation method
WO2021199962A1 (en) Program, information processing method, and information processing device
US20240008849A1 (en) Medical system, method for processing medical image, and medical image processing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21776414

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022509536

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21776414

Country of ref document: EP

Kind code of ref document: A1