WO2021193021A1 - Programme, procédé de traitement d'informations, dispositif de traitement d'informations et procédé de génération de modèle - Google Patents

Programme, procédé de traitement d'informations, dispositif de traitement d'informations et procédé de génération de modèle Download PDF

Info

Publication number
WO2021193021A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
treatment
image
input
device information
Prior art date
Application number
PCT/JP2021/009301
Other languages
English (en)
Japanese (ja)
Inventor
陽 井口
悠介 関
雄紀 坂口
Original Assignee
テルモ株式会社 (Terumo Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by テルモ株式会社 (Terumo Corporation)
Priority to JP2022509536A (published as JPWO2021193021A1)
Publication of WO2021193021A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/12: Arrangements for detecting or locating foreign bodies
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • the present invention relates to a program, an information processing method, an information processing device, and a model generation method.
  • Treatment is performed based on medical images that visualize the inside of the human body, such as ultrasonic images, optical coherence tomography (OCT) images, and X-ray images. Various techniques for processing medical images have been proposed so that observers can suitably view them.
  • Patent Document 1 discloses an ultrasonic diagnostic apparatus that displays an ultrasonic image of a lesion of a subject, generates an image in which an ultrasonic image acquired before treatment and one acquired after treatment are superimposed, and displays treated and untreated areas of the lesion in different colors.
  • However, Patent Document 1 merely superimposes the image acquired before the treatment on the image acquired after the treatment; it does not generate (predict) an image of the state after treatment.
  • One aspect is to provide a program or the like capable of obtaining a medical image that predicts the post-treatment state from a pre-treatment medical image.
  • In one aspect, a program acquires a first medical image obtained by imaging a patient's luminal organ before treatment and device information related to a therapeutic device used for the treatment of the luminal organ, and inputs the first medical image and the device information into a trained model to generate a second medical image.
  • FIG. 1 is an explanatory diagram showing a configuration example of the treatment support system. FIG. 2 is a block diagram showing a configuration example of the server. FIG. 3 is an explanatory diagram showing an outline of Embodiment 1.
  • FIG. 4 is an explanatory diagram of the generative model. FIG. 5 is an explanatory diagram showing an example of a display screen of a medical image. FIG. 6 is an explanatory diagram of the treatment stages of endovascular treatment.
  • FIG. 7 is a flowchart showing the procedure of the generation process of the generative model.
  • FIG. 8 is a flowchart showing the procedure of the second medical image generation process. FIG. 9 is an explanatory diagram of the generative model according to Embodiment 2.
  • A further flowchart shows the procedure of the second medical image generation process according to Embodiment 3.
  • FIG. 1 is an explanatory diagram showing a configuration example of a treatment support system.
  • Hereinafter, a medical image obtained by imaging the patient before treatment is referred to as a "first medical image", and a medical image that predicts the state of the blood vessel after treatment is referred to as a "second medical image".
  • the treatment support system includes an information processing device 1 and a diagnostic imaging system 2.
  • the information processing device 1 and the diagnostic imaging system 2 are communication-connected to a network N such as a LAN (Local Area Network) or the Internet.
  • the target luminal organ is not limited to a blood vessel, and may be another luminal organ such as a bile duct, pancreatic duct, bronchus, or intestine.
  • the diagnostic imaging system 2 includes an intravascular diagnostic imaging device 21, a fluoroscopic imaging device 22, and a display device 23.
  • the intravascular image diagnostic device 21 is a device for imaging an intravascular tomographic image of a patient, for example, an IVUS (Intravascular Ultrasound) device that performs an ultrasonic examination using a catheter 211.
  • Catheter 211 is a medical device that is inserted into a patient's blood vessel and comprises an imaging core that transmits ultrasonic waves and receives reflected waves from within the blood vessel.
  • the intravascular diagnostic imaging apparatus 21 generates an ultrasonic tomographic image based on the signal of the reflected wave received by the catheter 211 and displays it on the display device 23.
  • in the present embodiment, the intravascular diagnostic imaging apparatus 21 generates an ultrasonic tomographic image, but an optical coherence tomographic image may instead be captured by an optical method such as OCT or OFDI (Optical Frequency Domain Imaging).
  • the fluoroscopic image capturing device 22 is a device for capturing a fluoroscopic image of the inside of the patient, and is, for example, an angiography device that performs angiography examinations.
  • the fluoroscopic image capturing device 22 includes an X-ray source 221 and an X-ray sensor 222, and the X-ray sensor 222 receives the X-rays emitted from the X-ray source 221 to capture an X-ray fluoroscopic image of the patient.
  • in the present embodiment, ultrasonic tomographic, optical coherence tomographic, and angiography images are given as examples of medical images, but the medical image may also be a computed tomography (CT) image, a magnetic resonance imaging (MRI) image, or the like.
  • the information processing device 1 is a device capable of performing various types of information processing and of transmitting and receiving information, such as a server computer or a personal computer.
  • in the present embodiment, the information processing device 1 is a server computer, and for the sake of brevity it will be referred to as server 1 below.
  • the server 1 may be a local server installed in the same facility (hospital or the like) as the diagnostic imaging system 2, or may be a cloud server communicated and connected via the Internet or the like.
  • the server 1 functions as a generator that generates a second medical image from the first medical image (ultrasonic tomographic image and X-ray fluoroscopic image) generated by the diagnostic imaging system 2, and outputs the generated second medical image to the diagnostic imaging system 2.
  • specifically, the server 1 performs machine learning on predetermined training data in advance, and obtains a model that generates a second medical image when the first medical image and device information are input (see FIG. 3 and the like).
  • the device information will be described later.
  • the server 1 acquires the first medical image and device information from the diagnostic imaging system 2 and inputs them to the generation model 50 to generate the second medical image and display it on the display device 23.
  • in the present embodiment, the second medical image is generated on the server 1, which is separate from the diagnostic imaging system 2; however, the generation model 50 generated by the server 1 through machine learning may be installed in the diagnostic imaging system 2 so that the diagnostic imaging system 2 itself can generate the second medical image.
  • FIG. 2 is a block diagram showing a configuration example of the server 1.
  • the server 1 includes a control unit 11, a main storage unit 12, a communication unit 13, and an auxiliary storage unit 14.
  • the control unit 11 has one or more arithmetic processing units such as CPUs (Central Processing Units), MPUs (Micro-Processing Units), or GPUs (Graphics Processing Units), and performs various information processing, control processing, and the like by reading and executing the program P stored in the auxiliary storage unit 14.
  • the main storage unit 12 is a temporary storage area such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory, and temporarily stores data necessary for the control unit 11 to execute arithmetic processing.
  • the communication unit 13 is a communication module for performing processing related to communication, and transmits / receives information to / from the outside.
  • the auxiliary storage unit 14 is a non-volatile storage area such as a large-capacity memory or a hard disk, and stores a program P and other data necessary for the control unit 11 to execute processing. Further, the auxiliary storage unit 14 stores the generation model 50.
  • the generative model 50 is a machine learning model in which training data has been trained as described above, and is a model that generates a second medical image by inputting a first medical image and device information.
  • the generation model 50 is expected to be used as a program module constituting artificial intelligence software.
  • the auxiliary storage unit 14 may be an external storage device connected to the server 1. Further, the server 1 may be a multi-computer composed of a plurality of computers, or may be a virtual machine virtually constructed by software.
  • the server 1 is not limited to the above configuration, and may include, for example, an input unit that accepts operation input, a display unit that displays images, and the like. Further, the server 1 may include a reading unit for reading a portable storage medium 1a such as a CD (Compact Disc), a DVD (Digital Versatile Disc), or a USB (Universal Serial Bus) memory, and may read and execute the program P from the portable storage medium 1a. Alternatively, the server 1 may read the program P from a semiconductor memory 1b.
  • FIG. 3 is an explanatory diagram showing an outline of the first embodiment.
  • FIG. 3 conceptually illustrates how a second medical image is generated from the first medical image and text information using the generative model 50. An outline of the present embodiment will be described with reference to FIG.
  • the generation model 50 is a machine learning model that generates a second medical image predicting the post-treatment state when the first medical image acquired by the diagnostic imaging system 2 before the patient's treatment and the device information about the therapeutic device used for the treatment are input.
  • the first medical image is an ultrasonic tomographic image and an X-ray fluoroscopic image acquired by the intravascular image diagnostic apparatus 21 and the fluoroscopic imaging apparatus 22, respectively, and is an image in which the blood vessel of the patient is inspected before treatment.
  • the device information is information related to a device used for endovascular treatment, for example, information related to a stent, a balloon, etc. used for PCI (Percutaneous Coronary Intervention).
  • for example, the device information includes the length, diameter, type, and placement position of the stent to be placed in the patient's blood vessel, as well as the balloon type, the diameter after expansion by the balloon (hereinafter referred to as the "expansion diameter"), the expansion pressure, and the expansion time.
  • the therapeutic device is not limited to the stent and the balloon, and may be another device such as a rotablator.
  • the second medical image is an image that predicts the state in the blood vessel after the treatment using the therapeutic device, and is an ultrasonic tomographic image and an X-ray fluoroscopic image after the treatment.
  • the server 1 may generate only one of the ultrasonic tomographic image and the X-ray perspective image as the second medical image, or may generate three or more types of images.
  • the server 1 learns the training data including the first medical image and device information for training and the second medical image, and generates the generation model 50.
  • the training data is, for example, data of patients who have undergone endovascular treatment, and includes pre-treatment ultrasonic tomographic and X-ray fluoroscopic images (first medical images), device information on the therapeutic devices used for treating the patient, and post-treatment ultrasonic tomographic and X-ray fluoroscopic images (second medical images).
  • the server 1 performs machine learning using these data as training data, and generates a generation model 50 in advance.
  • the training data is not limited to actual patient data, and may be virtual data augmented by a data generation means such as a GAN (Generative Adversarial Network).
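One training record described above could be represented as follows; this is a minimal sketch, and the field names, units, and values are assumptions, since the specification does not define a data schema.

```python
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class TrainingSample:
    """One training record as described in the text; field names are assumed."""
    pre_ivus: List[List[float]]   # pre-treatment ultrasonic tomographic image
    pre_xray: List[List[float]]   # pre-treatment X-ray fluoroscopic image
    device_info: Dict[str, Any]   # stent/balloon attributes used in treatment
    post_ivus: List[List[float]]  # post-treatment images act as ground truth
    post_xray: List[List[float]]

sample = TrainingSample(
    pre_ivus=[[0.0] * 4] * 4, pre_xray=[[0.0] * 4] * 4,
    device_info={"stent_length_mm": 18, "stent_diameter_mm": 3.0,
                 "balloon_pressure_atm": 12},
    post_ivus=[[0.0] * 4] * 4, post_xray=[[0.0] * 4] * 4)
print(type(sample).__name__)  # TrainingSample
```

In practice each record would hold full pullback sequences rather than single frames, but the pairing of pre-treatment inputs with post-treatment ground truth is the essential structure.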
  • the server 1 acquires the first medical image and device information from the diagnostic imaging system 2 at the time of treating the patient, generates the post-treatment second medical image, and displays it on the display device 23. Specifically, during the ultrasonic examination using the catheter 211, the server 1 sequentially acquires from the intravascular diagnostic imaging apparatus 21 a plurality of ultrasonic tomographic images (frame images) captured continuously along the longitudinal direction of the blood vessel. Further, the server 1 acquires from the fluoroscopic image capturing device 22 an X-ray fluoroscopic image taken at the same time as the ultrasonic examination.
  • the server 1 sequentially inputs the plurality of continuous ultrasonic tomographic images, together with the X-ray fluoroscopic image at the time each tomographic image was acquired and the device information, into the generation model 50, and sequentially generates ultrasonic tomographic images and X-ray fluoroscopic images that predict the state after treatment.
  • an ultrasonic tomographic image and an X-ray fluoroscopic image of a blood vessel after a stent is placed and expanded by a balloon are generated.
  • the portion corresponding to the stent is shown by hatching.
  • the server 1 predicts the state of the blood vessel after the treatment using the generative model 50 and presents it as a second medical image.
  • in the present embodiment, the second medical image is generated at the time of treatment and output to the diagnostic imaging system 2; however, a recorded first medical image may instead be input to the generation model 50 after the fact to generate the second medical image.
  • FIG. 4 is an explanatory diagram of the generative model 50.
  • GAN is used as the generative model 50.
  • a GAN includes a generator that generates output data from input data and a discriminator that discriminates the authenticity of the data generated by the generator; the generator and the discriminator learn in competition with each other to build the network.
  • the generator of a GAN accepts an input of random noise (a latent variable) and generates output data.
  • the discriminator learns to discriminate the authenticity of data by using true data given for learning and the data generated by the generator.
  • the network is constructed so that the loss function of the generator is finally minimized and the loss function of the discriminator is maximized.
  • the generator of the generation model 50 includes encoders that convert input data into latent variables and decoders that generate output data from the latent variables, and generates the second medical image from the first medical image and the device information.
  • the generator of the generative model 50 includes three encoders that accept inputs of three types of input data: an ultrasonic tomographic image before treatment, an X-ray fluoroscopic image, and device information.
  • the generator also comprises two decoders that generate a post-treatment ultrasound tomographic image and a fluoroscopic image.
  • the generative model 50 includes two classifiers for discriminating the authenticity of the data output from each decoder.
  • the generator extracts the features of the input data in each encoder, inputs a latent variable combining the features of each input to each decoder, and generates an ultrasonic tomographic image and an X-ray fluoroscopic image that predict the state after treatment.
  • the two classifiers discriminate between the authenticity of the ultrasonic tomographic image and the fluoroscopic image, respectively.
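The three-encoder, two-decoder generator of FIG. 4 can be sketched as follows; this is a toy illustration, and the 32x32 image size, the 8-element device vector, the latent width, and the tanh projections are all assumptions, not details from the specification.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT = 64  # assumed latent width; the patent gives no network sizes

def encode(x, w):
    """Toy encoder: flatten the input and project it to a feature vector."""
    return np.tanh(x.reshape(-1) @ w)

def decode(z, w, shape):
    """Toy decoder: project the combined latent vector back to an image."""
    return (z @ w).reshape(shape)

# Stand-ins for the three inputs: pre-treatment IVUS frame, X-ray frame,
# and a numeric device-information vector (e.g. stent length, diameter).
ivus = rng.standard_normal((32, 32))
xray = rng.standard_normal((32, 32))
device = rng.standard_normal(8)

# Three encoders, one per input, as in FIG. 4.
w_ivus, w_xray, w_dev = (rng.standard_normal((n, LATENT)) * 0.01
                         for n in (32 * 32, 32 * 32, 8))
z = np.concatenate([encode(ivus, w_ivus), encode(xray, w_xray),
                    encode(device, w_dev)])

# Two decoders: predicted post-treatment IVUS and X-ray images.
w_out1, w_out2 = (rng.standard_normal((3 * LATENT, 32 * 32)) * 0.01
                  for _ in range(2))
pred_ivus = decode(z, w_out1, (32, 32))
pred_xray = decode(z, w_out2, (32, 32))
print(pred_ivus.shape, pred_xray.shape)  # (32, 32) (32, 32)
```

A real implementation would use convolutional encoders and decoders, but the structural point is the same: per-input feature extraction, concatenation into a shared latent variable, and one decoder per output image.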
  • the server 1 performs learning using the first medical image and device information given for training and the second medical image, and generates a generation model 50. For example, the server 1 first fixes the parameters (weights, etc.) of the generator, inputs the first medical image for training and the device information to the generator, and generates the second medical image. Then, the server 1 uses the second medical image generated by the generator as fake data, gives the second medical image for training as true data to the classifier, and optimizes the parameters of the classifier. Next, the server 1 fixes the parameter of the classifier to the optimum value and learns the generator. The server 1 optimizes the parameters of the generator so that the probability of authenticity is close to 50% when the second medical image generated by the generator is input to the classifier. As a result, the server 1 generates the generative model 50. When actually generating the second medical image from the first medical image, only the generator is used.
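The alternating procedure above (fix the generator, fit the discriminator on true versus generated data, then fix the discriminator and push the generator toward a roughly 50% authenticity score) can be sketched with a toy one-parameter-set model; the 16-dimensional Gaussian stand-ins for images, the learning rates, and the step count are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: "images" are 16-dim vectors. True post-treatment samples
# cluster around +1; the untrained generator starts around 0.
real = rng.normal(1.0, 0.1, size=(64, 16))

g_w = np.zeros(16)            # generator parameter: a bias added to its noise
d_w = rng.normal(0, 0.1, 16)  # discriminator weights (logistic classifier)

def gen(n):
    """Generator: noise plus a learned bias (stands in for encoder/decoder)."""
    return rng.normal(0, 0.1, (n, 16)) + g_w

def disc(x):
    """Discriminator: probability that each row of x is a true image."""
    return 1.0 / (1.0 + np.exp(-(x @ d_w)))

for _ in range(200):
    # step 1: generator fixed, update the discriminator on true vs. fake data
    fake = gen(64)
    grad_d = real.T @ (1 - disc(real)) - fake.T @ disc(fake)
    d_w += 0.01 * grad_d / 64
    # step 2: discriminator fixed, nudge the generator so disc(fake) rises
    # toward the 50% authenticity target described in the text
    fake = gen(64)
    g_w += 0.05 * (1 - disc(fake)).mean() * d_w
```

As the text notes, only the trained generator is needed at inference time; the discriminator exists solely to shape the generator during training.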
  • the generative model 50 is not limited to a GAN, and may be a model based on another neural network such as a VAE (Variational Autoencoder) or a CNN (for example, U-Net), or on another learning algorithm.
  • the network structure of the generation model 50 shown in FIG. 4 is an example, and the present embodiment is not limited to this.
  • for example, the ultrasonic tomographic image and the X-ray fluoroscopic image may be combined and treated as a single image so that the two images are processed simultaneously by one encoder and one decoder.
  • alternatively, instead of preparing an encoder for extracting the feature amount of the device information, the device information (text data) may be encoded by a means such as one-hot encoding and directly mapped (concatenated) to the latent space.
  • various changes can be considered in the network structure of the generative model 50.
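The one-hot mapping of device information into the latent space might look like the sketch below; the attribute vocabularies, the numeric fields, and the normalisation constants are invented for illustration and are not from the specification.

```python
import numpy as np

# Hypothetical vocabularies; the actual attribute sets are not given
# in the specification.
STENT_TYPES = ["drug_eluting", "bare_metal", "bioresorbable"]
BALLOON_TYPES = ["semi_compliant", "non_compliant", "drug_coated"]

def one_hot(value, vocab):
    """Return a vector with a 1.0 at the position of `value` in `vocab`."""
    vec = np.zeros(len(vocab))
    vec[vocab.index(value)] = 1.0
    return vec

def encode_device_info(stent_type, balloon_type, length_mm, diameter_mm):
    """One-hot the categorical fields, append crudely normalised numeric
    fields, and return a vector ready to concatenate to the image latent."""
    return np.concatenate([
        one_hot(stent_type, STENT_TYPES),
        one_hot(balloon_type, BALLOON_TYPES),
        [length_mm / 40.0, diameter_mm / 5.0],
    ])

image_latent = np.zeros(64)  # stand-in for the image encoders' output
z = np.concatenate([image_latent, encode_device_info(
    "drug_eluting", "non_compliant", length_mm=18, diameter_mm=3.0)])
print(z.shape)  # (72,)
```

Concatenating a fixed-length vector like this is what "directly mapping the device information to the latent space" amounts to when no dedicated encoder is used.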
  • the server 1 preferably uses, as the second medical image, an image that displays an object of interest to the user, such as a stent, in a display mode different from that of the other image areas, as illustrated in the figure.
  • the server 1 uses an image in which an image area corresponding to an object such as a stent is labeled (processed) by color coding, edge enhancement, or the like. This allows the server 1 to generate a second medical image that allows the user to identify the object of interest.
  • the labeled object is not limited to a therapeutic device such as a stent, and may be another object such as a lesion (plaque or the like) in a blood vessel or a lumen boundary.
  • the server 1 inputs lesion information regarding a lesion portion in a blood vessel into the generation model 50.
  • the lesion information includes, for example, the type of lesion (plaque, calcified tissue, vascular stenosis, etc.), position, and properties (hardness of lesion, etc.).
  • the server 1 also inputs the lesion information into the encoder corresponding to the stent information and converts it into a latent variable.
  • the generative model 50 can generate a second medical image while also referring to the nature and state of the lesion.
  • FIG. 5 is an explanatory diagram showing an example of a display screen of a medical image.
  • FIG. 5 is an example of a screen displayed by the display device 23, and shows an example of a screen displaying a first medical image and a second medical image. The processing flow executed at the time of endovascular treatment will be described with reference to FIG.
  • the display device 23 displays the ultrasonic tomographic image and the X-ray fluoroscopic image of the current blood vessel, corresponding to the first medical image, as the current tomographic image 501 and the current fluoroscopic image 502. Further, in order to accept designated inputs of the device information, the display device 23 displays designation fields 503, 503, 503, ... in the menu bar on the left side of the screen. In addition, the display device 23 displays a status column 504 for accepting a designated input of the current treatment stage on the menu bar.
  • the display device 23 accepts a designated input of device information on the screen. Specifically, the display device 23 accepts the operation input for drawing a rectangular frame on the current fluoroscopic image 502, thereby accepting the designated input of the site (position of the lesion portion) in the blood vessel to be treated. Further, the display device 23 accepts designation inputs such as the length and diameter of the stent and the expansion diameter by the balloon in each designation field 503.
  • in the present embodiment, the device information of each item is individually input in the plurality of designation fields 503; however, a format in which all items are specified at once may also be used.
  • the display device 23 accepts the designated input of the treatment stage of the current endovascular treatment via the status column 504.
  • the server 1 generates a second medical image according to the treatment stage specified in the status column 504 and outputs it to the display device 23.
  • FIG. 6 is an explanatory diagram regarding the treatment stage of endovascular treatment.
  • FIG. 6 conceptually illustrates a general treatment flow when performing PCI.
  • PCI is carried out by the following procedure.
  • a guide wire for guiding the catheter 211 is inserted into the patient's blood vessel.
  • the catheter 211 is inserted to obtain an ultrasonic tomographic image, and the condition of the lesion is confirmed together with the fluoroscopic image.
  • next, if necessary, the blood vessel is dilated with a balloon before stent placement so that the stent can be placed; this balloon expansion before stent placement may be omitted.
  • a stent is placed in the blood vessel and the lesion is expanded with a balloon.
  • the ultrasonic image is acquired again, and the condition of the lesion after expansion is confirmed together with the fluoroscopic image.
  • the blood vessel is dilated again by the balloon. Balloon expansion after stent placement may be omitted.
  • the condition of the lesion is confirmed on each image, and a series of treatments is completed.
  • hereinafter, balloon dilation before stent placement is referred to as "pre-dilation", and balloon dilation after stent placement is referred to as "post-dilation".
  • the server 1 uses the generation model 50 to generate a second medical image after each treatment stage, namely the above-mentioned pre-dilation, stent expansion, and post-dilation, from the first medical image before that stage.
  • specifically, the server 1 configures the generative model 50 as a model that can accept a class label of the input data, such as a Conditional GAN.
  • as label data indicating the class of the first medical image, a value indicating whether the image was acquired before pre-dilation, before stent expansion, or before post-dilation is input.
  • the server 1 generates a second medical image after any of the treatments according to the input label data. As a result, the user can confirm the second medical image that predicts the state after the treatment to be performed step by step.
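The stage conditioning described above might be sketched as follows, with the three stages of FIG. 6 as classes; the one-hot label encoding and the latent size are assumptions, since the text only says a Conditional-GAN-style class label is input.

```python
import numpy as np

# Treatment stages from FIG. 6.
STAGES = ["pre_dilation", "stent_expansion", "post_dilation"]

def stage_label(stage):
    """One-hot label saying which treatment the first medical image precedes."""
    vec = np.zeros(len(STAGES))
    vec[STAGES.index(stage)] = 1.0
    return vec

def condition(latent, stage):
    """Concatenate the class label to the latent vector, Conditional-GAN style."""
    return np.concatenate([latent, stage_label(stage)])

z = np.zeros(64)  # stand-in for the encoders' combined latent vector
print(condition(z, "stent_expansion").shape)  # (67,)
```

With the label appended, a single generator can produce the prediction for whichever stage the user selects in the status column.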
  • FIG. 5 illustrates an example of a display screen when the stent is expanded.
  • the server 1 generates a second medical image after stent expansion according to the treatment stage specified in the status column 504. That is, the server 1 inputs the ultrasonic tomographic image generated by the intravascular diagnostic imaging apparatus 21 before stent expansion, the X-ray fluoroscopic image captured by the fluoroscopic image capturing device 22, and the device information specified by the user into the generation model 50, and generates an ultrasonic tomographic image and an X-ray fluoroscopic image that predict the state after stent expansion. In this case, for example, the server 1 generates, as the second medical image, an ultrasonic tomographic image and an X-ray fluoroscopic image in which the image region corresponding to the stent is shown in a display mode different from that of the other regions.
  • the server 1 outputs the generated second medical image and displays it on the display device 23.
  • the display device 23 displays the ultrasonic tomographic image and the X-ray perspective image corresponding to the second medical image as the predicted tomographic image 505 and the predicted perspective image 506.
  • although FIG. 5 shows two-dimensional ultrasonic tomographic and X-ray fluoroscopic images as the second medical image, the server 1 may reconstruct a three-dimensional blood vessel image from the two-dimensional ultrasonic tomographic images and the X-ray fluoroscopic image and display it on the display device 23.
  • specifically, the server 1 sequentially inputs a plurality of ultrasonic tomographic images (cross-sectional images) sequentially acquired from the intravascular diagnostic imaging apparatus 21, together with the X-ray fluoroscopic image, into the generation model 50, and generates, as the second medical image, a plurality of ultrasonic tomographic images and an X-ray fluoroscopic image that predict the state after treatment.
  • the server 1 aligns each ultrasonic tomographic image with the X-ray fluoroscopic image according to the position of the catheter 211 detected by means such as a radiopaque marker during the ultrasonic examination, and converts them into a three-dimensional image by a known method. Thereby, the state in the blood vessel after the treatment can be expressed in three dimensions and presented to the user.
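The alignment-and-stacking step can be sketched as below, assuming each frame's position along the vessel has already been read from the radiopaque marker; the nearest-neighbour resampling is a simplification standing in for the "known method" the text mentions, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins: predicted post-treatment tomographic frames and, for each
# frame, the catheter position (mm along the vessel) read from the
# radiopaque marker in the simultaneous fluoroscopic image.
frames = [rng.standard_normal((32, 32)) for _ in range(5)]
positions = [0.0, 0.4, 0.9, 1.5, 2.0]  # pullback need not be uniform

def to_volume(frames, positions, spacing=0.5):
    """Resample the frames onto an even grid along the vessel axis
    (nearest neighbour) and stack them into a (depth, H, W) volume."""
    zs = np.arange(positions[0], positions[-1] + 1e-9, spacing)
    pos = np.asarray(positions)
    idx = [int(np.argmin(np.abs(pos - z))) for z in zs]
    return np.stack([frames[i] for i in idx])

vol = to_volume(frames, positions)
print(vol.shape)  # (5, 32, 32)
```

The resulting volume can then be rendered as the three-dimensional blood vessel image that the display device 23 presents.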
  • in the above description, a single pattern of designated input of device information is accepted and a single second medical image is displayed; however, designated inputs of a plurality of patterns of device information may be accepted and a plurality of corresponding second medical images displayed.
  • the server 1 accepts designated inputs of a plurality of stents as candidates for stents used for treatment, separately generates a second medical image when each stent is used, and displays the preview on the display device 23. Thereby, the convenience of the user can be improved.
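Generating one preview per candidate stent might be organised as in this sketch; the candidate fields and the `model` callable are placeholders, not an actual API.

```python
def preview_candidates(first_image, candidates, model):
    """Generate one predicted post-treatment image per candidate stent.
    `model` stands in for the generation model 50; fields are illustrative."""
    return {c["name"]: model(first_image, c) for c in candidates}

previews = preview_candidates(
    "IVUS+Xray",
    [{"name": "stent_A", "length_mm": 18},
     {"name": "stent_B", "length_mm": 23}],
    lambda img, dev: f"pred[{dev['name']}]")
print(sorted(previews))  # ['stent_A', 'stent_B']
```

Each entry in `previews` would be shown side by side on the display device 23 so the user can compare the predicted outcomes before choosing a device.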
  • FIG. 7 is a flowchart showing the procedure of the generation process of the generation model 50. Based on FIG. 7, the processing content when the generation model 50 is generated by machine learning will be described.
  • the control unit 11 of the server 1 acquires training data on patients who have undergone treatment, including a first medical image which is a pre-treatment medical image, device information on the therapeutic devices used for the treatment, and a second medical image which is a post-treatment medical image (step S11). Specifically, for patients who have undergone endovascular treatment, the control unit 11 acquires pre-treatment ultrasonic tomographic and X-ray fluoroscopic images as the first medical image, information on the stents, balloons, and the like used for the endovascular treatment as the device information, and post-treatment ultrasonic tomographic and X-ray fluoroscopic images as the second medical image.
  • based on the training data, the control unit 11 generates a generation model 50 that generates a second medical image when the first medical image and the device information are input (step S12). Specifically, the control unit 11 generates a GAN that takes the pre-treatment ultrasonic tomographic image, the X-ray fluoroscopic image, and the device information as inputs and outputs the post-treatment ultrasonic tomographic and X-ray fluoroscopic images. The control unit 11 then ends the series of processes.
  • FIG. 8 is a flowchart showing a procedure for generating a second medical image. Based on FIG. 8, the processing content when generating the second medical image will be described.
  • the control unit 11 of the server 1 acquires device information regarding a therapeutic device used for treating a patient's luminal organ from the diagnostic imaging system 2 (step S31). Further, the control unit 11 acquires the first medical image from the diagnostic imaging system 2 (step S32). Specifically, as described above, the control unit 11 acquires an ultrasonic tomographic image of the patient to be treated and an X-ray fluoroscopic image taken at the same time as the ultrasonic examination.
  • the control unit 11 determines whether or not pre-expansion before stent placement is necessary in response to an operation input from the user (step S33). When it is determined that pre-expansion is not necessary (S33: NO), the control unit 11 shifts the process to step S36. When it is determined that pre-expansion is necessary (S33: YES), the control unit 11 inputs the first medical image acquired in step S32 into the generation model 50 to generate the second medical image after pre-expansion, and outputs it to the display device 23 (step S34).
  • the control unit 11 determines whether or not the pre-expansion is completed in response to the operation input from the user (step S35). When it is determined that the pre-expansion is not completed (S35: NO), the control unit 11 waits for processing.
  • control unit 11 When it is determined that the pre-expansion is completed (S35: YES), the control unit 11 generates a second medical image after the stent expansion and outputs it to the display device 23 (step S36).
  • the control unit 11 determines whether or not post-extension is necessary in response to an operation input from the user (step S37). When it is determined that post-extension is not necessary (S37: NO), the control unit 11 ends a series of processes. When it is determined that post-expansion is necessary (S37: YES), the control unit 11 reacquires the first medical image (step S38). The control unit 11 inputs the acquired first medical image into the generation model 50, generates the second medical image after post-expansion, and outputs the second medical image to the display device 23 (step S39). The control unit 11 ends a series of processes.
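The branching of FIG. 8 (steps S31 to S39) can be mirrored in a small control-flow sketch; all callables and arguments here are placeholders, not an actual API of the server 1.

```python
def generate_second_images(acquire_first_image, device_info, model,
                           need_pre_dilation, need_post_dilation, display):
    """Mirror of the FIG. 8 flowchart; `model(image, info, stage)` stands
    in for the generation model 50."""
    first = acquire_first_image()                           # S32
    if need_pre_dilation:                                   # S33
        display(model(first, device_info, "pre_dilation"))  # S34
        # S35: the real system waits here until pre-dilation is completed
    display(model(first, device_info, "stent_expansion"))   # S36
    if need_post_dilation:                                  # S37
        first = acquire_first_image()                       # S38: re-acquire
        display(model(first, device_info, "post_dilation")) # S39

shown = []
generate_second_images(
    acquire_first_image=lambda: "IVUS+Xray",
    device_info={"stent_length_mm": 18},
    model=lambda img, info, stage: f"pred[{stage}]",
    need_pre_dilation=True, need_post_dilation=True,
    display=shown.append)
print(shown)
# ['pred[pre_dilation]', 'pred[stent_expansion]', 'pred[post_dilation]']
```

Note how the first medical image is re-acquired before the post-dilation prediction (step S38), matching the flowchart: the prediction for the final stage is based on the vessel's state after stent expansion, not on the original pre-treatment image.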
  • As described above, according to the first embodiment, a second medical image predicting the post-treatment state can be generated from the pre-treatment first medical image and presented to the user.
  • According to the first embodiment, a second medical image after each of pre-dilation, stent dilation, and post-dilation can be generated in accordance with the course of the endovascular treatment and presented to the user.
  • Further, by accepting a designation input of the target site on the X-ray fluoroscopic image, the device information can be suitably specified.
  • It is also possible to provide an image that represents the post-treatment state three-dimensionally, reconstructed from a plurality of ultrasonic tomographic images (cross-sectional images) sequentially generated using the generative model 50 and from the X-ray fluoroscopic image.
  • Further, the generation accuracy of the second medical image can be improved by using lesion information, in addition to the device information, as input to the generative model 50.
  • FIG. 9 is an explanatory diagram of the generative model 50 according to the second embodiment.
  • The generative model 50 according to the present embodiment does not include an encoder and a decoder for processing the X-ray fluoroscopic image. It has only two encoders that respectively accept input of the pre-treatment ultrasonic tomographic image and of the device information, a decoder that generates the post-treatment ultrasonic tomographic image, and a classifier that discriminates the authenticity of the ultrasonic tomographic image generated by the decoder.
  • The server 1 generates the generative model 50 shown in FIG. 9 by using, as training data, the first and second medical images for training, namely ultrasonic tomographic images captured before and after treatment of patients who have undergone endovascular treatment.
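Purely as an illustration, the two-encoder/one-decoder arrangement with a classifier can be sketched as a toy forward pass. Every size, weight, and the fusion-by-addition of the two latent vectors is an assumption for this example, not the disclosed architecture; in practice these components would be trained adversarially as a generator and discriminator:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, w):          # flatten the input and project it to a feature vector
    return np.tanh(x.reshape(-1) @ w)

def decoder(z, w, shape):   # project the fused latent back to image space
    return np.tanh(z @ w).reshape(shape)

def classifier(img, w):     # "authenticity" score in (0, 1), as a sigmoid
    s = img.reshape(-1) @ w
    return 1.0 / (1.0 + np.exp(-s))

H = W = 8                   # toy tomographic-image size (hypothetical)
D = 4                       # length of the device-information vector (hypothetical)
Z = 16                      # latent feature size (hypothetical)

w_img = rng.normal(0, 0.1, (H * W, Z))  # encoder for the pre-treatment image
w_dev = rng.normal(0, 0.1, (D, Z))      # encoder for the device information
w_dec = rng.normal(0, 0.1, (Z, H * W))  # decoder to the post-treatment image
w_clf = rng.normal(0, 0.1, H * W)       # classifier weights

pre_image = rng.normal(size=(H, W))             # first medical image (toy data)
device_info = np.array([3.0, 18.0, 1.0, 0.0])   # e.g. diameter, length, flags

z = encoder(pre_image, w_img) + encoder(device_info, w_dev)  # fuse both inputs
post_image = decoder(z, w_dec, (H, W))   # generated second medical image
score = classifier(post_image, w_clf)    # real/fake judgment of the output
```

The point of the sketch is only the data flow: two separate encoders feed one decoder, and the classifier scores the decoder output, mirroring the structure described for FIG. 9.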
  • FIG. 10 is an explanatory diagram showing an example of a display screen of a medical image according to the second embodiment.
  • The display device 23 displays the current tomographic image 501 corresponding to the first medical image, as in the first embodiment.
  • The display device 23 further displays a longitudinal tomographic image 521 reconstructed from a plurality of ultrasonic tomographic images (cross-sectional images).
  • The longitudinal tomographic image 521 is a longitudinal cross-sectional image obtained by reconstructing, along the longitudinal direction of the blood vessel, a plurality of cross-sectional images continuously captured by the intravascular diagnostic imaging apparatus 21.
  • The server 1 reconstructs the longitudinal tomographic image 521 from a plurality of cross-sectional images (frame images) continuously captured along the longitudinal direction of the blood vessel as the catheter 211 is scanned, and displays the longitudinal tomographic image 521 on the display device 23.
  • The display device 23 accepts a drawing input of a rectangular frame on the longitudinal tomographic image 521, thereby accepting a designation input of the site in the blood vessel to be treated.
  • The display device 23 also accepts designation input of the other device information in the designation field 503, as in the first embodiment.
  • The server 1 inputs the ultrasonic tomographic image corresponding to the first medical image and the device information into the generative model 50, and generates an ultrasonic tomographic image corresponding to the second medical image. The server 1 then displays the generated ultrasonic tomographic image as the predicted tomographic image 505. As the predicted tomographic image 505, not only the cross-sectional image but also a longitudinal tomographic image may be displayed.
  • Further, a plurality of ultrasonic tomographic images sequentially generated by the generative model 50 may be reconstructed into a three-dimensional image and displayed on the display device 23.
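For illustration, reconstructing a longitudinal section and a volume from a stack of cross-sectional frames can be sketched with array operations. The frame count, frame size, and the choice of slicing at the mid-plane are assumptions for the example, not values from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stack of cross-sectional IVUS frames captured along the vessel axis
# during catheter pullback (sizes are arbitrary for this example).
n_frames, h, w = 32, 16, 16
frames = [rng.random((h, w)) for _ in range(n_frames)]

# Stacking the frames along the pullback axis yields a 3-D volume ...
volume = np.stack(frames, axis=0)    # shape: (n_frames, h, w)

# ... and slicing that volume along the axis (here at the image mid-plane)
# gives a longitudinal tomographic image like image 521 in the description.
longitudinal = volume[:, h // 2, :]  # shape: (n_frames, w)
```

The same stacking applies whether the frames are captured images (first medical images) or images sequentially generated by the model, which is how a three-dimensional post-treatment view could be assembled.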
  • The display device 23 accepts, from the user, input of correction information for correcting the second medical image generated by the generative model 50 on the display screen illustrated in FIG. Specifically, the display device 23 accepts a correction input for correcting the position, length, diameter (width), and the like of the stent (therapeutic device) displayed in color. For example, the display device 23 accepts a drawing input for newly drawing the boundary (edge) of the image region corresponding to the stent on the predicted tomographic image 505 or the predicted fluoroscopic image 506. The display device 23 transmits the second medical image on which the image region corresponding to the stent has been drawn by the user to the server 1 as correction information, and causes the server 1 to perform re-learning.
  • In the present embodiment, input of correction information is accepted for the second medical image after stent dilation, but input of correction information may also be accepted for the second medical image after pre-dilation or post-dilation.
  • The server 1 performs re-learning based on the correction information acquired from the display device 23 and updates the generative model 50. Specifically, the server 1 gives to the generative model 50, as training data for re-learning, the first medical image and the device information that were input when the second medical image was generated, together with the second medical image on which the image region of the stent has been drawn, and updates the parameters of the generator and the classifier.
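The re-learning data flow can be sketched as follows. The `Sample` and `GenerativeModel` structures and the `relearn` step are illustrative assumptions; in the actual system the generator and classifier parameters would be updated by ordinary GAN training on the accumulated corrections:

```python
# Sketch of how user-corrected outputs become re-training data for model 50
# (hypothetical structures; the real update is adversarial training).
from dataclasses import dataclass, field

@dataclass
class Sample:
    first_image: str    # pre-treatment medical image (training input)
    device_info: dict   # therapeutic-device information (training input)
    second_image: str   # user-corrected post-treatment image (training target)

@dataclass
class GenerativeModel:
    version: int = 0
    dataset: list = field(default_factory=list)

    def relearn(self, sample: Sample):
        # The (first image, device info) pair that produced the prediction is
        # reused as the training input; the corrected image is the target.
        self.dataset.append(sample)
        self.version += 1   # stands in for a generator/classifier update

model = GenerativeModel()
correction = Sample("ivus_frame_1",
                    {"stent_diameter_mm": 3.0},
                    "ivus_frame_1_corrected")
model.relearn(correction)
```

The design point mirrored here is that the correction reuses the exact inputs of the original prediction, so each corrected case becomes a supervised training pair without any new acquisition.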
  • FIG. 11 is a flowchart showing a procedure for generating a second medical image according to the third embodiment.
  • The server 1 executes the following processing.
  • The control unit 11 of the server 1 accepts input of correction information for the output second medical image (step S201). For example, the control unit 11 accepts a correction input for correcting the position, length, diameter, and the like of the stent on the second medical image.
  • The control unit 11 then shifts the process to step S37.
  • After a NO determination in step S37, or after executing the process of step S39, the control unit 11 updates the generative model 50 based on the correction information input in step S201 (step S202). Specifically, the control unit 11 performs re-learning using, as input data for training, the first medical image and the device information that were input when the second medical image was generated in step S36, and using, as output data for training, the corrected version of the second medical image generated in step S36, and updates the parameters of the generator and the classifier. The control unit 11 then ends the series of processes.
  • As described above, the generation accuracy of the second medical image can be improved through the operation of the present system.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The present invention relates to a program that causes a computer to execute processing for acquiring a first medical image in which a patient's hollow organ before treatment is imaged and device information relating to a therapeutic device used for treating the hollow organ, and for generating a post-treatment second medical image by inputting the acquired first medical image and the acquired device information into a model trained to generate the second medical image when the first medical image and the device information are input.
PCT/JP2021/009301 2020-03-27 2021-03-09 Programme, procédé de traitement d'informations, dispositif de traitement d'informations et procédé de génération de modèle WO2021193021A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022509536A JPWO2021193021A1 (fr) 2020-03-27 2021-03-09

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-058996 2020-03-27
JP2020058996 2020-03-27

Publications (1)

Publication Number Publication Date
WO2021193021A1 true WO2021193021A1 (fr) 2021-09-30

Family

ID=77891479

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/009301 WO2021193021A1 (fr) 2020-03-27 2021-03-09 Programme, procédé de traitement d'informations, dispositif de traitement d'informations et procédé de génération de modèle

Country Status (2)

Country Link
JP (1) JPWO2021193021A1 (fr)
WO (1) WO2021193021A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018525074A (ja) * 2015-07-08 2018-09-06 Apparatus and method for anatomical mapping for prosthetic implants
WO2019002526A1 (fr) * 2017-06-29 2019-01-03 Koninklijke Philips N.V. Device and method for predicting an unfolded state of a foldable implant in biological tissue
JP2019510547A (ja) * 2016-02-16 2019-04-18 Mentice AB System and method for routing a conduit line, such as a catheter, within a vessel
JP2020503909A (ja) * 2016-09-28 2020-02-06 LightLab Imaging, Inc. Stent planning system and method using vessel representation

Also Published As

Publication number Publication date
JPWO2021193021A1 (fr) 2021-09-30

Similar Documents

Publication Publication Date Title
CN112368781A (zh) 基于机器学习来评估血管阻塞的方法和系统
CN114126491B (zh) 血管造影图像中的冠状动脉钙化的评估
US20220198784A1 (en) System and methods for augmenting x-ray images for training of deep neural networks
JP7536861B2 (ja) プログラム、情報処理方法、情報処理装置及びモデル生成方法
WO2022071121A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
WO2022071181A1 (fr) Dispositif de traitement d'information, procédé de traitement d'information, programme, et procédé de génération de modèle
CN112309574A (zh) 用于变形模拟的方法和装置
US20240013385A1 (en) Medical system, method for processing medical image, and medical image processing apparatus
US20240013386A1 (en) Medical system, method for processing medical image, and medical image processing apparatus
WO2021193015A1 (fr) Programme, procédé de traitement d'informations, dispositif de traitement d'informations et procédé de génération de modèle
WO2021193021A1 (fr) Programme, procédé de traitement d'informations, dispositif de traitement d'informations et procédé de génération de modèle
JP7536862B2 (ja) プログラム、情報処理方法、情報処理装置及びモデル生成方法
WO2021193026A1 (fr) Programme, procédé de traitement d'informations, dispositif de traitement d'informations, et procédé de génération de modèles
WO2021193018A1 (fr) Programme, procédé de traitement d'informations, dispositif de traitement d'informations et procédé de génération de modèle
Breininger Machine learning and deformation modeling for workflow-compliant image fusion during endovascular aortic repair
CN114648536A (zh) 血管壁的提取方法及装置
JP2022142607A (ja) プログラム、画像処理方法、画像処理装置及びモデル生成方法
WO2021193022A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JP7545466B2 (ja) プログラム、情報処理方法、情報処理装置及びモデル生成方法
WO2021199967A1 (fr) Programme, procédé de traitement d'informations, procédé de génération de modèle d'apprentissage, procédé de réapprentissage de modèle d'apprentissage, et système de traitement d'informations
WO2021199966A1 (fr) Programme, procédé de traitement d'informations, procédé de génération de modèle d'apprentissage, procédé de réapprentissage pour modèle d'apprentissage, et système de traitement d'informations
WO2023100838A1 (fr) Programme informatique, dispositif de traitement d'informations, procédé de traitement d'informations et procédé de génération de modèle d'apprentissage
WO2021199962A1 (fr) Programme, procédé de traitement d'informations et dispositif de traitement d'informations
US20240008849A1 (en) Medical system, method for processing medical image, and medical image processing apparatus
WO2024071322A1 (fr) Procédé de traitement d'informations, procédé de génération de modèle d'apprentissage, programme informatique et dispositif de traitement d'informations

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21776414

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022509536

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21776414

Country of ref document: EP

Kind code of ref document: A1