WO2024071251A1 - Computer program, information processing method, information processing device, and learning model - Google Patents


Info

Publication number
WO2024071251A1
Authority
WO
WIPO (PCT)
Prior art keywords
plaque
stent
image
lesion
information
Prior art date
Application number
PCT/JP2023/035280
Other languages
French (fr)
Japanese (ja)
Inventor
Takanori Tominaga (富永 貴則)
Original Assignee
Terumo Corporation (テルモ株式会社)
Priority date
Filing date
Publication date
Application filed by Terumo Corporation
Publication of WO2024071251A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B 1/045: Control thereof
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters

Definitions

  • The present invention relates to a computer program, an information processing method, an information processing device, and a learning model for processing medical images.
  • Medical catheters are used for diagnosing or treating lesions in hollow organs such as blood vessels. Diagnostic catheters are equipped with ultrasonic sensors or light-receiving sensors and are advanced into the organ, and images based on the signals obtained from the sensors are used for diagnosis.
  • Diagnostic imaging of hollow organs, particularly blood vessels, is essential for the safe and reliable performance of procedures such as percutaneous coronary intervention (PCI).
  • Intravascular imaging techniques that use medical catheters, such as IVUS (Intravascular Ultrasound) and OCT (Optical Coherence Tomography), are becoming widespread.
  • Doctors and other medical professionals refer to medical images based on these imaging technologies to understand the condition of hollow organs, make diagnoses, and provide treatment.
  • Various technologies have been proposed for generating and displaying information that assists in interpreting such medical images through image processing or calculation (Patent Document 1, etc.).
  • Medical professionals interpret the medical images to understand the anatomical characteristics of the patient's hollow organ and the condition of the affected area, and then perform treatment by expanding the narrowed hollow organ with a balloon attached to the tip of a treatment catheter, or by placing a stent inside the hollow organ. At this time, it is desirable to output information, based on the medical images, that allows the extent of the affected area and the condition of the surrounding hollow organ to be accurately understood.
  • The purpose of this disclosure is to provide a computer program, an information processing method, an information processing device, and a learning model capable of displaying the appropriate information required for making judgments regarding medical images.
  • The computer program of the present disclosure causes a computer to execute a process of calculating data indicating anatomical characteristics of a tubular organ based on a signal output from an imaging device provided on a catheter inserted into the tubular organ, identifying the range on the longitudinal axis of the tubular organ in which a lesion exists based on the calculated data indicating anatomical characteristics, and outputting information for placing a stent in the tubular organ based on the identified range of the lesion on the longitudinal axis.
  • The computer is caused to execute a process of outputting information on the landing zone of the stent as the information for placing the stent in the tubular organ.
  • The computer is caused to execute a process of outputting, as the information for placing the stent in the tubular organ, information on the position of a reference portion, which is a portion where the lumen is larger, located before and after the lesion's range on the long axis.
  • The computer is caused to execute a process of changing the position of the reference portion based on information on other lesions in the vicinity of the lesion's range on the long axis, and outputting information on the position of the reference portion after the change.
  • The computer is caused to execute a process of outputting a proposal for the size of the stent to be placed, based on the data indicating the anatomical features and the position of the reference portion on the long axis.
  • The imaging device is a catheter device that includes transmitters and receivers for waves of different wavelengths.
  • The imaging device is a dual-type catheter device that includes a transmitter and a receiver for each of IVUS and OCT.
  • The lesions include different types of lesion, such as lipid plaque, fibrous plaque, and calcified plaque.
  • The computer is caused to execute a process of identifying the longitudinal extent of each of the different types of lesion in the tubular organ, based on the signals from the catheter device corresponding to those types.
  • The computer is caused to execute a process of calculating the longitudinal distribution of plaque burden from tomographic images of the tubular organ based on the signal obtained from an IVUS sensor, identifying the longitudinal position of lipid plaque or fibrous plaque in the tubular organ based on the signal obtained from an OCT sensor, and outputting information for placing a stent based on the distribution of plaque burden and the position of the lipid plaque or fibrous plaque.
  • The computer is caused to execute a process of calculating the longitudinal distribution of plaque burden from tomographic images of the tubular organ based on the signal obtained from an IVUS sensor, identifying the longitudinal position of lipid plaque in the tubular organ based on the signal obtained from the IVUS sensor, identifying the longitudinal position of lipid plaque or fibrous plaque in the tubular organ based on the signal obtained from an OCT sensor, and outputting information for placing a stent based on the distribution of plaque burden and the positions of the lipid plaque or fibrous plaque.
  • In the information processing method of the present disclosure, a computer acquires a signal output from an imaging device provided on a catheter inserted into a tubular organ, calculates data indicating anatomical characteristics of the tubular organ based on that signal, identifies the range on the longitudinal axis of the tubular organ in which a lesion exists based on the calculated data indicating anatomical characteristics, and outputs information for placing a stent in the tubular organ based on the identified range of the lesion on the longitudinal axis.
  • The information processing device of the present disclosure acquires a signal output from an imaging device provided on a catheter inserted into a tubular organ, and includes a storage unit that stores a trained model which, when a tomographic image of the tubular organ based on the signal is input, outputs data for distinguishing the ranges of tissue or lesions shown in the tomographic image, and a processing unit that executes image processing based on the signal from the imaging device. The processing unit inputs an image based on the signal into the model, calculates data indicating anatomical characteristics of the tubular organ based on the data output from the model, identifies the range on the long axis of the tubular organ where the lesion exists based on the calculated data, and outputs information for placing a stent in the tubular organ based on the identified range of the lesion on the long axis.
  • The learning model according to the present disclosure includes an input layer to which the distribution, in the longitudinal direction of a tubular organ, of data indicating the anatomical characteristics of the tubular organ is input; an output layer that outputs the suitability of a stent to be placed in a lesion of the tubular organ; and an intermediate layer trained on teacher data that includes such distributions and the track record of stents used for lesions having those distributions. The model causes a computer to function such that the distribution is provided to the input layer, calculation is performed based on the intermediate layer, and the suitability of a stent corresponding to the distribution is output from the output layer.
  • According to the present disclosure, it is possible to output information regarding an appropriate position for placing a stent in a luminal organ based on data showing the anatomical characteristics of the luminal organ. This is expected to enable appropriate selection of the stent size and to improve the accuracy of diagnosis and treatment.
  • FIG. 1 is a schematic diagram of an imaging diagnostic device.
  • FIG. 2 is an explanatory diagram showing the operation of the catheter.
  • FIG. 3 is a block diagram showing the configuration of an image processing device.
  • FIG. 4 is a schematic diagram of a segmentation model.
  • FIG. 5 is a flowchart illustrating an example of an information processing procedure performed by the image processing device.
  • FIG. 6 is a flowchart illustrating an example of an information processing procedure performed by the image processing device.
  • FIG. 7 shows an example of a screen displayed on a display device.
  • FIGS. 8A to 8C are diagrams showing a process of changing the position of a reference portion.
  • FIG. 9 shows another example of a screen displayed on the display device.
  • FIG. 10 is a block diagram showing the configuration of an image processing device according to a second embodiment.
  • FIG. 11 is a schematic diagram of a plaque detection model.
  • FIG. 12 is a flowchart illustrating an example of an information processing procedure by an image processing device according to the second embodiment.
  • FIG. 13 is a flowchart illustrating an example of an information processing procedure by an image processing device according to the second embodiment.
  • FIG. 14 shows an example of a screen displayed on the display device.
  • FIG. 15 is a block diagram showing the configuration of an image processing device according to a third embodiment.
  • FIG. 16 is a schematic diagram of a stent information model.
  • FIG. 17 is a flowchart illustrating an example of a process for generating a stent information model.
  • FIG. 18 is a flowchart illustrating an example of an information processing procedure by an image processing device according to the third embodiment.
  • FIG. 19 is a flowchart illustrating an example of an information processing procedure by an image processing device according to the third embodiment.
  • FIG. 20 shows an example of a screen displayed on the display device.
  • First Embodiment: FIG. 1 is a schematic diagram of the image diagnostic apparatus 100.
  • The image diagnostic apparatus 100 includes a catheter 1, an MDU (Motor Drive Unit) 2, an image processing device (information processing device) 3, a display device 4, and an input device 5.
  • The catheter 1 is a flexible tube for medical use.
  • The catheter 1 is known as an imaging catheter; it has an imaging device 11 at its tip, which is rotated in the circumferential direction by drive applied from the base end.
  • The imaging device 11 of the catheter 1 is a dual-type catheter device that includes transmitters and receivers for waves of different wavelengths (ultrasound and light).
  • The imaging device 11 includes an ultrasound probe with an ultrasound transducer and an ultrasound sensor for the IVUS method, and an OCT device with a near-infrared laser and a near-infrared sensor.
  • The OCT device includes an optical element with lens and reflecting functions at its tip, and may have a structure that guides light to the near-infrared laser and near-infrared sensor connected via an optical fiber.
  • The dual-type combination is not limited to IVUS and OCT; near-infrared spectroscopy or the like may also be used.
  • The MDU 2 is a drive unit attached to the base end of the catheter 1, and controls the operation of the catheter 1 by driving its internal motor in response to the operations of the examination operator.
  • The image processing device 3 generates multiple medical images, such as cross-sectional images of blood vessels, based on the signal output from the imaging device 11 of the catheter 1.
  • The configuration of the image processing device 3 will be described in detail later.
  • The display device 4 uses a liquid crystal display panel, an organic EL (Electro Luminescence) display panel, or the like.
  • The display device 4 displays the medical images generated by the image processing device 3 and information related to those images.
  • The input device 5 is an input interface that accepts operations on the image processing device 3.
  • The input device 5 may be a keyboard, a mouse, or the like, or may be a touch panel, soft keys, or hard keys built into the display device 4.
  • The input device 5 may also accept operations by voice input, in which case it uses a microphone and a voice recognition engine.
  • FIG. 2 is an explanatory diagram showing the operation of the catheter 1.
  • The catheter 1 is inserted by the examination operator into a tubular blood vessel L, along a guide wire W inserted into the coronary artery shown in the figure.
  • In the figure, the right side corresponds to the distal side from the insertion point of the catheter 1 and guide wire W, and the left side corresponds to the proximal side.
  • The catheter 1 is driven by the MDU 2 to move from the distal end toward the proximal end within the blood vessel L, as shown by the arrow in the figure, and while rotating in the circumferential direction, the imaging device 11 scans the blood vessel in a spiral manner.
  • The image processing device 3 acquires the signals for each scan output from the imaging device 11 of the catheter 1 for both IVUS and OCT.
  • One scan is one cycle of the spiral scan, in which a detection wave is emitted from the imaging device 11 in the radial direction and the reflection is detected.
  • For each of IVUS and OCT, the image processing device 3 arranges the signals of each scan, aligned in the radial direction over 360 degrees, into a rectangular image (I0 in FIG. 2), and generates a tomographic image (cross-sectional image; I1 in FIG. 2) for every 360 degrees by polar coordinate conversion (inverse conversion).
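As a rough illustration of this polar-coordinate (inverse) conversion, a rectangular scan image can be mapped onto a Cartesian tomographic image as follows. This is a minimal NumPy sketch, not the disclosed implementation: the array layout (angles by radii), the nearest-neighbour lookup, and the function name are assumptions.

```python
import numpy as np

def rect_to_tomographic(rect: np.ndarray, out_size: int) -> np.ndarray:
    """Map a rectangular scan image (angle x radius) to a Cartesian
    tomographic image by inverse polar-coordinate conversion."""
    n_angles, n_radii = rect.shape
    tomo = np.zeros((out_size, out_size), dtype=rect.dtype)
    center = (out_size - 1) / 2.0
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    dx, dy = xs - center, ys - center
    r = np.sqrt(dx ** 2 + dy ** 2)                  # radial distance per pixel
    theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)   # angle in [0, 2*pi)
    # Nearest-neighbour lookup into the rectangular image
    r_idx = np.round(r * (n_radii - 1) / center).astype(int)
    a_idx = np.round(theta * n_angles / (2 * np.pi)).astype(int) % n_angles
    valid = r_idx < n_radii                         # pixels inside the scan radius
    tomo[valid] = rect[a_idx[valid], r_idx[valid]]
    return tomo
```

In practice an interpolating remap (e.g. as provided by image-processing libraries) would be used instead of nearest-neighbour lookup; the sketch only shows the coordinate relationship between I0 and I1.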
  • Hereinafter, the rectangular image generated based on the IVUS signal is referred to as rectangular image I01 and the corresponding tomographic image as tomographic image I11, and these are distinguished from the rectangular image I02 and tomographic image I12 generated based on the OCT signal.
  • The tomographic images I11 and I12 are also called frame images.
  • The reference point (center) of the tomographic images I11 and I12 corresponds to the range occupied by the catheter 1 (not imaged).
  • The image processing device 3 may further generate a long axis image (longitudinal cross-sectional image) in which the pixel values on a line passing through the reference point of each of the tomographic images I11 and I12 are arranged along the length (longitudinal direction) of the blood vessel traversed by the catheter 1.
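The construction of such a long axis image can be sketched as follows: take the pixel values on one line through each frame's reference point and stack them along the pull-back direction. This is an illustrative sketch only; the choice of the center row as the sampling line and the function name are assumptions.

```python
import numpy as np

def long_axis_image(frames: list[np.ndarray]) -> np.ndarray:
    """Build a longitudinal cross-section: take the pixel values on a
    horizontal line through each frame's reference point (center row)
    and stack them along the pull-back (long-axis) direction."""
    rows = [f[f.shape[0] // 2, :] for f in frames]   # one sampled line per frame
    return np.stack(rows, axis=1)                    # columns ordered along the vessel
```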
  • The image processing device 3 calculates data indicating the anatomical characteristics of the blood vessel based on the obtained rectangular images I01 and I02, tomographic images I11 and I12, or long axis image, and outputs the tomographic images or the long axis image together with the calculated data so that they can be viewed by a doctor, examination operator, or other medical personnel.
  • The image processing device 3 performs image processing on the rectangular images I01 and I02, the tomographic images I11 and I12, or the long axis image to output images that make it easier to grasp the anatomical features of the blood vessel and the state of the lesion. Specifically, the image processing device 3 outputs information for placing a stent in an area that includes the lesion. The output processing by the image processing device 3 is described in detail below.
  • FIG. 3 is a block diagram showing the configuration of the image processing device 3.
  • The image processing device 3 is a computer, and includes a processing unit 30, a storage unit 31, and an input/output I/F 32.
  • The processing unit 30 includes one or more CPUs (Central Processing Units), MPUs (Micro-Processing Units), GPUs (Graphics Processing Units), GPGPUs (General-Purpose computing on Graphics Processing Units), TPUs (Tensor Processing Units), or the like.
  • The processing unit 30 incorporates a temporary storage medium such as a RAM (Random Access Memory), and performs calculations based on a computer program P3 stored in the storage unit 31 while storing data generated during processing in the temporary storage medium.
  • The storage unit 31 is a non-volatile storage medium such as a hard disk or flash memory.
  • The storage unit 31 stores the computer program P3 read by the processing unit 30, setting data, and the like.
  • The storage unit 31 also stores a trained segmentation model 31M.
  • The segmentation model 31M includes a first model 311M trained on IVUS tomographic images I11 and a second model 312M trained on OCT tomographic images I12.
  • The computer program P3 and the segmentation model 31M may be copies of a computer program P9 and a segmentation model 91M stored in a non-transitory storage medium 9 outside the device, read in via the input/output I/F 32.
  • The computer program P3 and the segmentation model 31M may also be distributed by a remote server device, acquired by the image processing device 3 via a communication unit (not shown), and stored in the storage unit 31.
  • The input/output I/F 32 is an interface to which the catheter 1, the display device 4, and the input device 5 are connected.
  • The processing unit 30 acquires the signal (digital data) output from the imaging device 11 via the input/output I/F 32.
  • The processing unit 30 outputs screen data for a screen including the generated tomographic images I11, I12 and/or the long axis image to the display device 4 via the input/output I/F 32.
  • The processing unit 30 accepts the operation information input to the input device 5 via the input/output I/F 32.
  • FIG. 4 is a schematic diagram of the segmentation model 31M. Of the first model 311M and the second model 312M that constitute the segmentation model 31M, FIG. 4 shows the first model 311M. The configuration of the second model 312M is the same as that of the first model 311M except that the images to be learned differ, so its illustration and detailed description are omitted.
  • The first model 311M and the second model 312M that make up the segmentation model 31M are each models trained to output, when an image is input, an image showing the areas of one or more objects appearing in that image.
  • The first model 311M is, for example, a model that performs semantic segmentation.
  • The first model 311M is designed to output an image in which each pixel of the input image is tagged with data indicating the object to which the pixel belongs.
  • The first model 311M uses, for example, a so-called U-Net, in which convolution layers, pooling layers, upsampling layers, and a softmax layer are symmetrically arranged, as shown in FIG. 4.
  • The first model 311M outputs a tag image IS1.
  • The tag image IS1 is obtained by tagging, with different pixel values, the pixels at the positions of the lumen range of the blood vessel, the membrane range corresponding to the area between the lumen boundary of the blood vessel (including the tunica media) and the blood vessel boundary, the range in which the guide wire W and its reflection are captured, and the range corresponding to the catheter 1 (shown by different types of hatching and solid color in FIG. 4).
  • The first model 311M further identifies the range of lipid plaque formed in the blood vessel.
  • The first model 311M for IVUS identifies the areas in which fibrous plaque or calcified plaque is captured; IVUS can distinguish between fibrous plaque and calcified plaque.
  • The first model 311M is exemplified here by semantic segmentation and U-Net, but it is, of course, not limited to these.
  • The first model 311M may be a model that performs individual recognition using instance segmentation or the like.
  • The first model 311M is not limited to U-Net, and may use a model based on SegNet or R-CNN, or a model integrated with other edge-extraction processing, etc.
  • The processing unit 30 identifies the blood (lumen) area, intima area, media area, and adventitia area of the blood vessel shown in the tomographic image I11, based on the pixel values in the tag image IS1 obtained by inputting the IVUS tomographic image I11 into the first model 311M and on their coordinates within the image.
  • The processing unit 30 can thereby detect the lumen boundary and the blood vessel boundary of the blood vessel shown in the tomographic image I11. Strictly speaking, the blood vessel boundary is the external elastic membrane (EEM) between the tunica media and the adventitia of the blood vessel.
  • The processing unit 30 identifies the ranges of lipid plaque, fibrous plaque, and calcified plaque based on the pixel values in the tag image IS1 obtained by inputting the tomographic image I11 into the first model 311M for IVUS and on their coordinates within the image.
  • The processing unit 30 similarly inputs the OCT tomographic image I12 into the second model 312M and, based on the pixel values in the tag image IS2 obtained and their coordinates within the image, identifies the blood (lumen) area, calcified plaque, fibrous plaque, and lipid plaque of the blood vessel shown in the tomographic image I12.
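The region lookups described above amount to reading tagged pixel values and their coordinates out of the tag images. A minimal sketch of that lookup follows; the pixel-value-to-class mapping is a hypothetical assumption (the actual tag values are model-specific and not given in the disclosure).

```python
import numpy as np

# Hypothetical tag values; the actual mapping is model-specific.
TAGS = {"lumen": 1, "membrane": 2, "lipid_plaque": 3,
        "fibrous_plaque": 4, "calcified_plaque": 5}

def region_mask(tag_image: np.ndarray, region: str) -> np.ndarray:
    """Boolean mask of the pixels tagged as the requested region."""
    return tag_image == TAGS[region]

def region_pixel_coords(tag_image: np.ndarray, region: str) -> np.ndarray:
    """(row, col) coordinates of every pixel belonging to the region."""
    return np.argwhere(region_mask(tag_image, region))
```

The boolean masks returned here are also the natural input for the area-based calculations (diameters, plaque burden) described later.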
  • The imaging diagnostic device 100 of the present disclosure is used in diagnosis for placing a stent in a lesion in a blood vessel, which is a hollow organ.
  • The examination operator or medical provider performs a scan with the imaging device 11 to evaluate and diagnose the condition of the lesion.
  • The imaging diagnostic device 100 outputs, to the display device 4, information for determining the type and size of the balloon to be used, together with the scanning results of the imaging device 11.
  • The examination operator or medical provider performs a scan with the imaging device 11 to evaluate the condition of the blood vessel opened by the balloon and to determine the type of stent to be placed and where it should be placed.
  • The examination operator or medical provider places the stent of the determined type and size using the catheter 1.
  • The image processing device 3 generates and outputs information indicating the type of stent and the position at which the stent should be placed, based on information obtained by image processing of the IVUS tomographic images I11 and the OCT tomographic images I12. The process of outputting information about the stent is described below.
  • The processing unit 30 of the image processing device 3 identifies the lumen boundary of the vascular lumen range from the ranges identified for each of the IVUS tomographic image I11 and the OCT tomographic image I12, and calculates numerical values such as the maximum diameter, minimum diameter, and average inner diameter inside the lumen boundary. Furthermore, the processing unit 30 calculates, from the identified ranges of calcified plaque, fibrous plaque, and lipid plaque in each of the tomographic images I11 and I12, the ratio of the plaque cross-sectional area to the area inside the vessel boundary (hereinafter referred to as plaque burden).
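These per-frame metrics can be approximated directly from pixel counts in the identified regions. The following is an illustrative sketch, assuming boolean masks for the plaque and vessel regions and a near-circular lumen for the diameter estimate; the `mm_per_pixel` scale and the function names are assumptions, not from the disclosure.

```python
import numpy as np

def plaque_burden(plaque_mask: np.ndarray, vessel_mask: np.ndarray) -> float:
    """Plaque burden (%) for one frame: plaque cross-sectional area as a
    percentage of the area inside the vessel (EEM) boundary, approximated
    by pixel counts."""
    vessel_area = int(vessel_mask.sum())
    if vessel_area == 0:
        return 0.0
    return 100.0 * int(plaque_mask.sum()) / vessel_area

def mean_lumen_diameter(lumen_mask: np.ndarray, mm_per_pixel: float) -> float:
    """Average inner diameter estimated from the lumen area, assuming a
    near-circular lumen: d = 2 * sqrt(area / pi)."""
    area_mm2 = float(lumen_mask.sum()) * mm_per_pixel ** 2
    return 2.0 * np.sqrt(area_mm2 / np.pi)
```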
  • The image processing device 3 of the present disclosure outputs graphs of the distribution of the average lumen diameter and the distribution of plaque burden with respect to position in the longitudinal direction of the blood vessel.
  • The image processing device 3 further outputs, on the graphs showing these distributions, a reference portion to be referred to for placing a stent and candidates for the landing zone of the stent.
  • FIGS. 5 and 6 are flowcharts showing an example of the information processing procedure by the image processing device 3.
  • The processing unit 30 of the image processing device 3 starts the following processing when a signal is output from the imaging device 11 of the catheter 1.
  • The processing unit 30 generates tomographic images I11 and I12 (step S102) each time it acquires a predetermined amount (e.g., 360 degrees) of signals (data) from the imaging device 11 of the catheter 1 for both IVUS and OCT (step S101).
  • The processing unit 30 performs polar coordinate conversion (inverse conversion) on the signals arranged in a rectangle for each of IVUS and OCT to generate the tomographic images I11 and I12.
  • The processing unit 30 stores the signal data acquired in step S101 and the tomographic images I11 and I12 generated in step S102, for each of IVUS and OCT, in the storage unit 31 in association with positions on the long axis of the blood vessel (step S103).
  • The processing unit 30 inputs the IVUS tomographic image I11 into the first model 311M for IVUS (step S104).
  • The processing unit 30 stores the region identification result (tag image IS1) output from the first model 311M in the storage unit 31 in association with the position on the long axis of the blood vessel (step S105).
  • The processing unit 30 inputs the OCT tomographic image I12 into the second model 312M for OCT (step S106).
  • The processing unit 30 stores the region identification result (tag image IS2) output from the second model 312M in the storage unit 31 in association with the position on the long axis of the blood vessel (step S107).
  • The processing unit 30 extracts the necessary area images from the tomographic images I11 and I12 based on the area identification result for the IVUS tomographic image I11 (tag image IS1) and the area identification result for the OCT tomographic image I12 (tag image IS2) (step S108).
  • The processing unit 30 extracts, for example, area images of the membrane area corresponding to the media and adventitia and of the lipid plaque area from the IVUS tomographic image I11, and area images of the lumen area and of the fibrous plaque and calcified plaque areas from the OCT tomographic image I12. That is, from each of the IVUS and OCT images, the processing unit 30 extracts the anatomical features and lesion areas that the respective modality identifies clearly.
  • The processing unit 30 synthesizes the extracted area images to create a corrected tomographic image (step S109).
  • The processing unit 30 calculates the coordinate (angle) shift between the IVUS tomographic image I11 and the OCT tomographic image I12 so that they can be accurately overlaid on each other.
  • The processing unit 30 calculates data indicating anatomical characteristics, including the maximum, minimum, and average inner diameter of the range inside the lumen boundary of the blood vessel and the plaque burden, for the corrected tomographic image (step S110).
  • The processing unit 30 stores the data indicating the anatomical characteristics calculated in step S110 (the average inner diameter inside the lumen boundary, the plaque burden, etc.) in the storage unit 31 in association with the position on the long axis of the blood vessel (step S111). In step S111, the processing unit 30 may also store the angle ranges of lipid plaque, fibrous plaque, or calcified plaque identified in the tomographic images I11 and I12.
  • The processing unit 30 determines whether scanning by the imaging device 11 of the catheter 1 has been completed (step S112). If it determines that scanning has not been completed (S112: NO), the processing unit 30 returns the process to step S101 and generates the next tomographic images I11 and I12.
  • The processing unit 30 creates and outputs a graph showing the distribution of the data indicating anatomical features over the entire longitudinal direction of the scanned blood vessel (step S113).
  • The processing unit 30 identifies the position and range of the lesion (plaque) on the long axis based on the various data stored in the storage unit 31 in association with positions on the long axis of the blood vessel (step S114).
  • The processing unit 30 identifies, for example, as the range of plaque, the positions on the long axis where the plaque burden is equal to or greater than a set percentage threshold (e.g., 50%) continuously over at least a set length threshold (e.g., 4 mm).
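This run detection, finding stretches where the plaque burden stays at or above a percentage threshold for at least a length threshold, can be sketched as follows. The sketch assumes one plaque-burden value per frame and a known frame pitch along the long axis; the names and the inclusive-index convention are illustrative.

```python
def lesion_ranges(burden: list[float], frame_pitch_mm: float,
                  pct_threshold: float = 50.0,
                  min_length_mm: float = 4.0) -> list[tuple[int, int]]:
    """Indices (start, end inclusive) of contiguous runs of frames whose
    plaque burden meets the percentage threshold over at least the
    length threshold along the long axis."""
    ranges, start = [], None
    for i, b in enumerate(burden + [0.0]):       # sentinel ends the last run
        if b >= pct_threshold and start is None:
            start = i
        elif b < pct_threshold and start is not None:
            if (i - start) * frame_pitch_mm >= min_length_mm:
                ranges.append((start, i - 1))
            start = None
    return ranges
```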
  • The processing unit 30 determines a reference portion for the identified lesion based on the position and range of the lesion on the long axis (step S115). In step S115, the processing unit 30 determines, as the reference portion, the portion with the largest lumen diameter before and after the range of the lesion, within a predetermined range (e.g., 10 mm) from the range of the lesion or up to a location where a large side branch is present. In step S115, if the reference portion overlaps with a location where lipid plaque is present, the processing unit 30 re-determines, as the reference portion, the location with the lowest plaque burden outside the range of the lipid plaque.
  • the processing unit 30 stores the position of the determined reference area (step S116).
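The selection and re-determination of reference areas in steps S115 and S116 can be roughly sketched as below. All field names, the frame pitch, and the per-frame dictionary format are hypothetical; the sketch only mirrors the rules stated above (largest lumen diameter within 10 mm on each side, stop at a large side branch, and fall back to the lowest plaque burden among lipid-free frames when lipid plaque overlaps the pick).

```python
# Hedged sketch of reference-area selection; not the patented implementation.

def pick_side(frames, indices):
    """Pick a reference frame index from candidate indices, or None."""
    candidates = []
    for i in indices:
        if frames[i]["has_large_side_branch"]:
            break  # do not search past a large side branch
        candidates.append(i)
    if not candidates:
        return None
    best = max(candidates, key=lambda i: frames[i]["lumen_diameter"])
    if frames[best]["has_lipid"]:  # re-determination: avoid lipid plaque
        lipid_free = [i for i in candidates if not frames[i]["has_lipid"]]
        if lipid_free:
            best = min(lipid_free, key=lambda i: frames[i]["plaque_burden"])
    return best

def choose_references(frames, lesion_start, lesion_end,
                      pitch_mm=0.2, window_mm=10.0):
    """Return (distal_index, proximal_index) reference frames for a lesion
    spanning frame indices [lesion_start, lesion_end)."""
    n_window = int(window_mm / pitch_mm)
    distal = pick_side(frames, range(lesion_start - 1,
                                     max(lesion_start - 1 - n_window, -1), -1))
    proximal = pick_side(frames, range(lesion_end,
                                       min(lesion_end + n_window, len(frames))))
    return distal, proximal
```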
  • the processing unit 30 outputs to the display device 4 the position and range on the long axis of the lesion identified in step S114 and a graphic showing the reference area determined in step S115 on the graph displayed in step S113 (step S117).
  • the processing unit 30 ends the process.
  • Figure 7 shows an example of a screen 400 displayed on the display device 4.
  • the screen 400 shown in Figure 7 is displayed after scanning from the proximal to the distal end of the blood vessel is completed in order to output the type, size, and placement position of the stent.
  • Screen 400 includes cursor 401 indicating the position on the long axis of the blood vessel corresponding to tomographic image I11 or tomographic image I12 to be displayed, tomographic image I11 and tomographic image I12 generated based on the signal obtained at that position, and corrected tomographic image I3.
  • Screen 400 includes data column 402 displaying numerical values of data indicating anatomical features calculated by image processing of tomographic images I11, I12, and I3. Corrected tomographic image I3, IVUS tomographic image I11, and OCT tomographic image I12 may be displayed in a switched manner each time they are selected on screen 400.
  • Screen 400 further includes graphs 403 and 404 showing the distribution of data indicating anatomical characteristics with respect to position on the long axis of the blood vessel.
  • Graph 403 shows the distribution of mean lumen diameter with respect to position on the long axis.
  • Graph 404 shows the distribution of plaque burden with respect to position on the long axis.
  • Graph 404 is displayed with graphic 405 superimposed on it.
  • Graphic 405 indicates the range in which the portion in which the plaque burden is equal to or greater than the percentage threshold (50% in this case) continues for 2 mm or more in the longitudinal direction. Examination operators and other medical personnel who visually view graph 404 with graphic 405 superimposed thereon can understand that in the blood vessels in the range in which graphic 405 is displayed, the plaque burden is equal to or greater than the threshold over a length of 2 mm or more.
  • Graph 404 displays, on its long axis, a graphic 406 indicating the presence of lipid plaque, and a graphic 407 indicating the presence of fibrous plaque or calcified plaque.
  • Graph 404 also displays a bar 408 indicating the position of the reference area relative to the lesion.
  • the examination operator or medical provider visually viewing the screen 400 shown in FIG. 7 can recognize the range where the plaque burden is less than the threshold and the average lumen diameter is large, and the range where the plaque burden is equal to or greater than the threshold and the average lumen diameter is small, by juxtaposing the graphs 403 and 404.
  • the examination operator or medical provider can regard the range where the plaque burden is equal to or greater than the threshold and the average lumen diameter is small as the lesion, and use this as information for deciding what kind of balloon or stent should be used to expand the lesion from the inside. After expanding the lesion with the balloon, the examination operator or medical provider checks the screen 400 again based on the signal obtained by scanning with the catheter 1.
  • the examination operator or medical provider can determine where the stent should be placed by referring to the graphic 405 indicating the range of the lesion, the graphic 406 indicating the range of the lipid plaque, the graphic 407 indicating the range of the fibrous plaque, and the bar 408 indicating the reference area.
  • FIG. 8 is a diagram showing the process of changing the position of the reference area.
  • graphs 404 indicating the distribution of plaque burden with respect to the position on the long axis are shown above and below.
  • the upper graph 404 shows the state before the change, and the lower graph 404 shows the state after the change.
  • a reference portion is determined as the portion with the largest lumen diameter among positions on the long axis within 10 mm proximal and within 10 mm distal of the lesion range indicated by the hatched graphic 405, searching up to the location where a large side branch exists.
  • image processing of the tomographic images I11 and I12 by the processing unit 30 may identify soft lipid plaque at the position once determined as the reference portion. If lipid plaque is present at that position, the processing unit 30 re-determines the reference portion as the position with the largest lumen diameter, within the same 10 mm range, among locations where lipid plaque is absent.
  • if calcified plaque, rather than lipid plaque, is present at the position once determined, the processing unit 30 need not avoid the position and re-determine the reference portion. This is because a location where calcified plaque exists may be more suitable for placing a stent than a location where lipid plaque exists.
  • in FIG. 7, a bar 408 indicating the reference portion is displayed.
  • the screen 400 may display a graphic indicating the landing zone of the stent as information for placing the stent in the tubular organ.
  • FIG. 9 shows another example of the screen 400 displayed on the display device 4.
  • in FIG. 9, not only are bars 408 displayed indicating the reference areas proximal and distal to the lesion, but a graphic 409 is also displayed indicating the landing zone, the contact area on the long axis of the blood vessel used to secure the stent. The screen 400 in FIG. 9 additionally displays the dimension from the proximal end to the distal end of the landing zone. This allows the examination operator or medical provider to visually check the graphic 409 and its dimension to determine what size stent should be placed and how.
  • the processing unit 30 of the image processing device 3 may output stent recommendation information on the screen 400 by referring to the pre-stored sizes for each stent part number based on the dimension from the proximal end to the distal end of the landing zone shown on the screen 400 of FIG. 9.
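One way to realize the part-number lookup described in this paragraph is a simple filter over pre-stored sizes. The catalogue entries, field names, and diameter tolerance below are invented placeholders for illustration; they are not real product data.

```python
# Hypothetical pre-stored catalogue keyed by part number; real systems would
# load this from stored configuration, not hard-code it.
STENT_CATALOGUE = [
    {"part_no": "ST-2320", "diameter_mm": 2.5, "length_mm": 20},
    {"part_no": "ST-2328", "diameter_mm": 2.5, "length_mm": 28},
    {"part_no": "ST-3020", "diameter_mm": 3.0, "length_mm": 20},
    {"part_no": "ST-3028", "diameter_mm": 3.0, "length_mm": 28},
]

def recommend_stent(landing_zone_mm, reference_diameter_mm, tolerance_mm=0.25):
    """Return catalogue entries whose diameter matches the reference lumen
    (within tolerance) and whose length covers the landing zone,
    shortest first."""
    fits = [s for s in STENT_CATALOGUE
            if abs(s["diameter_mm"] - reference_diameter_mm) <= tolerance_mm
            and s["length_mm"] >= landing_zone_mm]
    return sorted(fits, key=lambda s: s["length_mm"])
```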
  • in the second embodiment, a plaque detection model 32M is used that is trained to output, when an IVUS tomographic image I11 is input, whether or not lipid plaque is present and, if present, its location.
  • the configuration of the imaging diagnostic device 100 of the second embodiment is similar to that of the imaging diagnostic device 100 of the first embodiment, except for the plaque detection model 32M stored in the image processing device 3 and the details of the processing by the processing unit 30, which are described below. Therefore, among the configurations of the imaging diagnostic device 100 of the second embodiment, the configurations common to the imaging diagnostic device 100 of the first embodiment are given the same reference numerals and detailed descriptions are omitted.
  • FIG. 10 is a block diagram showing the configuration of an image processing device 3 of the second embodiment.
  • a plaque detection model 32M is stored in the storage unit 31 of the image processing device 3.
  • the plaque detection model 32M may be a copy of the plaque detection model 92M stored in a non-transitory storage medium 9 outside the device, read out via the input/output I/F 32.
  • the plaque detection model 32M may be a model distributed by a remote server device, acquired by the image processing device 3 via a communication unit (not shown), and stored in the storage unit 31.
  • the plaque detection model 32M is a model that outputs the probability that lipid plaque is present in the input image when an IVUS tomographic image I11 or a rectangular image I01 is input.
  • the probability may be output as a single value for the entire input image (a value close to "1" if even one lipid-plaque area is present anywhere in the image), or may be output for each angle from a reference in the radial direction, dividing the probability into the radial directions corresponding to the scanning signal in the input image.
  • in the latter case, a probability is output for each angle in the tomographic image I11, such as the 12 o'clock direction (zero degrees from a reference line extending upward from the image center or blood vessel center) and the 2 o'clock direction (measured from the same reference line).
  • the plaque detection model 32M may output the result of identifying the area in the input image in which lipid plaque is present, similar to the segmentation model 31M.
  • the plaque detection model 32M is a model using a neural network including an input layer 321, an intermediate layer 322, and an output layer 323.
  • the input layer 321 inputs a two-dimensional signal distribution, i.e., image data.
  • the output layer 323 outputs the probability that lipid plaque is present.
  • the output layer 323 may output the probability that lipid plaque is present in the form of an array of 180 values, for example, 0°, 2°, 4°, ..., 356°, 358°, in increments of 2°.
  • the processing unit 30 can input the IVUS tomographic image I11, in which lipid plaque is easy to detect, or the rectangular image I01 to the input layer 321, and obtain the output probability.
  • the processing unit 30 can obtain an array of the probability that lipid plaque exists at each angle output from the plaque detection model 32M, and obtain the continuous portion where the probability is equal to or greater than a predetermined value as the angle range of lipid plaque.
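The conversion from the model's per-angle probabilities to contiguous angle ranges, as described above, can be sketched as follows. The 0.5 probability cut-off is an assumed value, and the wrap-around handling across 358°→0° is an illustrative choice.

```python
# Sketch: turn a 180-element probability array (one value per 2 degrees)
# into contiguous lipid-plaque angle ranges.

def angle_ranges(probs, step_deg=2, threshold=0.5):
    """probs: probabilities for angles 0°, 2°, ..., 358°.
    Returns (start_deg, end_deg) tuples, end exclusive; a run crossing
    358°→0° is merged into one range whose end exceeds 360."""
    ranges = []
    start = None
    for i, p in enumerate(probs):
        if p >= threshold:
            if start is None:
                start = i
        elif start is not None:
            ranges.append((start * step_deg, i * step_deg))
            start = None
    if start is not None:  # run reaches the last angle bin
        ranges.append((start * step_deg, len(probs) * step_deg))
    # merge a range ending at 360° with one starting at 0° (wrap-around)
    if (len(ranges) >= 2 and ranges[0][0] == 0
            and ranges[-1][1] == len(probs) * step_deg):
        first = ranges.pop(0)
        last = ranges.pop()
        ranges.append((last[0], first[1] + 360))
    return ranges
```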
  • the plaque detection model 32M is created and trained in advance by the image processing device 3 or another processing device.
  • the training data is an IVUS tomographic image I11 or a rectangular image I01 with annotations.
  • the annotation is data indicating the presence or absence of lipid plaque (for example, "1" if present and "0" if absent).
  • the plaque detection model 32M is trained using, as training data, IVUS tomographic images I11 or rectangular images I01 for which the presence or absence of lipid plaque has been determined. If the plaque detection model 32M is of the type that outputs the probability that lipid plaque exists for each angle, the annotation is an array of data indicating the presence or absence of lipid plaque at each angle.
  • the annotation indicating the presence or absence of lipid plaque at each angle is created from an IVUS tomographic image I11 or rectangular image I01 in which the location of the lipid plaque is known. The created per-angle presence/absence data, ordered by angle, is attached to the IVUS tomographic image I11 or rectangular image I01 as an annotation, thereby creating the training data.
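The per-angle annotation described above can be sketched as building a 180-element presence/absence array from known lipid-plaque angle ranges; the function name and the range format are illustrative assumptions.

```python
# Sketch: build a per-angle 0/1 label array for one frame from known
# lipid-plaque angle ranges; pairing it with the image yields one
# training sample.

def make_angle_annotation(lipid_ranges_deg, step_deg=2):
    """lipid_ranges_deg: (start_deg, end_deg) tuples, end exclusive;
    ranges may exceed 360 to express wrap-around.
    Returns a list of 0/1 labels for angles 0°, 2°, ..., 358°."""
    labels = [0] * (360 // step_deg)
    for start, end in lipid_ranges_deg:
        for deg in range(start, end, step_deg):
            labels[(deg % 360) // step_deg] = 1
    return labels
```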
  • FIGS. 12 and 13 are flowcharts showing an example of an information processing procedure by the image processing device 3 of the second embodiment. Among the processing procedures shown in Figs. 12 and 13, the same step numbers are used for the steps common to the processing procedures shown in the flowcharts of Figs. 5 and 6 of the first embodiment, and detailed descriptions thereof will be omitted.
  • the processing unit 30 stores data indicating anatomical features calculated based on the range identification performed on each of the IVUS tomographic image I11 and the OCT tomographic image I21 (S111), and then performs the following processing.
  • the processing unit 30 inputs the IVUS tomographic image I11 or rectangular image I01 to the plaque detection model 32M (step S121).
  • the processing unit 30 determines whether or not lipid plaque is present based on the probability information output from the plaque detection model 32M (step S122).
  • the processing unit 30 may determine that lipid plaque is present only when it is detected continuously, over a length equal to or greater than a threshold in the longitudinal direction, across the preceding and following tomographic images I11 or rectangular images I01.
  • the processing unit 30 may also determine the range of the lipid plaque on the tomographic image I11 or its angle range.
  • the processing unit 30 stores the presence or absence of lipid plaque (or the identified range) determined in step S122 in association with the position on the long axis of the blood vessel (step S123), and proceeds to step S112.
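The longitudinal continuity condition mentioned for step S122 can be sketched as suppressing detection runs shorter than a length threshold; the frame pitch and the 1 mm threshold are assumed values, not taken from this disclosure.

```python
# Sketch: per-frame lipid detections only count as "present" when they
# persist over at least min_length_mm along the long axis, suppressing
# single-frame false positives.

def confirm_lipid_frames(per_frame_detected, pitch_mm=0.2, min_length_mm=1.0):
    """per_frame_detected: booleans ordered along the long axis.
    Returns booleans in which only runs spanning >= min_length_mm survive."""
    min_frames = max(1, round(min_length_mm / pitch_mm))
    confirmed = [False] * len(per_frame_detected)
    i = 0
    while i < len(per_frame_detected):
        if per_frame_detected[i]:
            j = i
            while j < len(per_frame_detected) and per_frame_detected[j]:
                j += 1  # extend the run of consecutive detections
            if j - i >= min_frames:
                for k in range(i, j):
                    confirmed[k] = True
            i = j
        else:
            i += 1
    return confirmed
```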
  • the processing unit 30 identifies the location and range of the lesion on the long axis in step S114 (S114), and then identifies the location of lipid plaque on the long axis of the blood vessel based on the information stored in step S123 (step S124).
  • in step S117, the processing unit 30 displays a graphic showing the position and range of the lesion (lipid plaque, fibrous plaque, calcified plaque, etc.) identified in step S114 and the location of the lipid plaque identified in step S124 (S117), and ends the process.
  • the presence or absence of lipid plaque is easier to determine with IVUS than with OCT, because IVUS can observe the deeper part of the blood vessel wall (the adventitia side), but identifying its area is difficult because lipid plaque is soft. For this reason, in addition to identifying areas with both IVUS and OCT using the segmentation model 31M, the image processing device 3 focuses on lipid plaque and determines its presence or absence at each position using the plaque detection model 32M, which applies a neural network to the image. This improves the accuracy of recognizing the area of lipid plaque, which is recommended to be avoided as a stent placement location. The area of lipid plaque can thus be confirmed visually with high accuracy, and a reference area avoiding the lipid plaque can be determined.
  • the processing unit 30 of the image processing device 3 may use not only the plaque detection model 32M for detecting lipid plaque described above, but also a learning model that detects whether or not a side branch appears in an IVUS tomographic image I11 or an OCT tomographic image I21 when the image is input.
  • the processing unit 30 uses the learning model to determine whether or not a side branch is shown in the tomographic images I11 and I21 corresponding to the positions on the long axis of the blood vessel, and stores the location where the side branch is shown.
  • the processing unit 30 may also calculate the size of the side branch. The position on the long axis where the side branch is present and its size are displayed on the graph 404 on the screen 400 as shown in FIG. 9, or in the vicinity of the graph 404.
  • FIG. 14 shows an example of a screen 400 displayed on the display device 4.
  • components of the screen 400 shown in FIG. 14 that are common to the screens described above are given the same reference numerals, and detailed descriptions are omitted.
  • on the graph 404, a black diamond mark 410 indicating the location of the side branch is displayed. A numerical value indicating the size (diameter) of the side branch is also displayed near the mark 410. This allows the examination operator or medical provider viewing the screen 400 to determine the location for placing a stent while recognizing the presence or absence of a side branch and its size.
  • in the third embodiment, the image processing device 3 uses a learning model that outputs appropriate stent information when tomographic images I11, I12, or I3 obtained by a single scan with the catheter 1 are input.
  • the configuration of the imaging diagnostic device 100 of the third embodiment is the same as that of the imaging diagnostic device 100 of the first embodiment, except for the stent information model 33M stored in the image processing device 3 and the details of the processing by the processing unit 30, which are described below. Therefore, among the configurations of the imaging diagnostic device 100 of the third embodiment, the configurations common to the imaging diagnostic device 100 of the first embodiment are given the same reference numerals and detailed descriptions are omitted.
  • FIG. 15 is a block diagram showing the configuration of an image processing device 3 of the third embodiment.
  • a stent information model 33M is stored in the storage unit 31 of the image processing device 3.
  • the stent information model 33M may be a copy of a stent information model 93M stored in a non-transitory storage medium 9 outside the device, read out via the input/output I/F 32.
  • the stent information model 33M may be a model distributed by a remote server device, acquired by the image processing device 3 via a communication unit (not shown), and stored in the storage unit 31.
  • FIG. 16 is a schematic diagram of the stent information model 33M.
  • the stent information model 33M is a learning model using a neural network equipped with an input layer 331, an intermediate layer 332, and an output layer 333.
  • the input layer 331 inputs image data that visualizes a graph showing the distribution of plaque burden in the longitudinal direction, and image data that visualizes a graph showing the distribution of average lumen diameter in the longitudinal direction.
  • a graphic 406 indicating the presence of lipid plaque and a graphic 407 indicating the presence of fibrous plaque or calcified plaque may be superimposed on the graph.
  • the output layer 333 outputs an array of suitability values for multiple types of stents, each value corresponding to the identification data of one of the stent types.
  • the input layer 331 may input not only image data, but also a group of values indicating the distribution of plaque burden in the longitudinal direction (a group of values of plaque burden for positions on the longitudinal axis) and a group of values indicating the distribution of average lumen diameter in the longitudinal direction (a group of values of average lumen diameter for positions on the longitudinal axis).
  • the stent information model 33M is described below as being created in advance by the image processing device 3, but it may instead be created and trained in advance by another processing device.
  • FIG. 17 is a flowchart showing an example of the process for generating a stent information model 33M.
  • the processing unit 30 reads out the distribution of plaque burden in the longitudinal direction and the distribution of average lumen diameter in the longitudinal direction that were stored in a single diagnosis of a blood vessel using the catheter 1 in the past (step S301).
  • the processing unit 30 reads out the positions and ranges on the longitudinal axis of lipid plaque, fibrous plaque, or calcified plaque that were stored in the single diagnosis using the catheter 1 (step S302).
  • the processing unit 30 creates an image in which a graphic showing the location of the lipid plaque, fibrous plaque, or calcified plaque identified in step S302 is superimposed on the distribution obtained in step S301 (step S303).
  • the processing unit 30 identifies the identification data of the stent used based on the diagnosis using the catheter 1 from the procedure record (step S304).
  • the processing unit 30 stores, as training data, a set of an image of the graph showing the distribution of plaque burden in the longitudinal direction, an image of the graph showing the distribution of average lumen diameter in the longitudinal direction, and the identification data of the stent identified in step S304 (step S305).
  • if a group of values indicating the distribution of plaque burden in the longitudinal direction and a group of values indicating the distribution of average lumen diameter in the longitudinal direction are input to the stent information model 33M instead of image data, step S303 is omitted. In this case, in step S305, the processing unit 30 stores pairs of each group of values and the stent identification data as training data.
  • the processing unit 30 inputs an image of the graph of the stored training data to the input layer 331 of the stent information model 33M before learning is completed (step S306).
  • the processing unit 30 calculates a loss between the suitability values for each stent identification data output from the output layer 333 of the stent information model 33M and the suitability of the stent actually used for the input image, and thereby learns (updates) the parameters of the intermediate layer 332 (step S307).
  • the processing unit 30 determines whether the learning conditions are met (step S308), and if it is determined that the learning conditions are not met (S308: NO), the processing unit 30 returns to step S306 and continues learning.
  • the processing unit 30 stores definition data indicating the network configuration and conditions of the stent information model 33M and the parameters of the intermediate layer 332 in the storage unit 31 or another storage medium (step S309), and ends the model generation process. Note that the processing unit 30 may execute steps S301-S305 in advance, and then execute steps S306-S309 on the collected training data.
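The loss computation in step S307 might be sketched with a binary cross-entropy between the output suitability array and a one-hot target built from the stent actually used. The loss choice and the part-number list are assumptions for illustration; the disclosure does not fix either.

```python
import math

# Hypothetical catalogue order fixing the meaning of each array slot.
PART_NUMBERS = ["ST-A", "ST-B", "ST-C", "ST-D"]

def one_hot_target(used_part_no):
    """Target suitability array: 1.0 for the stent actually used, else 0.0."""
    return [1.0 if p == used_part_no else 0.0 for p in PART_NUMBERS]

def bce_loss(predicted, target, eps=1e-7):
    """Mean binary cross-entropy between predicted suitability values in
    (0, 1) and the one-hot target; this is the quantity that a parameter
    update in step S307 would minimise."""
    total = 0.0
    for p, t in zip(predicted, target):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(predicted)
```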
  • the stent information model 33M is generated to output appropriate stent information when a data group showing anatomical features obtained by scanning with the diagnostic catheter 1 (or image data that visualizes the data group) is input.
  • the processing unit 30 can input image data of a graph of the average lumen diameter and image data of a graph of the plaque burden into the stent information model 33M, and obtain the output array of suitability values.
  • the processing unit 30 can create suggestion information for a stent to be placed based on the identification data of stents whose suitability is equal to or greater than a predetermined value.
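The extraction of stents whose suitability clears a threshold can be sketched as follows; the cut-off value and the output string format are illustrative assumptions.

```python
# Sketch: threshold the model's suitability array and format a suggestion
# list for display, highest suitability first.

def suggest_stents(part_numbers, suitability, cutoff=0.7):
    """part_numbers and suitability are parallel lists; returns display
    strings for stents at or above the cut-off."""
    picks = [(p, s) for p, s in zip(part_numbers, suitability) if s >= cutoff]
    picks.sort(key=lambda ps: ps[1], reverse=True)
    return ["{} (suitability {:.2f})".format(p, s) for p, s in picks]
```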
  • FIGS. 18 and 19 are flowcharts showing an example of an information processing procedure by the image processing device 3 of the third embodiment. Among the processing procedures shown in Figs. 18 and 19, the same step numbers are used for the steps common to the processing procedures shown in the flowcharts of Figs. 5 and 6 of the first embodiment, and detailed descriptions thereof will be omitted.
  • when the processing unit 30 determines that scanning of the blood vessel is complete (S112: YES) and outputs a graph showing anatomical features and a graphic showing the position and range of the lesion and the reference area (S117), it executes the following process.
  • the processing unit 30 creates an image in which a graphic showing the location of lipid plaque, fibrous plaque, or calcified plaque output in step S117 is superimposed on a graph showing anatomical features (distribution of plaque burden on the long axis and distribution of average lumen diameter on the long axis) (step S131).
  • the processing unit 30 inputs the created image into the stent information model 33M (step S132).
  • the processing unit 30 extracts the identification data of stents whose suitability is equal to or greater than a predetermined value, based on the array of suitability values output from the stent information model 33M (step S133).
  • the processing unit 30 outputs information such as the product number and size of the stent identified by the identification data extracted in step S133 to the screen displayed on the display device 4 (step S134), and ends the process.
  • FIG. 20 shows an example of a screen 400 displayed on the display device 4.
  • the components common to the screen 400 shown in FIG. 7 of the first embodiment are given the same reference numerals and detailed description is omitted.
  • the screen 400 in FIG. 20 displays a text box 411 that contains information about the recommended stent. This allows the examination operator or medical provider viewing the screen 400 to refer to it when selecting a stent to place.
  • the image processing device 3 has been described as outputting information suggesting a stent, but the present disclosure is not limited to this.
  • the image processing device 3 may also learn a model using a neural network to output information suggesting an appropriate balloon, and output new suggestions using the learned model.
  • 100 Imaging diagnostic device  3 Image processing device  30 Processing unit  31 Storage unit  31M Segmentation model  33M Stent information model  4 Display device  5 Input device


Abstract

Provided are a computer program, an information processing method, an information processing device, and a learning model that make it possible to display appropriate information required for judgment in relation to a medical image. The computer program causes a computer to execute processes of: calculating data indicating the anatomical features of a luminal organ on the basis of a signal output from an imaging device provided in a catheter inserted into the luminal organ; identifying the range of a lesion in the luminal organ on the basis of the calculated data indicating the anatomical features; and outputting information for placing a stent in the luminal organ on the basis of the identified range of the lesion.

Description

コンピュータプログラム、情報処理方法、情報処理装置、及び学習モデルCOMPUTER PROGRAM, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS, AND LEARNING MODEL
 本発明は、医用画像に関する処理に係るコンピュータプログラム、情報処理方法、情報処理装置、及び学習モデルに関する。 The present invention relates to a computer program, an information processing method, an information processing device, and a learning model for processing medical images.
 血管及び脈管等の管腔器官に存在する病変部の診断用又は治療用に医療用カテーテルが用いられている。診断用の医療用カテーテルに、超音波センサ又は受光センサを設けて器官内に移動させ、センサから得られる信号に基づく画像が診断に用いられている。 Medical catheters are used for diagnosing or treating lesions in hollow organs such as blood vessels and vasculature. Diagnostic medical catheters are equipped with ultrasonic sensors or light-receiving sensors and moved into the organ, and images based on signals obtained from the sensors are used for diagnosis.
 管腔器官の中でも特に血管の画像診断は、冠動脈インターベンション(PCI:Percutaneous Coronary Intervention)等の施術を安全確実に行なうために必要不可欠である。このため、造影剤を用いて体外から撮影する血管造影技術(angiography )と併せて、医療用カテーテルを用いたIVUS(Intravascular Ultrasound)、OCT(Optical Coherence Tomography)等の血管内イメージング技術が普及している。 Diagnostic imaging of blood vessels, particularly of hollow organs, is essential for the safe and reliable performance of procedures such as percutaneous coronary intervention (PCI). For this reason, in addition to angiography, which uses contrast media to take images from outside the body, intravascular imaging techniques such as IVUS (Intravascular Ultrasound) and OCT (Optical Coherence Tomography) using medical catheters are becoming widespread.
 医師等の医療従事者は、これらのイメージング技術に基づく医用画像を参照して管腔器官の状態を把握して診断し、治療を行なう。医用画像そのものをディスプレイに表示するのみならず、医用画像の読影を補助する情報を画像処理又は演算により生成し、表示するための技術が種々提案されている(特許文献1等)。 Doctors and other medical professionals refer to medical images based on these imaging technologies to understand the condition of hollow organs, make diagnoses, and provide treatment. In addition to displaying the medical images themselves on a screen, various technologies have been proposed for generating and displaying information that assists in interpreting the medical images through image processing or calculation (Patent Document 1, etc.).
特開2022-055170号公報JP 2022-055170 A
 医療従事者は医用画像を読影し、患者の管腔器官の解剖学的特徴や、病変部の状態を把握した上で、治療用の医療用カテーテルの先端に設けたバルーンを用いて閉塞した管腔器官自体を押し広げたり、ステントを管腔器官内部に留置したりする治療を行なう。この際、医用画像に基づいて、病変部の範囲及び周辺の管腔器官の状態を正確に把握できるような情報の出力が望まれる。 Medical professionals interpret the medical images to understand the anatomical characteristics of the patient's hollow organ and the condition of the affected area, and then perform treatment by expanding the blocked hollow organ itself using a balloon attached to the tip of a medical catheter used for treatment, or by placing a stent inside the hollow organ. At this time, it is desirable to output information based on the medical images that allows the extent of the affected area and the condition of the surrounding hollow organs to be accurately understood.
 本開示の目的は、医用画像に関し、判断に必要な適切な情報を表示させることが可能なコンピュータプログラム、情報処理方法、情報処理装置及び学習モデルを提供することにある。 The purpose of this disclosure is to provide a computer program, information processing method, information processing device, and learning model that are capable of displaying appropriate information required for judgment regarding medical images.
 (1)本開示に係るコンピュータプログラムは、コンピュータに、管腔器官に挿入されるカテーテルに備えられたイメージングデバイスから出力される信号に基づき、前記管腔器官の解剖学的特徴を示すデータを算出し、算出された解剖学的特徴を示すデータに基づき、前記管腔器官内の病変部が存在する前記管腔器官の長軸上の範囲を特定し、特定した病変部の長軸上の範囲に基づき、前記管腔器官内にステントを留置するための情報を出力する処理を実行させる。 (1) The computer program of the present disclosure causes a computer to execute a process of calculating data indicating anatomical characteristics of a tubular organ based on a signal output from an imaging device provided on a catheter inserted into the tubular organ, identifying an area on the longitudinal axis of the tubular organ in which a lesion exists in the tubular organ based on the calculated data indicating anatomical characteristics, and outputting information for placing a stent in the tubular organ based on the identified area on the longitudinal axis of the lesion.
 (2)上記(1)のコンピュータプログラムにおいて、前記コンピュータに、前記管腔器官内にステントを留置するための情報として、前記ステントのランディングゾーンの情報を出力する処理を実行させる。 (2) In the computer program of (1) above, the computer is caused to execute a process of outputting information on the landing zone of the stent as information for placing the stent in the tubular organ.
 (3)上記(1)又は(2)のコンピュータプログラムにおいて、前記コンピュータに、前記管腔器官内にステントを留置するための情報として、前記病変部の長軸上の範囲の前後で内腔が大きい部分である参照部の位置の情報を出力する処理を実行させる。 (3) In the computer program of (1) or (2) above, the computer is caused to execute a process of outputting information on the position of a reference portion, which is a portion of the lumen that is larger before and after the range on the long axis of the lesion, as information for placing a stent in the tubular organ.
 (4)上記(3)のコンピュータプログラムにおいて、前記コンピュータに、前記病変部の長軸上の範囲の近傍における他の病変部の情報に基づいて、前記参照部の位置を変更し、変更後の参照部の位置の情報を出力する処理を実行させる。 (4) In the computer program of (3) above, the computer is caused to execute a process of changing the position of the reference area based on information on other lesion areas in the vicinity of the range on the long axis of the lesion area, and outputting information on the position of the reference area after the change.
 (5)上記(3)又は(4)のコンピュータプログラムにおいて、前記コンピュータに、前記解剖学的特徴を示すデータ及び前記参照部の長軸上の位置に基づき、留置されるステントのサイズの提案を出力する処理を実行させる。 (5) In the computer program of (3) or (4) above, the computer is caused to execute a process of outputting a proposal for the size of the stent to be placed based on the data indicating the anatomical features and the position of the reference portion on the long axis.
 (6)上記(1)から(5)のうちのいずれか1項のコンピュータプログラムにおいて、前記イメージングデバイスは、異なる波長の波の送信器及び受信器をそれぞれ含むカテーテルデバイスである。 (6) In the computer program of any one of (1) to (5) above, the imaging device is a catheter device that includes a transmitter and a receiver for waves of different wavelengths.
 (7)上記(1)から(5)のうちのいずれか1項のコンピュータプログラムにおいて、前記イメージングデバイスは、IVUS及びOCTそれぞれの送信器及び受信器を含むデュアルタイプのカテーテルデバイスである。 (7) In the computer program of any one of (1) to (5) above, the imaging device is a dual-type catheter device including a transmitter and a receiver for IVUS and OCT, respectively.
 (8)上記(6)又は(7)のコンピュータプログラムにおいて、前記病変部は、脂質性プラーク、繊維性プラーク、及び石灰プラークを含む異なる種類の病変部であり、前記コンピュータに、異なる種類の病変部に応じた前記カテーテルデバイスからの信号に基づき、異なる種類の病変部それぞれの前記管腔器官における長軸上の範囲を特定する処理を実行させる。 (8) In the computer program of (6) or (7) above, the lesions are different types of lesions including lipid plaque, fibrous plaque, and calcific plaque, and the computer is caused to execute a process of identifying the longitudinal extent of each of the different types of lesions in the tubular organ based on a signal from the catheter device corresponding to the different types of lesions.
 (9)上記(7)のコンピュータプログラムにおいて、前記コンピュータに、IVUSのセンサから得られる信号に基づく前記管腔器官の断層画像からプラークバーデンの長軸方向に対する分布を算出し、OCTのセンサから得られる信号に基づき、脂質性プラーク又は繊維性プラークの前記管腔器官の長軸方向における位置を特定し、前記プラークバーデンの分布と、前記脂質性プラーク又は繊維性プラークの位置とに基づき、ステントを留置するための情報を出力する処理を実行させる。 (9) In the computer program of (7) above, the computer is made to execute a process of calculating the distribution of plaque burden in the longitudinal direction from a tomographic image of the tubular organ based on a signal obtained from an IVUS sensor, identifying the position of lipid plaque or fibrous plaque in the longitudinal direction of the tubular organ based on a signal obtained from an OCT sensor, and outputting information for placing a stent based on the distribution of the plaque burden and the position of the lipid plaque or fibrous plaque.
 (10)上記(7)又は(8)のコンピュータプログラムにおいて、前記コンピュータに、IVUSのセンサから得られる信号に基づく前記管腔器官の断層画像からプラークバーデンの長軸方向に対する分布を算出し、前記IVUSのセンサから得られる信号に基づき、脂質性プラークの前記管腔器官の長さ方向における位置を特定し、OCTのセンサから得られる信号に基づき、脂質性プラーク又は繊維性プラークの前記管腔器官の長軸方向における位置を特定し、前記プラークバーデンの分布と、前記脂質性プラーク又は繊維性プラークの位置とに基づき、ステントを留置するための情報を出力する処理を実行させる。 (10) In the computer program of (7) or (8) above, the computer is caused to execute a process of calculating the distribution of plaque burden in the longitudinal direction from a tomographic image of the tubular organ based on a signal obtained from an IVUS sensor, identifying the position of lipid plaque in the longitudinal direction of the tubular organ based on a signal obtained from the IVUS sensor, identifying the position of lipid plaque or fibrous plaque in the longitudinal direction of the tubular organ based on a signal obtained from an OCT sensor, and outputting information for placing a stent based on the distribution of plaque burden and the position of the lipid plaque or fibrous plaque.
 (11)本開示に係る情報処理方法は、管腔器官に挿入されるカテーテルに備えられたイメージングデバイスから出力される信号を取得するコンピュータが、管腔器官に挿入されるカテーテルに備えられたイメージングデバイスから出力される信号に基づき、前記管腔器官の解剖学的特徴を示すデータを算出し、算出された解剖学的特徴を示すデータに基づき、前記管腔器官内の病変部が存在する前記管腔器官の長軸上の範囲を特定し、特定した病変部の長軸上の範囲に基づき、前記管腔器官内にステントを留置するための情報を出力する。 (11) In the information processing method disclosed herein, a computer acquires a signal output from an imaging device provided in a catheter inserted into a tubular organ, calculates data indicative of anatomical characteristics of the tubular organ based on the signal output from the imaging device provided in the catheter inserted into the tubular organ, identifies an area on the longitudinal axis of the tubular organ in which a lesion exists in the tubular organ based on the calculated data indicative of anatomical characteristics, and outputs information for placing a stent in the tubular organ based on the identified area on the longitudinal axis of the lesion.
 (12)本開示に係る情報処理装置は、管腔器官に挿入されるカテーテルに備えられたイメージングデバイスから出力される信号を取得する情報処理装置において、前記信号に基づく前記管腔器官の断層画像が入力された場合に、前記断層画像に写っている組織又は病変部の範囲を分別するデータを出力する学習済みのモデルを記憶する記憶部と、前記イメージングデバイスからの信号に基づく画像処理を実行する処理部とを備え、前記処理部は、管腔器官に挿入されるカテーテルに備えられたイメージングデバイスから出力される信号に基づく画像を前記モデルへ入力し、前記モデルから出力されたデータに基づいて前記管腔器官の解剖学的特徴を示すデータを算出し、算出された解剖学的特徴を示すデータに基づき、前記管腔器官内の病変部が存在する前記管腔器官の長軸上の範囲を特定し、特定した病変部の長軸上の範囲に基づき、前記管腔器官内にステントを留置するための情報を出力する。 (12) An information processing device according to the present disclosure is an information processing device that acquires a signal output from an imaging device provided in a catheter inserted into a tubular organ, and includes a memory unit that stores a trained model that outputs data for distinguishing the range of tissue or lesions shown in a tomographic image of the tubular organ based on the signal when the tomographic image of the tubular organ based on the signal is input, and a processing unit that executes image processing based on the signal from the imaging device, and the processing unit inputs an image based on the signal output from the imaging device provided in the catheter inserted into the tubular organ to the model, calculates data indicating anatomical characteristics of the tubular organ based on the data output from the model, identifies the range on the long axis of the tubular organ where the lesion in the tubular organ exists based on the calculated data indicating the anatomical characteristics, and outputs information for placing a stent in the tubular organ based on the identified range on the long axis of the lesion.
 (13)本開示に係る学習モデルは、管腔器官の解剖学的特徴を示すデータの、前記管腔器官の長軸方向に対する分布に係るデータが入力される入力層と、前記管腔器官の病変部に留置するステントの適格度を出力する出力層と、前記分布と、該分布に基づき前記病変部に対して使用されたステントの実績とを含む教師データに基づいて学習された中間層と、を備え、管腔器官の解剖学的特徴を示すデータの、前記管腔器官の長軸方向に対する分布を前記入力層へ与え、前記中間層に基づいて演算し、前記分布に対応するステントの適格度を前記出力層から出力するようにコンピュータを機能させる。 (13) The learning model according to the present disclosure includes: an input layer to which data on the distribution, along the longitudinal direction of a tubular organ, of data indicating the anatomical characteristics of the tubular organ is input; an output layer that outputs the suitability of a stent to be placed in a lesion of the tubular organ; and an intermediate layer trained on teacher data including the distribution and the track record of stents used for the lesion based on the distribution. The model causes a computer to function so that the distribution, along the longitudinal direction of the tubular organ, of data indicating its anatomical characteristics is given to the input layer, computation is performed based on the intermediate layer, and the suitability of a stent corresponding to the distribution is output from the output layer.
 本開示によれば、管腔器官の解剖学的特徴を示すデータに基づき、管腔器官にてステントを置く位置として適切な位置に関する情報を出力できる。これにより、ステントの大きさを適切に選び、診断及び治療の正確性を向上させることが期待できる。 According to the present disclosure, it is possible to output information regarding an appropriate position for placing a stent in a luminal organ based on data showing the anatomical characteristics of the luminal organ. This is expected to enable the appropriate selection of the size of the stent and improve the accuracy of diagnosis and treatment.
画像診断装置の概要図である。 FIG. 1 is a schematic diagram of the image diagnostic apparatus.
カテーテルの動作を示す説明図である。 FIG. 2 is an explanatory diagram showing the operation of the catheter.
画像処理装置の構成を示すブロック図である。 FIG. 3 is a block diagram showing the configuration of the image processing device.
セグメンテーションモデルの概要図である。 FIG. 4 is a schematic diagram of a segmentation model.
画像処理装置による情報処理手順の一例を示すフローチャートである。 FIG. 5 is a flowchart showing an example of an information processing procedure performed by the image processing device.
画像処理装置による情報処理手順の一例を示すフローチャートである。 FIG. 6 is a flowchart showing an example of an information processing procedure performed by the image processing device.
表示装置に表示される画面の例を示す。 FIG. 7 shows an example of a screen displayed on the display device.
参照部の位置の変更の過程を示す図である。 FIG. 8 is a diagram showing the process of changing the position of a reference portion.
表示装置に表示される画面の他の例を示す。 FIG. 9 shows another example of a screen displayed on the display device.
第2実施形態の画像処理装置の構成を示すブロック図である。 FIG. 10 is a block diagram showing the configuration of an image processing device according to a second embodiment.
プラーク検出モデルの概要図である。 FIG. 11 is a schematic diagram of a plaque detection model.
第2実施形態の画像処理装置による情報処理手順の一例を示すフローチャートである。 FIG. 12 is a flowchart showing an example of an information processing procedure by the image processing device of the second embodiment.
第2実施形態の画像処理装置による情報処理手順の一例を示すフローチャートである。 FIG. 13 is a flowchart showing an example of an information processing procedure by the image processing device of the second embodiment.
表示装置に表示される画面の例を示す。 FIG. 14 shows an example of a screen displayed on the display device.
第3実施形態の画像処理装置の構成を示すブロック図である。 FIG. 15 is a block diagram showing the configuration of an image processing device according to a third embodiment.
ステント情報モデルの概要図である。 FIG. 16 is a schematic diagram of a stent information model.
ステント情報モデルを生成する過程の一例を示すフローチャートである。 FIG. 17 is a flowchart showing an example of a process for generating a stent information model.
第3実施形態の画像処理装置による情報処理手順の一例を示すフローチャートである。 FIG. 18 is a flowchart showing an example of an information processing procedure by the image processing device of the third embodiment.
第3実施形態の画像処理装置による情報処理手順の一例を示すフローチャートである。 FIG. 19 is a flowchart showing an example of an information processing procedure by the image processing device of the third embodiment.
表示装置に表示される画面の例を示す。 FIG. 20 shows an example of a screen displayed on the display device.
 本発明の実施形態に係るコンピュータプログラム、情報処理方法、情報処理装置及び学習モデルの具体例を、図面を参照しつつ以下に説明する。下記の実施形態では、管腔器官の例として血管を対象とした情報処理について説明するが、管腔器官は血管に限られないことは勿論である。 Specific examples of a computer program, an information processing method, an information processing device, and a learning model according to embodiments of the present invention will be described below with reference to the drawings. In the following embodiments, information processing will be described for blood vessels as an example of a hollow organ, but hollow organs are of course not limited to blood vessels.
 (第1実施形態) (First Embodiment)
 図1は、画像診断装置100の概要図である。画像診断装置100は、カテーテル1、MDU(Motor Drive Unit)2、画像処理装置(情報処理装置)3、表示装置4及び入力装置5を備える。 FIG. 1 is a schematic diagram of the image diagnostic apparatus 100. The image diagnostic apparatus 100 includes a catheter 1, an MDU (Motor Drive Unit) 2, an image processing device (information processing device) 3, a display device 4, and an input device 5.
 カテーテル1は、医療用の柔軟性のある管である。カテーテル1は、先端部にイメージングデバイス11を設け、基端からの駆動によって周方向に回転するイメージングカテーテルと呼ばれるものである。カテーテル1のイメージングデバイス11は、異なる波長の波(超音波、光)の送信器及び受信器をそれぞれ含むデュアルタイプのカテーテルデバイスである。イメージングデバイス11は、IVUS法の超音波振動子及び超音波センサを含む超音波プローブと、近赤外線レーザ及び近赤外線センサ等を含むOCTデバイスとを含む。OCTデバイスは、レンズ機能及び反射機能を備えた光学素子を先端に含むデバイスであって、光ファイバを介して接続される近赤外線レーザ及び近赤外線センサへ導光させる構造を有したものであってもよい。デュアルタイプの対象は、IVUS及びOCTの組み合わせに限らず、近赤外線スペクトロスコピー等であってもよい。 The catheter 1 is a flexible tube for medical use. The catheter 1 is known as an imaging catheter, which has an imaging device 11 at its tip and rotates in a circumferential direction by being driven from its base end. The imaging device 11 of the catheter 1 is a dual-type catheter device that includes a transmitter and a receiver of waves of different wavelengths (ultrasound, light). The imaging device 11 includes an ultrasound probe including an ultrasound transducer and an ultrasound sensor for the IVUS method, and an OCT device including a near-infrared laser and a near-infrared sensor. The OCT device is a device that includes an optical element with a lens function and a reflecting function at its tip, and may have a structure that guides light to a near-infrared laser and a near-infrared sensor connected via an optical fiber. The target of the dual type is not limited to a combination of IVUS and OCT, but may also be near-infrared spectroscopy, etc.
 MDU2は、カテーテル1の基端に取り付けられる駆動装置であり、検査オペレータの操作に応じて内部モータを駆動することによって、カテーテル1の動作を制御する。 The MDU 2 is a drive unit attached to the base end of the catheter 1, and controls the operation of the catheter 1 by driving the internal motor in response to the operation of the examination operator.
 画像処理装置3は、カテーテル1のイメージングデバイス11から出力された信号に基づいて、血管の断層像等、複数の医用画像を生成する。画像処理装置3の構成の詳細については後述する。 The image processing device 3 generates multiple medical images, such as cross-sectional images of blood vessels, based on the signal output from the imaging device 11 of the catheter 1. The configuration of the image processing device 3 will be described in detail later.
 表示装置4は、液晶表示パネル、有機EL(Electro Luminescence)表示パネル等を用いる。表示装置4は、画像処理装置3によって生成される医用画像と、医用画像に関する情報とを表示する。 The display device 4 uses a liquid crystal display panel, an organic EL (Electro Luminescence) display panel, or the like. The display device 4 displays the medical images generated by the image processing device 3 and information related to the medical images.
 入力装置5は、画像処理装置3に対する操作を受け付ける入力インタフェースである。入力装置5は、キーボード、マウス等であってもよいし、表示装置4に内蔵されるタッチパネル、ソフトキー、ハードキー等であってもよい。入力装置5は、音声入力に基づく操作を受け付けてもよい。この場合、入力装置5は、マイクロフォン及び音声認識エンジンを用いる。 The input device 5 is an input interface that accepts operations for the image processing device 3. The input device 5 may be a keyboard, a mouse, etc., or may be a touch panel, soft keys, hard keys, etc. built into the display device 4. The input device 5 may also accept operations based on voice input. In this case, the input device 5 uses a microphone and a voice recognition engine.
 図2は、カテーテル1の動作を示す説明図である。図2においてカテーテル1は、血管内に検査オペレータによって、図中に示す冠動脈に挿入されたガイドワイヤWに沿って、管状の血管L内に挿入されている。図2中の血管Lの拡大図において右部は、カテーテル1及びガイドワイヤWの挿入箇所から遠位、左部は近位に対応する。 FIG. 2 is an explanatory diagram showing the operation of the catheter 1. In FIG. 2, the catheter 1 is inserted into a tubular blood vessel L by an examination operator along a guide wire W inserted into the coronary artery shown in the figure. In the enlarged view of blood vessel L in FIG. 2, the right side corresponds to the distal side from the insertion point of the catheter 1 and guide wire W, and the left side corresponds to the proximal side.
 カテーテル1は、MDU2の駆動により、図中の矢符で示すように、血管L内の遠位から近位へ向けて移動し、且つ、周方向に回転しながら、イメージングデバイス11によって螺旋状に血管内を走査する。 The catheter 1 is driven by the MDU 2 to move from the distal end to the proximal end within the blood vessel L as shown by the arrow in the figure, and while rotating in the circumferential direction, the imaging device 11 scans the blood vessel in a spiral manner.
 本実施形態の画像診断装置100では、画像処理装置3が、カテーテル1のイメージングデバイス11から出力される1回の走査毎の信号を、IVUS及びOCTの両方についてそれぞれ取得する。1回の走査は、イメージングデバイス11から検出波を径方向に発し、反射光を検出することであり、螺旋状に走査される。画像処理装置3は、IVUS及びOCTそれぞれについて、1回の走査毎の信号を360度分毎に径方向で揃えて矩形状に並べた矩形画像(図2中、I0)を、360度分毎に極座標変換(逆変換)することで得られる断層画像(横断面画像)を生成する(図2中、I1)。IVUSからの信号に基づいて生成される矩形画像を矩形画像I01とし、断層画像を断層画像I11とし、OCTからの信号に基づいて生成される矩形画像I02、断層画像I12と区別して、以下に説明する。断層画像I11,I12は、フレーム画像ともいう。断層画像I11,I12の基準点(中心)は、カテーテル1の範囲(画像化されない)に対応する。 In the image diagnostic apparatus 100 of this embodiment, the image processing device 3 acquires the signal output from the imaging device 11 of the catheter 1 for each scan, for both IVUS and OCT. In one scan, a detection wave is emitted radially from the imaging device 11 and the reflection is detected; the scanning proceeds in a spiral. For each of IVUS and OCT, the image processing device 3 generates a tomographic image (cross-sectional image) (I1 in FIG. 2) by applying a polar coordinate transformation (inverse transformation), for each 360-degree set, to a rectangular image (I0 in FIG. 2) in which the per-scan signals for 360 degrees are aligned in the radial direction and arranged in a rectangle. In the following description, the rectangular image and tomographic image generated from the IVUS signal are referred to as rectangular image I01 and tomographic image I11, to distinguish them from the rectangular image I02 and tomographic image I12 generated from the OCT signal. The tomographic images I11 and I12 are also called frame images. The reference point (center) of the tomographic images I11 and I12 corresponds to the region occupied by the catheter 1 (which is not imaged).
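 The 360-degree polar-to-Cartesian conversion from the rectangular image I0 to the tomographic image I1 described above can be sketched as follows. This is a minimal nearest-neighbour illustration assuming a `rect` array of shape (angles, radial samples); the actual conversion performed by the image processing device 3 (interpolation, calibration, and so on) is not specified in this disclosure.

```python
import numpy as np

def polar_to_tomographic(rect, size=None):
    """Inverse polar mapping: rect[i, j] holds the signal intensity at
    scan angle i (of n_angles over 360 degrees) and radial sample j."""
    n_angles, n_samples = rect.shape
    size = size or 2 * n_samples
    c = (size - 1) / 2.0                        # image centre = catheter axis
    ys, xs = np.mgrid[0:size, 0:size]
    dx, dy = xs - c, ys - c
    r = np.sqrt(dx * dx + dy * dy) * (n_samples / c)  # pixel radius in samples
    theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)     # angle in [0, 2*pi)
    ai = np.clip((theta / (2 * np.pi) * n_angles).astype(int), 0, n_angles - 1)
    ri = np.clip(r.astype(int), 0, n_samples - 1)
    tomo = rect[ai, ri]
    tomo[r >= n_samples] = 0.0                  # outside the scanned radius
    return tomo

# demo: a constant-radius echo (one bright column) becomes a circular ring
rect = np.zeros((360, 100))
rect[:, 50] = 1.0
tomo = polar_to_tomographic(rect)
```

A fixed-radius feature in the rectangular image thus maps to a ring centred on the catheter axis in the cross-sectional image, as expected.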
 画像処理装置3は更に、断層画像I11,I12それぞれの基準点を通る直線上の画素値を、カテーテル1が血管の長さ方向(長軸方向)に沿って並べた長軸画像(縦断面画像)を生成してもよい。画像処理装置3は、得られた矩形画像I01,I02、断層画像I11,I12、又は長軸画像に基づいて血管の解剖学的特徴を示すデータを算出し、断層画像I1又は長軸画像と、算出されたデータとを、医師、検査オペレータあるいは他の医療従事者が視認可能に出力する。 The image processing device 3 may further generate a long axis image (longitudinal cross-sectional image) in which the pixel values on a line passing through the reference points of each of the cross-sectional images I11 and I12 are arranged along the length (longitudinal direction) of the blood vessel by the catheter 1. The image processing device 3 calculates data indicating the anatomical characteristics of the blood vessel based on the obtained rectangular images I01 and I02, cross-sectional images I11 and I12, or long axis image, and outputs the cross-sectional image I1 or the long axis image and the calculated data so that they can be viewed by a doctor, examination operator, or other medical personnel.
 本開示における画像診断装置100では、画像処理装置3が、矩形画像I01,I02、断層画像I11,I12、又は長軸画像に対する画像処理によって、血管における解剖学的特徴及び病変部の状態を把握しやすく出力する。具体的には、画像処理装置3は、病変部を含む範囲にステントを留置するための情報を出力する。以下、画像処理装置3による出力処理について詳細を説明する。 In the image diagnostic device 100 of the present disclosure, the image processing device 3 performs image processing on the rectangular images I01, I02, the tomographic images I11, I12, or the long axis images to output images that make it easier to grasp the anatomical features of the blood vessels and the state of the lesion. Specifically, the image processing device 3 outputs information for placing a stent in an area that includes the lesion. The output processing by the image processing device 3 will be described in detail below.
 図3は、画像処理装置3の構成を示すブロック図である。画像処理装置3は、コンピュータであり、処理部30、記憶部31、及び入出力I/F32を備える。 FIG. 3 is a block diagram showing the configuration of the image processing device 3. The image processing device 3 is a computer, and includes a processing unit 30, a storage unit 31, and an input/output I/F 32.
 処理部30は、一又は複数のCPU(Central Processing Unit)、MPU(Micro-Processing Unit)、GPU(Graphics Processing Unit)、GPGPU(General-purpose computing on graphics processing units)、TPU(Tensor Processing Unit)等を含む。処理部30は、RAM(Random Access Memory)等の非一時記憶媒体を内蔵し、処理中に生成したデータを非一時記憶媒体に記憶しつつ、記憶部31に記憶されているコンピュータプログラムP3に基づき演算を実行する。 The processing unit 30 includes one or more CPUs (Central Processing Units), MPUs (Micro-Processing Units), GPUs (Graphics Processing Units), GPGPUs (General-purpose computing on graphics processing units), TPUs (Tensor Processing Units), etc. The processing unit 30 incorporates a non-temporary storage medium such as a RAM (Random Access Memory), and performs calculations based on a computer program P3 stored in the storage unit 31 while storing data generated during processing in the non-temporary storage medium.
 記憶部31は、ハードディスク、フラッシュメモリ等の不揮発性記憶媒体である。記憶部31は、処理部30が読み出すコンピュータプログラムP3、設定データ等を記憶する。また記憶部31は、学習済みのセグメンテーションモデル31Mを記憶する。セグメンテーションモデル31Mは、IVUSの断層画像I11に対して学習された第1モデル311Mと、OCTの断層画像I12に対して学習された第2モデル312Mとを含む。 The storage unit 31 is a non-volatile storage medium such as a hard disk or flash memory. The storage unit 31 stores the computer program P3 read by the processing unit 30, setting data, etc. The storage unit 31 also stores a trained segmentation model 31M. The segmentation model 31M includes a first model 311M trained on the IVUS tomographic image I11 and a second model 312M trained on the OCT tomographic image I12.
 コンピュータプログラムP3、セグメンテーションモデル31Mは、装置外の非一時記憶媒体9に記憶されたコンピュータプログラムP9、セグメンテーションモデル91Mを入出力I/F32を介して読み出して複製したものであってもよい。コンピュータプログラムP3、セグメンテーションモデル31Mは、遠隔のサーバ装置が配信するものを画像処理装置3が図示しない通信部を介して取得し、記憶部31に記憶したものであってもよい。 The computer program P3 and the segmentation model 31M may be copies of the computer program P9 and the segmentation model 91M stored in a non-temporary storage medium 9 outside the device, read out via the input/output I/F 32. The computer program P3 and the segmentation model 31M may be distributed by a remote server device, acquired by the image processing device 3 via a communication unit (not shown), and stored in the storage unit 31.
 入出力I/F32は、カテーテル1、表示装置4及び入力装置5が接続されるインタフェースである。処理部30は、入出力I/F32を介し、イメージングデバイス11から出力される信号(デジタルデータ)を取得する。処理部30は、入出力I/F32を介し、生成した断層画像I1及び/又は長軸画像を含む画面の画面データを表示装置4へ出力する。処理部30は、入出力I/F32を介して、入力装置5に入力された操作情報を受け付ける。 The input/output I/F 32 is an interface to which the catheter 1, the display device 4, and the input device 5 are connected. The processing unit 30 acquires a signal (digital data) output from the imaging device 11 via the input/output I/F 32. The processing unit 30 outputs screen data of a screen including the generated tomographic image I1 and/or long axis image to the display device 4 via the input/output I/F 32. The processing unit 30 accepts operation information input to the input device 5 via the input/output I/F 32.
 図4は、セグメンテーションモデル31Mの概要図である。図4は、セグメンテーションモデル31Mを構成する第1モデル311M及び第2モデル312Mのうち、第1モデル311Mを示す。第2モデル312Mの構成は、学習対象の画像が異なる点以外は第1モデル311Mと同様であるから図示及び詳細な説明を省略する。 FIG. 4 is a schematic diagram of segmentation model 31M. Of first model 311M and second model 312M that constitute segmentation model 31M, FIG. 4 shows first model 311M. The configuration of second model 312M is the same as that of first model 311M except that the image to be learned is different, so illustration and detailed description are omitted.
 セグメンテーションモデル31Mを構成する第1モデル311M及び第2モデル312Mはそれぞれ、画像が入力された場合に、画像に写っている1又は複数の対象物の領域を示す画像を出力するように学習されたモデルである。第1モデル311Mは例えば、セマンティックセグメンテーション(Semantic Segmentation)を実施するモデルである。第1モデル311Mは、入力された画像中の各画素に対し、各画素がいずれの対象物が写っている範囲の画素であるかを示すデータをタグ付けした画像を出力するように設計されている。 The first model 311M and the second model 312M that make up the segmentation model 31M are each models that have been trained to output an image showing the area of one or more objects appearing in an image when the image is input. The first model 311M is, for example, a model that performs semantic segmentation. The first model 311M is designed to output an image in which each pixel in the input image is tagged with data indicating which object the pixel is in.
 第1モデル311Mは例えば、図4に示すように、畳み込み層、プーリング層、アップサンプリング層、及びソフトマックス層を対称的に配置した所謂U-netを用いる。第1モデル311Mは、カテーテル1からの信号によって作成した断層画像I11が入力された場合に、タグ画像IS1を出力する。タグ画像IS1は、血管の内腔範囲、血管の中膜を含む血管の内腔境界と血管境界との間に対応する膜範囲、ガイドワイヤW及びそれによる反響が写っている範囲、並びにカテーテル1に対応する範囲を、その位置の画素に各々異なる画素値(図4中では異なる種類のハッチング及び無地で示す)によってタグを付したものである。第1モデル311Mは更に、血管に形成されている脂質性プラークの範囲を識別する。IVUS用の第1モデル311Mは、繊維性プラークか、あるいは石灰化プラークが写っている範囲を識別する。IVUSでは、繊維性プラークと石灰化プラークとを識別できる。 The first model 311M uses, for example, a so-called U-net in which a convolution layer, a pooling layer, an upsampling layer, and a softmax layer are symmetrically arranged, as shown in FIG. 4. When a tomographic image I11 created by a signal from the catheter 1 is input, the first model 311M outputs a tag image IS1. The tag image IS1 is obtained by tagging the pixels at the positions of the lumen range of the blood vessel, the membrane range corresponding to the area between the lumen boundary of the blood vessel including the tunica media and the blood vessel boundary, the range in which the guide wire W and its reflection are captured, and the range corresponding to the catheter 1, with different pixel values (shown by different types of hatching and solid color in FIG. 4). The first model 311M further identifies the range of lipid plaque formed in the blood vessel. The first model 311M for IVUS identifies the area in which fibrous plaque or calcified plaque is captured. IVUS can distinguish between fibrous plaque and calcified plaque.
 第1モデル311Mは上述したようにセマンティックセグメンテーション、U-netを例示したがこれに限らないことは勿論である。その他、第1モデル311Mは、インスタンスセグメンテーション等による個別認識処理を実現するモデルであってもよい。第1モデル311Mは、U-netベースに限らず、SegNet、R-CNN、又は他のエッジ抽出処理との統合モデル等をベースにしたモデルを使用してもよい。 As described above, the first model 311M is exemplified by semantic segmentation and U-net, but it goes without saying that this is not limited to this. In addition, the first model 311M may be a model that realizes individual recognition processing using instance segmentation, etc. The first model 311M is not limited to being based on U-net, and may also use a model based on SegNet, R-CNN, or an integrated model with other edge extraction processing, etc.
 処理部30は、第1モデル311MにIVUSの断層画像I11を入力して得られるタグ画像IS1における画素値とその画像内の座標とによって、断層画像I11に写っている血管の血液(内腔範囲)、内膜範囲、中膜範囲、外膜範囲を識別する。処理部30は、血管の範囲を識別することによって、断層画像I11に写っている血管の内腔境界、及び血管境界を検出できる。血管境界は、厳密には血管の中膜と外膜との間の外弾性板(EEM: External Elastic Membrane)である。 The processing unit 30 identifies the blood (lumen area), intima area, media area, and adventitia area of the blood vessels shown in the tomographic image I11 based on pixel values in the tag image IS1 obtained by inputting the IVUS tomographic image I11 into the first model 311M and their coordinates within the image. By identifying the blood vessel area, the processing unit 30 can detect the lumen boundary and blood vessel boundary of the blood vessels shown in the tomographic image I11. Strictly speaking, the blood vessel boundary is the external elastic membrane (EEM) between the tunica media and adventitia of the blood vessel.
 処理部30は、IVUS用の第1モデル311Mに断層画像I11を入力して得られるタグ画像IS1における画素値とその画像内の座標とによって、脂質性プラークの範囲及び繊維性プラーク若しくは石灰化プラークそれぞれを識別する。 The processing unit 30 identifies the range of lipid plaque and each of fibrous plaque and calcified plaque based on the pixel values in the tag image IS1 obtained by inputting the tomographic image I11 into the first model 311M for IVUS and the coordinates within that image.
 処理部30は、同様にして第2モデル312MにOCTの断層画像I12を入力して得られるタグ画像IS2における画素値とその画像内の座標とによって、断層画像I12に写っている血管の血液(内腔範囲)、石灰化プラーク、繊維性プラーク及び脂質性プラークをそれぞれ識別する。 The processing unit 30 similarly inputs the OCT cross-sectional image I12 into the second model 312M, and based on the pixel values in the tag image IS2 obtained and the coordinates within that image, identifies the blood (lumen area), calcified plaque, fibrous plaque, and lipid plaque of the blood vessels shown in the cross-sectional image I12.
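 The per-pixel tags in the tag images IS1 and IS2 can be turned into region masks and physical areas along the following lines. This is an editor-supplied sketch: the tag codes `TAG_LUMEN` and `TAG_MEMBRANE` are hypothetical placeholders, since the actual pixel values depend on how the segmentation model 31M was trained.

```python
import numpy as np

# Hypothetical tag codes; the actual pixel values output by the trained
# segmentation model are not specified in this disclosure.
TAG_LUMEN, TAG_MEMBRANE = 1, 2

def region_mask(tag_image, tag):
    """Boolean mask of all pixels carrying the given tag value."""
    return tag_image == tag

def region_area_mm2(tag_image, tag, mm2_per_pixel):
    """Physical cross-sectional area of one tagged region."""
    return int(region_mask(tag_image, tag).sum()) * mm2_per_pixel

# toy 6x6 tag image: a 2x2 lumen core surrounded by membrane pixels
tag_img = np.full((6, 6), TAG_MEMBRANE)
tag_img[2:4, 2:4] = TAG_LUMEN
lumen_area = region_area_mm2(tag_img, TAG_LUMEN, mm2_per_pixel=0.01)
```

The boundary between two tagged regions (for example, the lumen boundary) can then be located wherever adjacent pixels carry different tags.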
 このように、IVUSの断層画像I11と、OCTの断層画像I12とで、識別される範囲の明瞭さに差が生じるため、デュアル式のイメージングデバイス11を用いて相互に補完し合うことによって識別の精度を高めることが期待できる。 As such, there is a difference in the clarity of the identified range between the IVUS tomographic image I11 and the OCT tomographic image I12, so it is expected that the accuracy of identification can be improved by using a dual imaging device 11 to complement each other.
 本開示の画像診断装置100は、管腔器官である血管内の病変部にステントを留置するための診断で用いられる。検査オペレータ又は医療事業者は、第1に、病変部の状態を評価・診断するためにイメージングデバイス11を走査させる。第2に、画像診断装置100は、評価・診断された結果に基づいて血管を押し広げる場合、利用するバルーンのタイプ及び大きさを決定するための情報を、イメージングデバイス11の走査結果と併せて表示装置4に出力する。第3に、検査オペレータ又は医療事業者は、バルーンによって押し広げられた血管の状態を評価し、留置されるステントの種類、ステントをいずれの位置に置くべきか、を判断するためにイメージングデバイス11を走査させる。第4に、検査オペレータ又は医療事業者は、決定された種類及び大きさのステントを、カテーテル1を用いて留置する。 The imaging diagnostic device 100 of the present disclosure is used in diagnosis for placing a stent in a lesion in a blood vessel, which is a hollow organ. First, the examination operator or medical provider scans the imaging device 11 to evaluate and diagnose the condition of the lesion. Second, when the blood vessel is to be opened based on the evaluation and diagnosis results, the imaging diagnostic device 100 outputs information for determining the type and size of the balloon to be used, together with the scanning results of the imaging device 11, to the display device 4. Third, the examination operator or medical provider scans the imaging device 11 to evaluate the condition of the blood vessel opened by the balloon and determine the type of stent to be placed and where the stent should be placed. Fourth, the examination operator or medical provider places a stent of the determined type and size using the catheter 1.
 このため、画像処理装置3は、IVUSの断層画像I11及びOCTの断層画像I12に対する画像処理によって得られる情報に基づき、ステントの種類や、ステントをいずれの位置に置くべきかを示す情報を生成して出力する。以下、ステントに関する情報の出力に係る処理について説明する。 For this reason, the image processing device 3 generates and outputs information indicating the type of stent and the position at which the stent should be placed, based on information obtained by image processing of the IVUS tomographic image I11 and the OCT tomographic image I12. The process of outputting information about the stent is described below.
 本開示において画像処理装置3の処理部30は、IVUSの断層画像I11及びOCTの断層画像I12それぞれに対して識別した各範囲から、血管の内腔範囲の内腔境界を特定し、内腔境界から内側における最大径、最小径、平均内径等の数値を算出する。更に処理部30は、IVUSの断層画像I11及びOCTの断層画像I12それぞれに対して識別した石灰化プラーク、繊維性プラーク及び脂質性プラークの範囲の識別結果から、その断面積の血管境界内側の面積に対する割合(以下、プラークバーデン:plaque burden という)を算出する。 In the present disclosure, the processing unit 30 of the image processing device 3 identifies the luminal boundary of the vascular lumen range from each range identified for each of the IVUS tomographic image I11 and the OCT tomographic image I12, and calculates numerical values such as the maximum diameter, minimum diameter, and average inner diameter inside the lumen boundary. Furthermore, the processing unit 30 calculates the ratio of the cross-sectional area to the area inside the vascular boundary (hereinafter referred to as plaque burden) from the results of identifying the ranges of calcified plaque, fibrous plaque, and lipid plaque identified for each of the IVUS tomographic image I11 and the OCT tomographic image I12.
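 Given boolean masks for the lumen, the area inside the vessel (EEM) boundary, and the plaque regions derived from the tag images, the lumen diameters and the plaque burden described above could be computed roughly as follows. This is an illustrative sketch: the diameter estimate (axis-aligned extents plus an area-equivalent circle) is a simplifying assumption, not the disclosed measurement method.

```python
import numpy as np

def plaque_burden(plaque_mask, vessel_mask):
    """Plaque burden (%): plaque cross-sectional area divided by the
    area enclosed by the vessel (EEM) boundary."""
    vessel_area = vessel_mask.sum()
    return 100.0 * plaque_mask.sum() / vessel_area if vessel_area else 0.0

def lumen_metrics(lumen_mask, mm_per_pixel):
    """Approximate lumen metrics from a boolean mask: max/min extent
    along the image axes plus the diameter of an equal-area circle."""
    ys, xs = np.nonzero(lumen_mask)
    h = (ys.max() - ys.min() + 1) * mm_per_pixel
    w = (xs.max() - xs.min() + 1) * mm_per_pixel
    area = lumen_mask.sum() * mm_per_pixel ** 2
    mean_d = 2.0 * np.sqrt(area / np.pi)        # area-equivalent diameter
    return max(h, w), min(h, w), mean_d

# toy cross-section: 10x10 vessel interior, 4x4 lumen, 20 plaque pixels
vessel = np.ones((10, 10), dtype=bool)
lumen = np.zeros((10, 10), dtype=bool); lumen[3:7, 3:7] = True
plaque = np.zeros((10, 10), dtype=bool); plaque[0:2, :] = True
pb = plaque_burden(plaque, vessel)              # -> 20.0 (%)
d_max, d_min, d_mean = lumen_metrics(lumen, mm_per_pixel=0.1)
```

Repeating this per frame yields the per-position values that are recorded against the long axis of the vessel.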
 本開示の画像処理装置3は、血管の長軸方向の位置に対する平均内腔径の分布、及び、プラークバーデンの分布をそれぞれグラフ化して出力する。画像処理装置3は更に、分布を示すグラフ上で、ステントを留置するために参照すべき参照部(リファレンス)、ステントのランディングゾーンの候補を出力する。 The image processing device 3 of the present disclosure outputs a graph of the distribution of the average lumen diameter and the distribution of plaque burden with respect to the position in the longitudinal direction of the blood vessel. The image processing device 3 further outputs, on the graph showing the distribution, a reference portion to be referred to for placing a stent and candidates for the landing zone of the stent.
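 One plausible way to derive reference-site candidates from these longitudinal distributions is sketched below. This is only an assumed illustration, since the disclosure does not fix the algorithm: the lesion is taken as the contiguous run of frames whose plaque burden exceeds a threshold, and the largest-lumen frame in a window on each side is picked as the proximal/distal reference.

```python
import numpy as np

def find_references(mean_diam, burden, burden_thresh=50.0, search=5):
    """Treat the contiguous frames whose plaque burden exceeds the
    threshold as the lesion, then choose the largest-lumen frame within
    `search` frames on either side as the proximal/distal reference.
    Returns (lesion_start, lesion_end, prox_ref, dist_ref) or None."""
    idx = np.nonzero(burden > burden_thresh)[0]
    if idx.size == 0:
        return None                              # no lesion found
    start, end = int(idx[0]), int(idx[-1])
    prox = np.arange(max(0, start - search), start)                # before lesion
    dist = np.arange(end + 1, min(len(burden), end + 1 + search))  # after lesion
    prox_ref = int(prox[np.argmax(mean_diam[prox])]) if prox.size else None
    dist_ref = int(dist[np.argmax(mean_diam[dist])]) if dist.size else None
    return start, end, prox_ref, dist_ref

# toy pullback of 20 frames with a lesion at frames 8..11
burden = np.full(20, 30.0); burden[8:12] = 70.0
diam = np.full(20, 3.0); diam[5] = 3.6; diam[14] = 3.4
result = find_references(diam, burden)           # -> (8, 11, 5, 14)
```

The segment between the proximal and distal references would then serve as a first candidate for the stent landing zone.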
 画像処理装置3による処理手順を、フローチャートを参照して説明する。図5及び図6は、画像処理装置3による情報処理手順の一例を示すフローチャートである。画像処理装置3の処理部30は、カテーテル1のイメージングデバイス11から信号が出力されると以下の処理を開始する。 The processing procedure by the image processing device 3 will be explained with reference to a flowchart. Figures 5 and 6 are flowcharts showing an example of the information processing procedure by the image processing device 3. The processing unit 30 of the image processing device 3 starts the following processing when a signal is output from the imaging device 11 of the catheter 1.
 処理部30は、カテーテル1のイメージングデバイス11からの信号(データ)を、IVUS及びOCTの両方について、所定量(例えば360°)分取得する都度(ステップS101)、断層画像I11,I12を生成する(ステップS102)。ステップS102において処理部30は、IVUS及びOCTそれぞれについて、矩形に並べた信号を極座標変換(逆変換)して断層画像I11,I12を生成する。 The processing unit 30 generates tomographic images I11 and I12 (step S102) each time it acquires a predetermined amount (e.g., 360°) of signals (data) from the imaging device 11 of the catheter 1 for both IVUS and OCT (step S101). In step S102, the processing unit 30 performs polar coordinate conversion (inverse conversion) on the signals arranged in a rectangle for each of the IVUS and OCT to generate the tomographic images I11 and I12.
 処理部30は、IVUS及びOCTそれぞれについて、ステップS101で取得した信号データと、ステップS102で生成した断層画像I11,I12とを血管の長軸上の位置に対応付けて記憶部31に記憶する(ステップS103)。 The processing unit 30 stores the signal data acquired in step S101 and the tomographic images I11 and I12 generated in step S102 for each of the IVUS and OCT in the memory unit 31 in association with positions on the long axis of the blood vessel (step S103).
 処理部30は、IVUS用の第1モデル311Mへ、IVUSの断層画像I11を入力する(ステップS104)。処理部30は、IVUS用の第1モデル311Mから出力される領域の識別結果(タグ画像IS1)を、血管の長軸上の位置に対応付けて記憶部31に記憶する(ステップS105)。 The processing unit 30 inputs the IVUS tomographic image I11 to the first IVUS model 311M (step S104). The processing unit 30 stores the region identification result (tag image IS1) output from the first IVUS model 311M in the memory unit 31 in association with the position on the long axis of the blood vessel (step S105).
 処理部30は、OCT用の第2モデル312Mへ、OCTの断層画像I12を入力する(ステップS106)。処理部30は、OCTの第2モデル312Mから出力される領域の識別結果(タグ画像IS2)を、血管の長軸上の位置に対応付けて記憶部31に記憶する(ステップS107)。 The processing unit 30 inputs the OCT tomographic image I12 to the second model 312M for OCT (step S106). The processing unit 30 stores the region identification result (tag image IS2) output from the second model 312M for OCT in the memory unit 31 in association with the position on the long axis of the blood vessel (step S107).
 処理部30は、IVUSの断層画像I11に対する領域の識別結果(タグ画像IS1)と、OCTの断層画像I12に対する領域の識別結果(タグ画像IS2)とに基づき、断層画像I11及び断層画像I12から、必要な領域画像を抽出する(ステップS108)。ステップS108において処理部30は例えば、IVUSの断層画像I11から中膜範囲、及び外膜範囲それぞれに対応する膜範囲の領域画像、脂質性プラークの範囲の領域画像を抽出し、OCTの断層画像I12から、内腔範囲の領域画像、繊維性プラーク及び石灰化プラークの範囲の領域画像を抽出する。すなわち処理部30は、IVUS及びOCTそれぞれから、範囲の識別が明瞭となる解剖学的特徴及び病変部の範囲を適切に抽出する。 The processing unit 30 extracts necessary area images from the tomographic images I11 and I12 based on the area identification result for the IVUS tomographic image I11 (tag image IS1) and the area identification result for the OCT tomographic image I12 (tag image IS2) (step S108). In step S108, the processing unit 30 extracts, for example, area images of the membrane area corresponding to the media area and adventitia area, and area images of the lipid plaque area from the IVUS tomographic image I11, and extracts area images of the lumen area and area images of the fibrous plaque and calcified plaque areas from the OCT tomographic image I12. That is, the processing unit 30 appropriately extracts anatomical features and lesion areas from each of the IVUS and OCT images that allow clear area identification.
 The processing unit 30 synthesizes the extracted region images to create a corrected tomographic image (step S109). In step S109, the processing unit 30 preferably calculates the coordinate (angle) shift between the IVUS tomographic image I11 and the OCT tomographic image I12 so that the two can be superimposed on each other without problems. For the corrected tomographic image, the processing unit 30 calculates data indicating anatomical features, including the maximum diameter, minimum diameter, and mean inner diameter of the range inside the lumen boundary of the blood vessel, and the plaque burden (step S110).
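The feature computation in step S110 can be sketched as follows, assuming the lumen boundary is described by per-angle radii from the catheter center and that plaque burden is the usual (vessel area − lumen area) / vessel area ratio; the function names are illustrative only.

```python
import numpy as np

def lumen_diameters(radii_mm):
    """Per-angle lumen radii (one value per scan line, assumed evenly spaced
    over 360 deg) -> (max, min, mean) diameter. The diameter along a given
    direction is the sum of the two opposite radii."""
    r = np.asarray(radii_mm, dtype=float)
    half = len(r) // 2
    d = r[:half] + r[half:]              # pair each ray with its opposite ray
    return d.max(), d.min(), d.mean()

def plaque_burden(lumen_area_mm2, vessel_area_mm2):
    """Plaque burden (%) = (vessel area - lumen area) / vessel area * 100."""
    return 100.0 * (vessel_area_mm2 - lumen_area_mm2) / vessel_area_mm2
```

These per-frame values are what gets stored against the long-axis position in step S111.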
 The processing unit 30 stores the data indicating the anatomical features calculated in step S110 (the mean inner diameter inside the lumen boundary, the plaque burden, and the like) in the storage unit 31 in association with the position on the long axis of the blood vessel (step S111). In step S111, for lipid plaque, fibrous plaque, or calcified plaque, the processing unit 30 may also store the angle ranges of those plaques identified in the tomographic images I11 and I12.
 The processing unit 30 determines whether scanning by the imaging device 11 of the catheter 1 has been completed (step S112). If it is determined that scanning has not been completed (S112: NO), the processing unit 30 returns the process to step S101 and generates the next tomographic images I11 and I12.
 If it is determined that scanning has been completed (S112: YES), the processing unit 30 creates and outputs a graph showing the distribution of the data indicating the anatomical features over the entire longitudinal direction of the scanned blood vessel (step S113).
 The processing unit 30 identifies the position and range of the lesion (plaque) on the long axis based on the various data stored in the storage unit 31 in association with positions on the long axis of the blood vessel (step S114). In step S114, the processing unit 30 identifies as a plaque range, for example, the positions on the long axis where the plaque burden remains at or above a set percentage threshold (e.g., 50%) continuously over at least a set length threshold (e.g., 4 mm). In step S114, the processing unit 30 identifies the position of fibrous plaque based on, for example, whether the fibrous plaque stored in step S107 is identified in the tag image IS2 at each position.
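The continuity test in step S114 (plaque burden at or above a percentage threshold over at least a length threshold) can be sketched as below; the frame-spacing parameter and the function name are assumptions for illustration.

```python
import numpy as np

def lesion_ranges(burden_pct, spacing_mm, pct_thr=50.0, len_thr_mm=4.0):
    """Return (start_mm, end_mm) long-axis ranges where the plaque burden is
    >= pct_thr continuously over at least len_thr_mm. `burden_pct` holds one
    value per frame; `spacing_mm` is the assumed frame pitch along the axis."""
    above = np.asarray(burden_pct) >= pct_thr
    ranges, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i                      # a run of above-threshold frames begins
        if (not a or i == len(above) - 1) and start is not None:
            end = i if a else i - 1        # last frame of the run
            if (end - start + 1) * spacing_mm >= len_thr_mm:
                ranges.append((start * spacing_mm, (end + 1) * spacing_mm))
            start = None
    return ranges
```

Runs shorter than the length threshold are discarded rather than reported as lesions.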
 The processing unit 30 determines a reference area for the identified lesion based on the position and range of the lesion on the long axis (step S115). In step S115, the processing unit 30 determines as the reference area the part with the largest lumen diameter before and after the range of the lesion, within a predetermined range (10 mm) from that range and short of any location where a large side branch is present. In step S115, if the reference area overlaps a position where lipid plaque is present, the processing unit 30 re-determines as the reference area the position with the lowest plaque burden outside the range of the lipid plaque.
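A minimal sketch of the reference-area rule in step S115, assuming per-frame summaries are already available and that the candidate window (frames within 10 mm of the lesion edge, short of any large side branch) has been precomputed; all names here are illustrative.

```python
def pick_reference(mean_diam_mm, lipid_present, burden_pct, candidates):
    """candidates: frame indices in the allowed window around the lesion.
    Pick the largest-lumen frame; if lipid plaque lies there, fall back to
    the lowest-plaque-burden frame among lipid-free candidates."""
    best = max(candidates, key=lambda i: mean_diam_mm[i])
    if not lipid_present[best]:
        return best
    non_lipid = [i for i in candidates if not lipid_present[i]]
    if not non_lipid:
        return best                      # no lipid-free frame in the window
    return min(non_lipid, key=lambda i: burden_pct[i])
```

Note the fallback selects by lowest plaque burden, mirroring the re-determination described above.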
 The processing unit 30 stores the position of the determined reference area (step S116). The processing unit 30 outputs to the display device 4, on the graph displayed in step S113, the position and range on the long axis of the lesion identified in step S114 and a graphic indicating the reference area determined in step S115 (step S117). The processing unit 30 then ends the process.
 The contents output by the processing procedures shown in Figures 5 and 6 will be described with specific examples. Figure 7 shows an example of a screen 400 displayed on the display device 4. The screen 400 shown in Figure 7 is displayed after scanning of the blood vessel from proximal to distal has been completed, in order to output the type, size, and placement position of a stent.
 The screen 400 includes a cursor 401 indicating the position on the long axis of the blood vessel corresponding to the tomographic image I11 or I12 to be displayed, the tomographic images I11 and I12 generated based on the signals obtained at that position, and the created corrected tomographic image I3. The screen 400 includes a data field 402 displaying the numerical values of the data indicating the anatomical features calculated by image processing of the tomographic images I11, I12, and I3. The corrected tomographic image I3, the IVUS tomographic image I11, and the OCT tomographic image I12 may be switched each time a selection is made on the screen 400.
 The screen 400 further includes graphs 403 and 404 showing the distributions of the data indicating the anatomical features with respect to position on the long axis of the blood vessel. Graph 403 shows the distribution of the mean lumen diameter with respect to position on the long axis. Graph 404 shows the distribution of the plaque burden with respect to position on the long axis.
 A graphic 405 is displayed superimposed on graph 404. The graphic 405 indicates a range in which the portion where the plaque burden is at or above the percentage threshold (here, 50%) continues for 2 mm or more in the longitudinal direction. An examination operator or other medical personnel viewing graph 404 with the graphic 405 superimposed can grasp that, in the range of the blood vessel where the graphic 405 is displayed, the plaque burden is at or above the threshold over a length of 2 mm or more.
 On graph 404, a graphic 406 indicating the presence of lipid plaque and a graphic 407 indicating the presence of fibrous or calcified plaque are displayed at the corresponding positions on the long axis. Graph 404 further displays a bar 408 indicating the position of the reference area relative to the lesion.
 Because graphs 403 and 404 are arranged side by side, an examination operator or medical practitioner viewing the screen 400 shown in Figure 7 can recognize the range where the plaque burden is below the threshold and the mean lumen diameter is large, and the range where the plaque burden is at or above the threshold and the mean lumen diameter is small. The examination operator or medical practitioner can regard the range where the plaque burden is at or above the threshold and the mean lumen diameter is small as the lesion, and can use this information to decide what kind of balloon or stent should be used to expand the lesion from the inside. After expanding the lesion with a balloon, the examination operator or medical practitioner scans the catheter 1 again and checks the screen 400 in the same way based on the obtained signals. In this case, the examination operator or medical practitioner can decide where the stent should be placed by referring to the graphic 405 indicating the range of the lesion, the graphic 406 indicating the range of lipid plaque, the graphic 407 indicating the range of fibrous plaque, and the bar 408 indicating the reference area.
 The position of the bar 408 indicating the reference area on the screen 400 in Figure 7 has been changed based on information on other lesions in the vicinity of the lesion range. Figure 8 is a diagram showing the process of changing the position of the reference area. Figure 8 shows, one above the other, graphs 404 indicating the distribution of the plaque burden with respect to position on the long axis. The upper graph 404 shows the state before the change, and the lower graph 404 the state after the change.
 Before the change, in graph 404, a reference area has provisionally been determined as the part with the largest lumen diameter among the positions on the long axis within 10 mm proximal and within 10 mm distal of the lesion spanning the range of the hatched graphic 405, short of any location where a large side branch is present. However, the image processing performed by the processing unit 30 on the tomographic images I11 and I12 has identified that soft lipid plaque is present at the position provisionally determined as the reference area. If lipid plaque is present at the provisionally determined reference position, the processing unit 30 re-determines, as the reference area, the position within the same 10 mm range that has the largest lumen diameter among the positions where no lipid plaque is present. If fibrous or calcified plaque is present at the re-determined position, the processing unit 30 need not avoid this position and determine the reference area yet again. This is because a location with calcified plaque may be regarded as more suitable for stent placement than a location with lipid plaque.
 On the screen 400 shown in Figure 7, the bar 408 indicating the reference area is displayed. The screen 400 is not limited to this, and may display a graphic indicating the landing zone of a stent as information for placing the stent in the luminal organ. Figure 9 shows another example of the screen 400 displayed on the display device 4.
 In Figure 9, not only are bars 408 displayed indicating the reference areas proximal and distal to the lesion, but a graphic 409 is also displayed indicating the landing zone, i.e., the range of positions on the long axis of the blood vessel that the stent contacts for fixation. The screen 400 in Figure 9 also displays the dimension from the proximal end to the distal end of the landing zone. By viewing the graphic 409 and its associated dimension, the examination operator or medical practitioner can decide what size of stent should be placed and how.
 Based on the dimension from the proximal end to the distal end of the landing zone shown on the screen 400 of Figure 9, the processing unit 30 of the image processing device 3 may refer to pre-stored sizes for each stent model number and output stent proposal information on the screen 400.
 (Second Embodiment)
 The second embodiment uses a plaque detection model 32M trained to output, when an IVUS tomographic image I11 is input, whether lipid plaque is present and, if present, its position.
 The configuration of the imaging diagnostic device 100 of the second embodiment is the same as that of the first embodiment, except for the plaque detection model 32M stored in the image processing device 3 and the details of the processing by the processing unit 30, described below. Therefore, among the configurations of the imaging diagnostic device 100 of the second embodiment, those common to the imaging diagnostic device 100 of the first embodiment are given the same reference numerals and detailed descriptions are omitted.
 Figure 10 is a block diagram showing the configuration of the image processing device 3 of the second embodiment. In addition to the segmentation model 31M, the plaque detection model 32M is stored in the storage unit 31 of the image processing device 3. The plaque detection model 32M may be a copy of a plaque detection model 92M stored in a non-transitory storage medium 9 outside the device, read out via the input/output I/F 32. The plaque detection model 32M may also be one distributed by a remote server device, acquired by the image processing device 3 via a communication unit (not shown), and stored in the storage unit 31.
 Figure 11 is a schematic diagram of the plaque detection model 32M. The plaque detection model 32M is a model that, when an IVUS tomographic image I11 or a rectangular image I01 is input, outputs the confidence that lipid plaque appears in the input image. The confidence may be output as a value close to "1" if even one region of lipid plaque appears anywhere in the input image, or it may be divided among the radial directions corresponding to the scanning signals in the input image and output for each angle from a radial reference. In the latter case, a confidence is output for each piece of angle information in the tomographic image I11, such as the 12 o'clock angle (0° from a reference straight line extending upward from the image center or vessel center) and the 2 o'clock angle (60° from the same reference line). Like the segmentation model 31M, the plaque detection model 32M may instead output the result of identifying the range of the input image in which lipid plaque appears.
 The plaque detection model 32M is a model using a neural network comprising an input layer 321, an intermediate layer 322, and an output layer 323. The input layer 321 receives a two-dimensional signal distribution, i.e., image data. The output layer 323 outputs the confidence that lipid plaque is present. The output layer 323 may output this confidence in the form of an array of 180 elements, one per 2°: 0°, 2°, 4°, ..., 356°, 358°.
 In this way, the processing unit 30 can input the IVUS tomographic image I11, in which lipid plaque is easy to detect, or the rectangular image I01 into the input layer 321 and obtain the output confidence. The processing unit 30 may obtain from the plaque detection model 32M the array of per-angle confidences that lipid plaque is present at each angle, and take the contiguous portions where the confidence is at or above a predetermined value as the angle range of lipid plaque.
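Turning the per-angle confidence array into lipid-plaque angle ranges, as described above, might look like the following sketch (the 0.5 threshold is an assumed example, and wrap-around across 358°→0° is ignored for brevity):

```python
def lipid_angle_ranges(conf, thr=0.5, step_deg=2):
    """conf: 180 confidences for 0, 2, ..., 358 deg. Return contiguous
    [start_deg, end_deg) runs where conf >= thr."""
    runs, start = [], None
    for i, c in enumerate(conf):
        if c >= thr and start is None:
            start = i                              # run begins
        elif c < thr and start is not None:
            runs.append((start * step_deg, i * step_deg))
            start = None
    if start is not None:                          # run reaches 358 deg
        runs.append((start * step_deg, len(conf) * step_deg))
    return runs
```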
 The plaque detection model 32M is created and trained in advance by the image processing device 3 or another processing device. The training data are annotated IVUS tomographic images I11 or rectangular images I01. The annotation is data indicating the presence or absence of lipid plaque (for example, a probability of "1" if present and "0" if absent). That is, the plaque detection model 32M is trained using, as training data, IVUS tomographic images I11 or rectangular images I01 for which the presence or absence of lipid plaque has already been determined. If the plaque detection model 32M is of the type that outputs the confidence of lipid plaque for each angle, the annotation is an array of data indicating the presence or absence of lipid plaque for each angle. The annotation is created so as to indicate, for each angle, whether lipid plaque is present, based on IVUS tomographic images I11 or rectangular images I01 in which the location of the lipid plaque within the image is known. For example, the training data are created by appending the presence or absence of lipid plaque for each angle, as an array in angle order, to the IVUS tomographic image I11 or rectangular image I01 as an annotation.
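Creating the per-angle annotation array from known plaque extents could be sketched as follows; the 2° step matches the 180-element output format described above, while the function name is illustrative:

```python
def angles_to_label_array(plaque_ranges_deg, step_deg=2):
    """Known lipid-plaque extents [(start_deg, end_deg), ...] -> 0/1 array
    for 0, 2, ..., 358 deg, used to annotate one training image."""
    n = 360 // step_deg
    labels = [0] * n
    for start, end in plaque_ranges_deg:
        for i in range(n):
            if start <= i * step_deg < end:
                labels[i] = 1
    return labels
```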
 Figures 12 and 13 are flowcharts showing an example of the information processing procedure by the image processing device 3 of the second embodiment. Among the processing procedures shown in Figures 12 and 13, steps common to those shown in the flowcharts of Figures 5 and 6 of the first embodiment are given the same step numbers and detailed descriptions are omitted.
 In the second embodiment, after storing the data indicating the anatomical features calculated based on the range identification performed on each of the IVUS tomographic image I11 and the OCT tomographic image I12 (S111), the processing unit 30 executes the following processing.
 The processing unit 30 inputs the IVUS tomographic image I11 or the rectangular image I01 into the plaque detection model 32M (step S121). The processing unit 30 determines the presence or absence of lipid plaque based on the confidence information output from the plaque detection model 32M (step S122). In step S122, the processing unit 30 may determine that lipid plaque is present when, including the preceding and following tomographic images I11 or rectangular images I01, lipid plaque continues in the longitudinal direction for at least a length threshold. In step S122, the processing unit 30 may also determine the range on the tomographic image I11 or the angle range.
 The processing unit 30 stores the presence or absence of lipid plaque (or the identified range) determined in step S122 in association with the position on the long axis of the blood vessel (step S123), and advances the process to step S112.
 In the second embodiment, after identifying the position and range of the lesion on the long axis in step S114 (S114), the processing unit 30 identifies the locations on the long axis of the blood vessel where lipid plaque is present, based on the information stored in step S123 (step S124). In step S117, the processing unit 30 displays a graphic showing the position and range of the lesion (lipid plaque, fibrous plaque, calcified plaque, etc.) identified in step S114 and the lipid plaque locations identified in step S124 (S117), and ends the process.
 The presence or absence of lipid plaque is easier to determine with IVUS than with OCT, because IVUS can observe deeper into the membrane range of the blood vessel (toward the adventitia); however, delineating its region is difficult because lipid plaque is soft. For this reason, in addition to identifying regions with the segmentation model 31M using both IVUS and OCT, the image processing device 3 uses the neural-network-based plaque detection model 32M on the images to determine, specifically for lipid plaque, its presence or absence at each position. This improves the accuracy of recognizing the range of lipid plaque, which is recommended to be avoided as a stent placement site. The lipid plaque range can thus be visually confirmed with high accuracy, and a reference area avoiding the position of the lipid plaque can be determined.
 The processing unit 30 of the image processing device 3 may use not only the plaque detection model 32M for detecting lipid plaque described above, but also a learning model that, when an IVUS tomographic image I11 or an OCT tomographic image I12 is input, detects whether a side branch appears in the tomographic image. In this case, the processing unit 30 uses the learning model to determine whether a side branch appears in the tomographic images I11 and I12 corresponding to each position on the long axis of the blood vessel, and stores the locations where a side branch appears. The processing unit 30 may also calculate the size of the side branch. The position on the long axis where the side branch is present and its size are displayed on or near the graph 404 of the screen 400 as shown in Figure 9.
 Figure 14 shows an example of the screen 400 displayed on the display device 4. Among the components of the screen 400 shown in Figure 14, those common to the screen 400 shown in Figure 7 of the first embodiment are given the same reference numerals and detailed descriptions are omitted.
 On the screen 400 shown in Figure 14, in addition to the graphic 406 indicating the range of lipid plaque and the graphic 407 indicating the presence of fibrous or calcified plaque, a black diamond mark 410 indicating the position where a side branch is present is displayed. Furthermore, a numerical value indicating the size (diameter) of the side branch is displayed near the mark 410. This allows the examination operator or medical practitioner viewing the screen 400 to decide the stent placement position while recognizing the presence or absence of a side branch and its size.
 (Third Embodiment)
 In the third embodiment, the image processing device 3 uses a learning model that outputs information on a suitable stent when the tomographic images I11 and I12, or I3, obtained by a single imaging session using the catheter 1 are input.
 The configuration of the imaging diagnostic device 100 of the third embodiment is the same as that of the first embodiment, except for the stent information model 33M stored in the image processing device 3 and the details of the processing by the processing unit 30, described below. Therefore, among the configurations of the imaging diagnostic device 100 of the third embodiment, those common to the imaging diagnostic device 100 of the first embodiment are given the same reference numerals and detailed descriptions are omitted.
 Figure 15 is a block diagram showing the configuration of the image processing device 3 of the third embodiment. In addition to the segmentation model 31M, a stent information model 33M is stored in the storage unit 31 of the image processing device 3. The stent information model 33M may be a copy of a stent information model 93M stored in a non-transitory storage medium 9 outside the device, read out via the input/output I/F 32. The stent information model 33M may also be one distributed by a remote server device, acquired by the image processing device 3 via a communication unit (not shown), and stored in the storage unit 31.
 Figure 16 is a schematic diagram of the stent information model 33M. The stent information model 33M is a learning model using a neural network comprising an input layer 331, an intermediate layer 332, and an output layer 333. The input layer 331 receives image data visualizing a graph showing the distribution of the plaque burden in the longitudinal direction and image data visualizing a graph showing the distribution of the mean lumen diameter in the longitudinal direction. The graphic 406 indicating the presence of lipid plaque and the graphic 407 indicating the presence of fibrous or calcified plaque may be superimposed on the graphs. The output layer 333 outputs an array of suitability scores for multiple types of stents, each score associated with the identification data of one of those stent types.
 The input layer 331 is not limited to image data, and may instead receive a group of numerical values indicating the distribution of the plaque burden in the longitudinal direction (plaque burden values versus position on the long axis) and a group of numerical values indicating the distribution of the mean lumen diameter in the longitudinal direction (mean lumen diameter values versus position on the long axis).
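One way to assemble such numeric groups into a fixed-length model input is sketched below, assuming both longitudinal profiles are resampled to a common number of positions (256 here, an arbitrary choice) and concatenated; the function names are illustrative only.

```python
import numpy as np

def build_input_vector(burden_pct, mean_diam_mm, n=256):
    """Resample the two longitudinal profiles to n positions each and
    concatenate them into one vector of length 2*n for the input layer."""
    def resample(v):
        x = np.linspace(0.0, 1.0, len(v))    # normalized long-axis positions
        xi = np.linspace(0.0, 1.0, n)
        return np.interp(xi, x, np.asarray(v, dtype=float))
    return np.concatenate([resample(burden_pct), resample(mean_diam_mm)])
```

Resampling makes vessels scanned over different lengths comparable at the model input.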
 The stent information model 33M is described below as being created in advance by the image processing device 3, but it may instead be created and trained in advance by another processing device.
 Figure 17 is a flowchart showing an example of the process of generating the stent information model 33M.
 The processing unit 30 reads out the distribution of the plaque burden in the longitudinal direction and the distribution of the mean lumen diameter in the longitudinal direction stored during a past diagnosis of a blood vessel performed with a single use of the catheter 1 (step S301). The processing unit 30 also reads out the positions and ranges on the long axis of lipid plaque, fibrous plaque, or calcified plaque stored during that single diagnosis using the catheter 1 (step S302).
 The processing unit 30 creates an image in which a graphic indicating the positions of the lipid plaque, fibrous plaque, or calcified plaque identified in step S302 is superimposed on the distributions obtained in step S301 (step S303).
 The processing unit 30 identifies, from the procedure record, the identification data of the stent used based on that single diagnosis with the catheter 1 (step S304).
 処理部30は、作成したプラークバーデンの長軸方向に対する分布を示すグラフの画像と、平均内腔径の長軸方向に対する分布を示すグラフの画像と、ステップS304で特定したステントの識別データとの組を教師データとして記憶する(ステップS305)。 The processing unit 30 stores, as training data, a set of an image of the graph showing the distribution of plaque burden in the longitudinal direction, an image of the graph showing the distribution of average lumen diameter in the longitudinal direction, and the identification data of the stent identified in step S304 (step S305).
 ステップS301-S305の処理は、できる限り多数のデータが集まるまで行なわれることが望ましい。 It is desirable to carry out the processing of steps S301-S305 until as much data as possible is collected.
 When a group of values representing the longitudinal distribution of plaque burden and a group of values representing the longitudinal distribution of mean lumen diameter are input to the stent information model 33M instead of image data, step S303 is omitted. In that case, in step S305 the processing unit 30 stores each pair of value groups together with the stent identification data as training data.
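The assembly of one supervised example in steps S301-S305 can be sketched as follows. This is an illustrative sketch only: the function name, the dictionary keys, and the stent identifier are assumptions for explanation and are not part of the disclosure.

```python
def build_training_example(plaque_burden, mean_lumen_diameter,
                           plaque_regions, stent_id):
    """Pair the per-position distributions and plaque annotations with
    the stent actually used, as one (features, label) training example."""
    features = {
        "plaque_burden": list(plaque_burden),             # value per longitudinal position
        "mean_lumen_diameter": list(mean_lumen_diameter),  # value per longitudinal position
        "plaque_regions": list(plaque_regions),            # (kind, start, end) on the long axis
    }
    return features, stent_id

# Illustrative vessel segment with one lipid plaque spanning positions 1-3.
burden = [0.2, 0.55, 0.71, 0.68, 0.3]
lumen = [3.1, 2.4, 2.0, 2.1, 2.9]
regions = [("lipid", 1, 3)]
x, y = build_training_example(burden, lumen, regions, "STENT-3.0x18")
```

Repeating this pairing over many past procedures yields the collection of training data referred to in step S305.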
 The processing unit 30 inputs the graph images of the stored training data to the input layer 331 of the stent information model 33M before training is complete (step S306). The processing unit 30 calculates a loss from the eligibility output by the output layer 333 for each stent identification datum and the eligibility of the stent corresponding to the input image, and thereby learns (updates) the parameters of the intermediate layer 332 (step S307).
 The processing unit 30 determines whether a learning condition is satisfied (step S308); if it determines that the condition is not satisfied (S308: NO), the processing returns to step S306 and training continues.
 If the learning condition is satisfied (S308: YES), the processing unit 30 stores descriptive data indicating the network configuration and conditions of the stent information model 33M, together with the parameters of the intermediate layer 332, in the storage unit 31 or another storage medium (step S309), and ends the model generation process. The processing unit 30 may also execute steps S301-S305 in advance and then execute steps S306-S309 on the collected training data.
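The loop of steps S306-S308 can be sketched as follows. As an assumption for illustration, a one-layer softmax classifier stands in for the stent information model 33M, with a cross-entropy loss between the per-stent eligibility output and the stent actually used; the real model's architecture and feature encoding are not specified here.

```python
import math

def softmax(z):
    """Turn raw scores into an eligibility distribution over stents."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def scores(w, x):
    # One linear score per candidate stent (row of weights w).
    return [sum(wk[j] * x[j] for j in range(len(x))) for wk in w]

def train(examples, n_features, n_stents, lr=0.5, epochs=200):
    """Steps S306-S308: repeat until the learning condition (here, a
    fixed epoch count) is met, updating parameters from the loss gradient."""
    w = [[0.0] * n_features for _ in range(n_stents)]
    for _ in range(epochs):
        for x, target in examples:        # target: index of the stent used
            p = softmax(scores(w, x))
            for k in range(n_stents):     # gradient of cross-entropy loss
                g = p[k] - (1.0 if k == target else 0.0)
                for j in range(n_features):
                    w[k][j] -= lr * g * x[j]
    return w

def eligibility(w, x):
    return softmax(scores(w, x))

# Toy data: (mean plaque burden, mean lumen diameter) -> stent index.
data = [([0.2, 3.0], 0), ([0.7, 2.0], 1), ([0.1, 3.2], 0), ([0.8, 1.8], 1)]
w = train(data, n_features=2, n_stents=2)
```

In practice the learning condition of step S308 might also be a loss threshold or validation criterion rather than a fixed number of epochs.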
 The stent information model 33M is thus generated so as to output information on an appropriate stent when given a data group representing the anatomical features obtained by scanning with the diagnostic catheter 1 (or image data visualizing that data group).
 The processing unit 30 can input image data visualizing the mean-lumen-diameter graph and image data visualizing the plaque-burden graph to the stent information model 33M and obtain the output array of eligibility values. Based on the identification data of stents whose eligibility is equal to or greater than a predetermined value, the processing unit 30 can create proposal information for the stent to be placed.
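The selection from the eligibility array can be sketched as follows; the same idea underlies step S133 below. The stent identifiers and the threshold value are illustrative assumptions.

```python
def suggest_stents(eligibility_by_id, threshold=0.5):
    """Keep the stents whose eligibility is at or above the predetermined
    value, best first, as the basis for the proposal information."""
    picks = [(sid, e) for sid, e in eligibility_by_id.items() if e >= threshold]
    return sorted(picks, key=lambda p: p[1], reverse=True)

# Hypothetical eligibility array keyed by stent identification data.
elig = {"STENT-2.5x12": 0.12, "STENT-3.0x18": 0.81, "STENT-3.5x23": 0.55}
proposals = suggest_stents(elig)
```

The resulting list can then be rendered as product numbers and sizes on the display device 4.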
 FIGS. 18 and 19 are flowcharts showing an example of the information processing procedure performed by the image processing device 3 of the third embodiment. Of the processing procedures shown in FIGS. 18 and 19, steps common to the procedures shown in the flowcharts of FIGS. 5 and 6 of the first embodiment are given the same step numbers, and detailed description of them is omitted.
 In the third embodiment, when the processing unit 30 determines that scanning of the blood vessel is complete (S112: YES) and outputs the graphs of anatomical features together with graphics indicating the position and range of the lesion and the reference portions (S117), it executes the following processing.
 The processing unit 30 creates an image in which the graphics indicating the positions of lipid plaque, fibrous plaque, or calcified plaque output in step S117 are superimposed on the graphs of anatomical features (the longitudinal distribution of plaque burden and the longitudinal distribution of mean lumen diameter) (step S131).
 The processing unit 30 inputs the created image to the stent information model 33M (step S132). Based on the eligibility array output by the stent information model 33M, the processing unit 30 extracts the identification data of stents whose eligibility is equal to or greater than a predetermined value (step S133).
 The processing unit 30 outputs information such as the product number and size of each stent identified by the identification data extracted in step S133 to the screen displayed on the display device 4 (step S134), and ends the processing.
 FIG. 20 shows an example of a screen 400 displayed on the display device 4. Components of the screen 400 in FIG. 20 that are common to the screen 400 shown in FIG. 7 of the first embodiment are given the same reference numerals, and detailed description of them is omitted.
 The screen 400 in FIG. 20 displays a text box 411 containing information on the recommended stents. The examination operator or medical practitioner viewing the screen 400 can thus refer to this information when selecting the stent to be placed.
 In the third embodiment, the image processing device 3 has been described as outputting information proposing a stent, but the device is not limited to this; a model may likewise be trained with a neural network to output information proposing an appropriate balloon, and new proposals may be output using the trained model.
 The embodiments disclosed above are illustrative in all respects and not restrictive. The scope of the present invention is defined by the claims and includes all modifications within the meaning and scope equivalent to the claims.
100 Image diagnostic device
3 Image processing device
30 Processing unit
31 Storage unit
31M Segmentation model
33M Stent information model
4 Display device
5 Input device

Claims (13)

  1.  A computer program causing a computer to execute processing of:
     calculating data indicating anatomical features of a hollow organ based on a signal output from an imaging device provided in a catheter inserted into the hollow organ;
     identifying, based on the calculated data indicating the anatomical features, a range on the longitudinal axis of the hollow organ in which a lesion in the hollow organ exists; and
     outputting information for placing a stent in the hollow organ based on the identified range of the lesion on the longitudinal axis.
  2.  The computer program according to claim 1, causing the computer to execute processing of:
     outputting information on a landing zone of the stent as the information for placing the stent in the hollow organ.
  3.  The computer program according to claim 1, causing the computer to execute processing of:
     outputting, as the information for placing the stent in the hollow organ, information on the positions of reference portions, which are portions with a larger lumen before and after the range of the lesion on the longitudinal axis.
  4.  The computer program according to claim 3, causing the computer to execute processing of:
     changing the position of a reference portion based on information on another lesion in the vicinity of the range of the lesion on the longitudinal axis; and
     outputting information on the changed position of the reference portion.
  5.  The computer program according to claim 3 or 4, causing the computer to execute processing of:
     outputting a proposal for the size of the stent to be placed, based on the data indicating the anatomical features and the longitudinal positions of the reference portions.
  6.  The computer program according to claim 1, wherein the imaging device is a catheter device including a transmitter and a receiver for each of waves of different wavelengths.
  7.  The computer program according to claim 1, wherein the imaging device is a dual-type catheter device including a transmitter and a receiver for each of IVUS (Intravascular Ultrasound) and OCT (Optical Coherence Tomography).
  8.  The computer program according to claim 6 or 7, wherein the lesion comprises different types of lesions including lipid plaque, fibrous plaque, and calcified plaque, the computer program causing the computer to execute processing of:
     identifying, for each of the different types of lesions, its range on the longitudinal axis of the hollow organ based on signals from the catheter device corresponding to the different types of lesions.
  9.  The computer program according to claim 7, causing the computer to execute processing of:
     calculating the longitudinal distribution of plaque burden from tomographic images of the hollow organ based on signals obtained from the IVUS sensor;
     identifying the longitudinal positions of lipid plaque or fibrous plaque in the hollow organ based on signals obtained from the OCT sensor; and
     outputting information for placing a stent based on the distribution of plaque burden and the positions of the lipid plaque or fibrous plaque.
  10.  The computer program according to claim 7, causing the computer to execute processing of:
     calculating the longitudinal distribution of plaque burden from tomographic images of the hollow organ based on signals obtained from the IVUS sensor;
     identifying the positions of lipid plaque along the length of the hollow organ based on the signals obtained from the IVUS sensor;
     identifying the longitudinal positions of lipid plaque or fibrous plaque in the hollow organ based on signals obtained from the OCT sensor; and
     outputting information for placing a stent based on the distribution of plaque burden and the positions of the lipid plaque or fibrous plaque.
  11.  An information processing method in which a computer that acquires a signal output from an imaging device provided in a catheter inserted into a hollow organ:
     calculates data indicating anatomical features of the hollow organ based on the signal output from the imaging device provided in the catheter inserted into the hollow organ;
     identifies, based on the calculated data indicating the anatomical features, a range on the longitudinal axis of the hollow organ in which a lesion in the hollow organ exists; and
     outputs information for placing a stent in the hollow organ based on the identified range of the lesion on the longitudinal axis.
  12.  An information processing device that acquires a signal output from an imaging device provided in a catheter inserted into a hollow organ, the information processing device comprising:
     a storage unit that stores a trained model which, when a tomographic image of the hollow organ based on the signal is input, outputs data discriminating the ranges of tissues or lesions appearing in the tomographic image; and
     a processing unit that executes image processing based on the signal from the imaging device,
     wherein the processing unit:
     inputs an image based on the signal output from the imaging device provided in the catheter inserted into the hollow organ to the model;
     calculates data indicating anatomical features of the hollow organ based on the data output from the model;
     identifies, based on the calculated data indicating the anatomical features, a range on the longitudinal axis of the hollow organ in which a lesion in the hollow organ exists; and
     outputs information for placing a stent in the hollow organ based on the identified range of the lesion on the longitudinal axis.
  13.  A learning model comprising:
     an input layer to which data relating to the distribution, along the longitudinal axis of a hollow organ, of data indicating anatomical features of the hollow organ is input;
     an output layer that outputs the eligibility of stents to be placed in a lesion of the hollow organ; and
     an intermediate layer trained on training data including such distributions and records of the stents actually used for the lesions exhibiting those distributions,
     the learning model causing a computer to function so as to supply the distribution, along the longitudinal axis of a hollow organ, of the data indicating its anatomical features to the input layer, perform computation based on the intermediate layer, and output from the output layer the eligibility of stents corresponding to the distribution.
PCT/JP2023/035280 2022-09-29 2023-09-27 Computer program, information processing method, information processing device, and learning model WO2024071251A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022156647 2022-09-29
JP2022-156647 2022-09-29

Publications (1)

Publication Number Publication Date
WO2024071251A1 true WO2024071251A1 (en) 2024-04-04

Family

ID=90477969

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/035280 WO2024071251A1 (en) 2022-09-29 2023-09-27 Computer program, information processing method, information processing device, and learning model

Country Status (1)

Country Link
WO (1) WO2024071251A1 (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008512171A (en) * 2004-09-09 2008-04-24 メディガイド リミテッド Method and system for transferring a medical device to a selected location within a lumen
JP2012200532A (en) * 2011-03-28 2012-10-22 Terumo Corp Imaging apparatus for diagnosis and display method
JP2013056113A (en) * 2011-09-09 2013-03-28 Toshiba Corp Image display device
JP2017534394A (en) * 2014-11-14 2017-11-24 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Percutaneous coronary intervention planning interface and related devices, systems, and methods
JP2019217263A (en) * 2018-05-03 2019-12-26 キヤノン ユーエスエイ, インコーポレイテッドCanon U.S.A., Inc Devices, systems and methods to emphasize regions of interest across multiple imaging modalities
JP2020503909A (en) * 2016-09-28 2020-02-06 ライトラボ・イメージング・インコーポレーテッド Method of using a stent planning system and vascular representation
JP2021517034A (en) * 2018-03-15 2021-07-15 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Determination and visualization of anatomical markers for intraluminal lesion assessment and treatment planning
WO2022054805A1 (en) * 2020-09-14 2022-03-17 テルモ株式会社 Information processing device, information processing system, information processing method, and computer program
WO2022071181A1 (en) * 2020-09-29 2022-04-07 テルモ株式会社 Information processing device, information processing method, program, and model generation method
WO2022071121A1 (en) * 2020-09-29 2022-04-07 テルモ株式会社 Information processing device, information processing method, and program
JP2022079550A (en) * 2020-06-29 2022-05-26 ライトラボ・イメージング・インコーポレーテッド Method of operating processor device

Similar Documents

Publication Publication Date Title
US11741613B2 (en) Systems and methods for classification of arterial image regions and features thereof
CN112512438A (en) System, device and method for displaying multiple intraluminal images in lumen assessment using medical imaging
WO2021199968A1 (en) Computer program, information processing method, information processing device, and method for generating model
WO2022071264A1 (en) Program, model generation method, information processing device, and information processing method
JP6170565B2 (en) Diagnostic imaging apparatus and operating method thereof
JP7489882B2 (en) Computer program, image processing method and image processing device
WO2023054467A1 (en) Model generation method, learning model, computer program, information processing method, and information processing device
WO2024071251A1 (en) Computer program, information processing method, information processing device, and learning model
WO2022071265A1 (en) Program, information processing device, and information processing method
US20220218309A1 (en) Diagnostic assistance device, diagnostic assistance system, and diagnostic assistance method
WO2024071121A1 (en) Computer program, information processing method, and information processing device
WO2021193024A1 (en) Program, information processing method, information processing device and model generating method
JP2024050056A (en) COMPUTER PROGRAM, LEARNING MODEL, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS
WO2024071322A1 (en) Information processing method, learning model generation method, computer program, and information processing device
US20240013386A1 (en) Medical system, method for processing medical image, and medical image processing apparatus
WO2021199961A1 (en) Computer program, information processing method, and information processing device
JP2023051175A (en) Computer program, information processing method, and information processing device
WO2021199966A1 (en) Program, information processing method, training model generation method, retraining method for training model, and information processing system
US20240008849A1 (en) Medical system, method for processing medical image, and medical image processing apparatus
WO2021199967A1 (en) Program, information processing method, learning model generation method, learning model relearning method, and information processing system
WO2022209652A1 (en) Computer program, information processing method, and information processing device
WO2021193018A1 (en) Program, information processing method, information processing device, and model generation method
JP7421548B2 (en) Diagnostic support device and diagnostic support system
WO2022202323A1 (en) Program, information processing method, and information processing device
WO2023189260A1 (en) Computer program, information processing device, and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23872473

Country of ref document: EP

Kind code of ref document: A1