WO2022202320A1 - Program, information processing method, and information processing device - Google Patents

Program, information processing method, and information processing device

Info

Publication number
WO2022202320A1
WO2022202320A1 (PCT/JP2022/010252)
Authority
WO
WIPO (PCT)
Prior art keywords
tomographic image
image
optical coherence
region
ultrasonic
Prior art date
Application number
PCT/JP2022/010252
Other languages
French (fr)
Japanese (ja)
Inventor
亮 上原
Original Assignee
Terumo Corporation (テルモ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Terumo Corporation (テルモ株式会社)
Publication of WO2022202320A1 publication Critical patent/WO2022202320A1/en

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes, combined with photographic or television appliances
    • A61B1/045: Control thereof
    • A61B8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters

Definitions

  • the present invention relates to a program, an information processing method, and an information processing apparatus.
  • IVUS: Intra Vascular Ultra Sound (intravascular ultrasound)
  • OCT: Optical Coherence Tomography
  • Patent Document 1 discloses a medical examination system capable of achieving optimal diagnostic image quality by generating an image in which an ultrasonic tomographic image and an optical coherence tomographic image are combined.
  • the technique of Patent Document 1, however, simply generates an image in which a central cutout of the inner side of the optical coherence tomogram is combined with the outer side of the ultrasonic tomogram, and there was room for improvement from the viewpoint of supporting efficient interpretation.
  • the purpose of the present disclosure is to provide a program or the like that can support efficient interpretation.
  • a program according to the present disclosure acquires an ultrasonic tomographic image and an optical coherence tomographic image generated based on a signal detected by a catheter inserted into a hollow organ, identifies a blurred region in one of the two images that is less clear than the corresponding part of the other, acquires complementary information for complementing the blurred region based on information on the region corresponding to the blurred region in the other image, and, based on the acquired complementary information, causes a computer to execute processing for generating the one of the ultrasonic tomographic image and the optical coherence tomographic image in which the blurred region is complemented.
  • FIG. 1 is an explanatory diagram showing a configuration example of an image diagnostic apparatus;
  • FIG. 2 is an explanatory diagram explaining an outline of a diagnostic imaging catheter;
  • FIG. 3 is an explanatory view showing a cross section of a blood vessel through which a sensor section is passed;
  • FIG. 4 is an explanatory diagram explaining tomographic images;
  • FIG. 5 is a block diagram showing a configuration example of an image processing apparatus;
  • FIG. 6 is an explanatory diagram showing an outline of a learning model;
  • FIG. 7 is an explanatory diagram explaining a method of generating a synthetic IVUS image;
  • FIG. 8 is a flowchart showing an example of a processing procedure executed by the image processing apparatus;
  • FIG. 9 is a flowchart showing an example of a detailed procedure for specifying a blurred region;
  • FIG. 10 is a schematic diagram showing an example of a screen displayed on the display device;
  • FIG. 11 is an explanatory diagram showing an outline of a learning model in the second embodiment;
  • FIG. 12 is a flowchart showing an example of a detailed procedure for specifying a blurred region in the second embodiment;
  • FIG. 13 is a flowchart showing an example of a processing procedure executed by an image processing apparatus according to the third embodiment;
  • FIG. 14 is a flowchart showing an example of a processing procedure executed by an image processing apparatus according to the third embodiment;
  • FIG. 15 is a flowchart showing an example of a detailed procedure for specifying a blurred region in the fourth embodiment.
  • an intravascular examination of a subject using a catheter will be described as an example; however, the luminal organ targeted for catheter examination is not limited to blood vessels and may be another luminal organ such as a bile duct, pancreatic duct, bronchus, or intestine.
  • FIG. 1 is an explanatory diagram showing a configuration example of an image diagnostic apparatus 100.
  • an image diagnostic apparatus using a dual-type catheter having both intravascular ultrasound (IVUS) and optical coherence tomography (OCT) functions will be described.
  • Dual-type catheters are provided with a mode for acquiring ultrasonic tomographic images only by IVUS, a mode for acquiring optical coherence tomographic images only by OCT, and a mode for acquiring both tomographic images by IVUS and OCT, and these modes can be switched between.
  • an ultrasound tomographic image and an optical coherence tomographic image will be referred to as an IVUS image and an OCT image, respectively.
  • IVUS images and OCT images are collectively referred to as tomographic images.
  • the diagnostic imaging apparatus 100 of this embodiment includes an intravascular examination apparatus 101, an angiography apparatus 102, an image processing apparatus 3, a display device 4, and an input device 5.
  • the intravascular examination apparatus 101 includes a diagnostic imaging catheter (catheter) 1 and an MDU (Motor Drive Unit) 2.
  • the diagnostic imaging catheter 1 is connected to the image processing device 3 via the MDU 2.
  • a display device 4 and an input device 5 are connected to the image processing device 3.
  • the display device 4 is, for example, a liquid crystal display or an organic EL (Electro Luminescence) display, etc.
  • the input device 5 is, for example, a keyboard, mouse, trackball, microphone, or the like.
  • the display device 4 and the input device 5 may be laminated integrally to form a touch panel.
  • the input device 5 and the image processing device 3 may be configured integrally.
  • the input device 5 may be a sensor that accepts gesture input, line-of-sight input, or the like.
  • the angiography device 102 is connected to the image processing device 3.
  • the angiography apparatus 102 is an angiography apparatus for capturing an image of a blood vessel using X-rays from outside the patient's body while injecting a contrast agent into the patient's blood vessel to obtain an angiography image, which is a fluoroscopic image of the blood vessel.
  • the angiography apparatus 102 includes an X-ray source and an X-ray sensor, and the X-ray sensor receives X-rays emitted from the X-ray source to image a patient's X-ray fluoroscopic image.
  • the diagnostic imaging catheter 1 is provided with a marker that does not transmit X-rays, and the position of the diagnostic imaging catheter 1 (marker) is visualized in the angiographic image.
  • the angiography device 102 outputs the angio image obtained by imaging to the image processing device 3, and the image is displayed on the display device 4 via the image processing device 3.
  • the display device 4 displays an angiographic image and a tomographic image captured using the diagnostic imaging catheter 1. Note that the angiography apparatus 102 is not essential in this embodiment.
  • FIG. 2 is an explanatory diagram for explaining the outline of the diagnostic imaging catheter 1.
  • the upper one-dot chain line area in FIG. 2 is an enlarged view of the lower one-dot chain line area.
  • the diagnostic imaging catheter 1 has a probe 11 and a connector portion 15 arranged at the end of the probe 11.
  • the probe 11 is connected to the MDU 2 via the connector portion 15.
  • the side far from the connector portion 15 of the diagnostic imaging catheter 1 is referred to as the distal end side, and the connector portion 15 side is referred to as the proximal end side.
  • the probe 11 has a catheter sheath 11a, and a guide wire insertion portion 14 through which a guide wire can be inserted is provided at the distal end thereof.
  • the guidewire insertion part 14 constitutes a guidewire lumen, receives a guidewire previously inserted into the blood vessel, and is used to guide the probe 11 to the affected part by the guidewire.
  • the catheter sheath 11a forms a continuous tube portion from the connection portion with the guide wire insertion portion 14 to the connection portion with the connector portion 15.
  • a shaft 13 is inserted through the catheter sheath 11a, and a sensor section 12 is connected to the distal end of the shaft 13.
  • the sensor section 12 has a housing 12d, and the distal end side of the housing 12d is formed in a hemispherical shape to suppress friction and catching with the inner surface of the catheter sheath 11a.
  • the sensor section 12 includes an ultrasonic transmission/reception unit 12a (hereinafter referred to as the IVUS sensor 12a) that transmits ultrasonic waves into the blood vessel and receives reflected waves from the blood vessel, and an optical transmitter/receiver 12b (hereinafter referred to as the OCT sensor 12b) that emits near-infrared light into the blood vessel and receives reflected light from inside the blood vessel.
  • an IVUS sensor 12a is provided on the distal end side of the probe 11
  • an OCT sensor 12b is provided on the proximal end side.
  • the IVUS sensor 12a and the OCT sensor 12b are attached so that the transmitting/receiving direction of the ultrasonic waves or the near-infrared light is approximately 90 degrees to the axial direction of the shaft 13 (i.e., the radial direction of the shaft 13). The IVUS sensor 12a and the OCT sensor 12b are desirably installed with a slight displacement from the radial direction so as not to receive reflected waves or reflected light from the inner surface of the catheter sheath 11a. In the present embodiment, for example, as indicated by the arrows in FIG. 2, the IVUS sensor 12a is attached so that it emits ultrasonic waves in a direction inclined toward the proximal side with respect to the radial direction, and the OCT sensor 12b is attached so that it emits near-infrared light in a direction inclined toward the distal side.
  • the optical transmitter/receiver 12b may be a sensor for OFDI (Optical Frequency Domain Imaging).
  • An electric signal cable (not shown) connected to the IVUS sensor 12a and an optical fiber cable (not shown) connected to the OCT sensor 12b are inserted into the shaft 13.
  • the probe 11 is inserted into the blood vessel from the tip side.
  • the sensor unit 12 and the shaft 13 can move forward and backward inside the catheter sheath 11a, and can rotate in the circumferential direction.
  • the sensor unit 12 and the shaft 13 rotate around the central axis of the shaft 13 as a rotation axis.
  • the diagnostic imaging catheter 1 is provided with markers that do not transmit X-rays so that the positional relationship between the IVUS image obtained by the IVUS sensor 12a or the OCT image obtained by the OCT sensor 12b and the angiographic image obtained by the angiography device 102 can be confirmed.
  • in the example shown in FIG. 2, a marker 14a is provided at the distal end portion of the catheter sheath 11a, for example at the guide wire insertion portion 14, and a marker 12c is provided on the shaft 13 side of the sensor portion 12.
  • when the diagnostic imaging catheter 1 configured in this manner is imaged with X-rays, an angiographic image in which the markers 14a and 12c are visualized is obtained.
  • the positions at which the markers 14a and 12c are provided are examples, the marker 12c may be provided on the shaft 13 instead of the sensor section 12, and the marker 14a may be provided at a location other than the distal end of the catheter sheath 11a.
  • the MDU 2 is a driving device to which the probe 11 (the diagnostic imaging catheter 1) is detachably attached via the connector portion 15.
  • the MDU 2 controls the operation of the diagnostic imaging catheter 1 inserted into the blood vessel by driving a built-in motor according to the operation of the medical staff. For example, the MDU 2 performs a pullback operation in which the sensor unit 12 and the shaft 13 inserted into the probe 11 are pulled toward the MDU 2 side at a constant speed and rotated in the circumferential direction.
  • the sensor unit 12 continuously scans the inside of the blood vessel at predetermined time intervals while rotating and moving from the distal end side to the proximal end side by the pullback operation, whereby a plurality of transverse tomographic images substantially perpendicular to the probe 11 are captured continuously at predetermined intervals.
  • the MDU 2 outputs to the image processing device 3 the reflected ultrasonic wave signal received by the IVUS sensor 12a and the reflected light received by the OCT sensor 12b.
  • the image processing device 3 acquires the reflected ultrasonic wave signal received by the IVUS sensor 12a via the MDU 2 and the reflected light received by the OCT sensor 12b.
  • the image processing device 3 generates ultrasonic line data from the signals of the reflected ultrasonic waves, and builds an ultrasonic tomographic image (IVUS image) in which a transverse layer of the blood vessel is imaged based on the generated ultrasonic line data.
  • the image processing device 3 generates optical line data based on interference light obtained by causing the reflected light to interfere with reference light obtained by, for example, separating light from the light source inside the image processing device 3, and constructs an optical tomographic image (OCT image) in which a transverse section of the blood vessel is imaged based on the generated optical line data.
  • the image processing device 3 may be configured to acquire an IVUS image and an OCT image respectively from the diagnostic imaging catheter 1 having the IVUS sensor 12a and the diagnostic imaging catheter 1 having the OCT sensor 12b.
  • FIG. 3 is an explanatory view showing a cross section of a blood vessel through which the sensor section 12 is passed
  • FIG. 4 is an explanatory view explaining a tomographic image.
  • the operation of the IVUS sensor 12a and the OCT sensor 12b in the blood vessel, and the ultrasonic line data and optical line data acquired by the IVUS sensor 12a and the OCT sensor 12b, will be described using FIG. 3.
  • the imaging core rotates about the central axis of the shaft 13 in the direction indicated by the arrow.
  • the IVUS sensor 12a transmits and receives ultrasonic waves at each rotation angle.
  • Lines 1, 2, ..., 512 indicate the transmission and reception directions of the ultrasonic waves at each rotation angle.
  • the IVUS sensor 12a intermittently transmits and receives ultrasonic waves 512 times while rotating 360 degrees (one rotation) in the blood vessel.
  • since the IVUS sensor 12a obtains data of one line in the transmitting/receiving direction by one transmission/reception of ultrasonic waves, 512 ultrasonic line data radially extending from the center of rotation can be obtained during one rotation.
  • the 512 ultrasonic line data are dense near the center of rotation but become sparse with distance from the center of rotation; therefore, the image processing device 3 can generate a two-dimensional ultrasonic tomographic image (IVUS image) as shown on the left side of FIG. 4 by interpolating the gaps between the lines.
  • the OCT sensor 12b also transmits and receives measurement light at each rotation angle; since the OCT sensor 12b likewise transmits and receives measurement light 512 times while rotating 360 degrees inside the blood vessel, 512 optical line data radially extending from the center of rotation can be obtained during one rotation.
  • similarly, the image processing device 3 can generate a two-dimensional optical coherence tomographic image (OCT image) as shown on the right side of FIG. 4 based on the 512 optical line data.
  • a two-dimensional tomographic image generated from 512 line data in this way is called one frame of an IVUS image or an OCT image. Since the sensor unit 12 scans while moving inside the blood vessel, one frame of the IVUS image or the OCT image is acquired at each position at which one rotation is completed within the movement range (one pullback range). That is, since one frame of the IVUS image or the OCT image is acquired at each position from the distal side to the proximal side of the probe 11 within the movement range, multiple frames of IVUS images or OCT images are acquired as shown in FIG. 4B.
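  • as a concrete illustration of this reconstruction step, the following is a minimal sketch in Python, assuming the line data arrive as a 512 × n_samples array and using OpenCV's inverse polar warp; the function name and array shapes are illustrative assumptions, not from the patent.

```python
import numpy as np
import cv2  # OpenCV, used here for the inverse polar warp


def lines_to_frame(line_data: np.ndarray, out_size: int = 512) -> np.ndarray:
    """Convert one rotation of polar line data into a 2-D tomographic frame.

    line_data: (512, n_samples) array; row i is the signal amplitude along
    the i-th transmission/reception angle, sample 0 at the rotation center.
    """
    polar = line_data.astype(np.float32)
    # WARP_INVERSE_MAP maps the (angle, radius) array back to Cartesian
    # coordinates; linear interpolation fills the gaps that widen with radius.
    frame = cv2.warpPolar(
        polar,
        (out_size, out_size),              # output width, height
        (out_size / 2.0, out_size / 2.0),  # rotation center in the frame
        out_size / 2.0,                    # maximum radius in pixels
        cv2.WARP_INVERSE_MAP | cv2.INTER_LINEAR,
    )
    return frame
```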
  • when the blood vessel contains calcified plaque, the ultrasonic line data include a high-brightness region corresponding to the surface portion of the calcified plaque and a region (blurred region) in which the brightness value is greatly reduced beyond the high-brightness region. In the IVUS image generated from the ultrasonic line data, as shown on the left side of FIG. 7, a high-brightness region indicating the surface of the calcified plaque and a dark, unclear region (blurred region) with reduced luminance values outside it are therefore included.
  • the optical line data, in contrast, include two bright raised regions corresponding to the fibrous tissue surrounding the calcified plaque, and a calcified plaque region between the two raised regions that is less bright than each raised region.
  • in the OCT image generated from the optical line data, a bright raised region with a high luminance value formed around the calcified plaque is shown, and the calcified plaque region is clearly demarcated by the raised region.
  • FIG. 5 is a block diagram showing a configuration example of the image processing device 3.
  • the image processing apparatus 3 is a computer and includes a control section 31 , a main storage section 32 , an input/output I/F 33 , an auxiliary storage section 34 and a reading section 35 .
  • the control unit 31 is configured using one or more arithmetic processing units such as a CPU (Central Processing Unit), an MPU (Micro-Processing Unit), a GPU (Graphics Processing Unit), a GPGPU (General-Purpose computing on Graphics Processing Units), or a TPU (Tensor Processing Unit).
  • the control unit 31 is connected to each hardware unit constituting the image processing apparatus 3 via a bus.
  • the main storage unit 32 is a temporary storage area such as an SRAM (Static Random Access Memory), a DRAM (Dynamic Random Access Memory), or a flash memory, and temporarily stores data necessary for the control unit 31 to perform arithmetic processing.
  • the input/output I/F 33 is an interface to which the intravascular examination device 101, the angiography device 102, the display device 4 and the input device 5 are connected.
  • the control unit 31 acquires, from the intravascular examination apparatus 101 via the input/output I/F 33, the reflected ultrasonic wave signal for the IVUS image and the reflected light for the OCT image.
  • the control unit 31 displays various images on the display device 4 by outputting various image signals, such as an IVUS image, an OCT image, and a composite image, to the display device 4 via the input/output I/F 33.
  • the control unit 31 receives information input to the input device 5 via the input/output I/F 33 .
  • the auxiliary storage unit 34 is a storage device such as a hard disk, EEPROM (Electrically Erasable Programmable ROM), flash memory, or the like.
  • the auxiliary storage unit 34 stores the program 3P executed by the control unit 31 and various data necessary for the processing of the control unit 31 .
  • the auxiliary storage unit 34 may be an external storage device connected to the image processing device 3 .
  • the program 3P may be written in the auxiliary storage unit 34 at the manufacturing stage of the image processing device 3, or the image processing device 3 may acquire the program distributed by a remote server device through communication and store it in the auxiliary storage unit 34. The program 3P may be recorded in a readable manner on a recording medium 30 such as a magnetic disk, an optical disk, or a semiconductor memory, and may be read from the recording medium 30 by the reading unit 35 and stored in the auxiliary storage unit 34.
  • the auxiliary storage unit 34 also stores the learning model 3M.
  • the learning model 3M is a machine learning model that has learned training data.
  • the learning model 3M is assumed to be used as a program module that constitutes artificial intelligence software.
  • the image processing device 3 may be a multicomputer including a plurality of computers. Further, the image processing device 3 may be a server client system, a cloud server, or a virtual machine virtually constructed by software. In the following description, it is assumed that the image processing apparatus 3 is one computer.
  • the control unit 31 of the image processing device 3 uses the learning model 3M to identify the blurred region of the IVUS image, and complements the identified blurred region in the IVUS image with the OCT image to generate a synthetic IVUS image (composite image).
  • FIG. 6 is an explanatory diagram showing an outline of the learning model 3M, and FIG. 7 is an explanatory diagram explaining a method of generating a synthetic IVUS image. The method of generating a composite IVUS image according to this embodiment will be specifically described with reference to FIGS. 6 and 7.
  • the learning model 3M is a machine learning model that receives an IVUS image and outputs information indicating the high-brightness region (surface portion) of the calcified plaque in the IVUS image. Specifically, the learning model 3M receives, as input, a plurality of frames of IVUS images that are continuous along the longitudinal direction of the blood vessel according to scanning by the diagnostic imaging catheter 1 . The learning model 3M identifies high intensity regions of calcified plaque in each successive frame of IVUS images along the time axis t.
  • the learning model 3M is, for example, a CNN (Convolutional Neural Network).
  • the learning model 3M recognizes, on a pixel-by-pixel basis, whether each pixel in the input image corresponds to the object (the high-brightness region of calcified plaque) by image recognition technology using semantic segmentation.
  • the learning model 3M has an input layer to which an IVUS image is input, an intermediate layer that extracts and restores image feature values, and an output layer that outputs information indicating the position and range of an object included in the IVUS image.
  • the learning model 3M is U-Net, for example.
  • the input layer of the learning model 3M has a plurality of nodes that receive input of pixel values of pixels included in the IVUS image, and passes the input pixel values to the intermediate layer.
  • the intermediate layer has a convolution layer (CONV layer) and a deconvolution layer (DECONV layer).
  • a convolutional layer is a layer that dimensionally compresses image data. Dimensional compression extracts the features of the object.
  • the deconvolution layer performs the deconvolution process to restore the original dimensions.
  • the restoration process in the deconvolution layer produces a binarized label image that indicates whether each pixel of the IVUS image is an object or not.
  • the output layer has multiple nodes that output label images.
  • the label image is, for example, an image in which pixels corresponding to high intensity regions of calcified plaque are class "1" and pixels corresponding to other images are class "0".
  • the learning model 3M can be generated by preparing training data in which IVUS images containing the object (high-brightness regions of calcified plaque) are associated with labels indicating the position of each object, and machine-learning an untrained neural network using the training data.
  • the control unit 31 inputs a plurality of IVUS images included in the training data to the input layer of the neural network model before learning, performs the arithmetic processing in the intermediate layer, and acquires the image output from the output layer.
  • the control unit 31 compares the image output from the output layer with the label image included in the training data, and optimizes the parameters used for the arithmetic processing in the intermediate layer so that the image output from the output layer approaches the label image.
  • the parameters are, for example, weights (coupling coefficients) between neurons.
  • the parameter optimization method is not particularly limited; for example, the control unit 31 optimizes the various parameters using the error backpropagation method. For the position of the object, for example, a judgment made by a doctor having specialized knowledge may be used as the correct label.
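  • the following is a minimal sketch of one such training step, assuming a PyTorch U-Net-style network with a single output channel; the model object, tensor shapes, and loss choice are assumptions for illustration and are not specified by the patent.

```python
import torch
import torch.nn as nn


def train_step(unet: nn.Module, optimizer: torch.optim.Optimizer,
               ivus_batch: torch.Tensor, label_batch: torch.Tensor) -> float:
    """One optimization step of the segmentation network.

    ivus_batch:  (B, 1, H, W) grayscale IVUS frames
    label_batch: (B, 1, H, W) per-pixel labels, 1 = high-brightness plaque surface
    """
    logits = unet(ivus_batch)  # per-pixel class scores from the output layer
    loss = nn.functional.binary_cross_entropy_with_logits(logits, label_batch.float())
    optimizer.zero_grad()
    loss.backward()   # error backpropagation through the intermediate layers
    optimizer.step()  # update the weights (coupling coefficients)
    return loss.item()
```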
  • the control unit 31 may input each frame image to the learning model 3M one by one for processing, or may input a plurality of continuous frame images at the same time and detect high-intensity regions of calcified plaque from the plurality of frame images simultaneously.
  • in this case, the control unit 31 configures the learning model 3M as a 3D-CNN (e.g., 3D U-Net) that handles three-dimensional input data.
  • the control unit 31 treats the frame images as three-dimensional data, with the coordinates of each two-dimensional frame image as two axes and the time (generation time point) t at which each frame image was acquired as the third axis.
  • the control unit 31 inputs a set of multiple frame images (for example, 16 frames) for a predetermined unit time to the learning model 3M, which outputs label images indicating the high-intensity regions of the calcified plaque for each of the multiple frame images at the same time. As a result, the high-intensity region of the calcified plaque can be detected in consideration of the preceding and following frame images that are consecutive in time series, and the detection accuracy can be improved.
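  • for illustration, a minimal sketch of preparing such a three-dimensional input in PyTorch; the stand-in module, frame count, and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stand-in for the real 3D U-Net: any module mapping (B, 1, T, H, W) -> (B, 1, T, H, W).
unet3d = nn.Conv3d(in_channels=1, out_channels=1, kernel_size=3, padding=1)

frames = [torch.randn(512, 512) for _ in range(16)]  # 16 consecutive IVUS frames (placeholders)
volume = torch.stack(frames, dim=0)                  # (time=16, H, W)
volume = volume.unsqueeze(0).unsqueeze(0)            # (batch=1, channel=1, 16, H, W)
label_volume = unet3d(volume)                        # one label image per input frame
```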
  • the configuration of the learning model 3M is not limited as long as it can identify high-intensity regions of calcified plaque contained in medical images.
  • the control unit 31 specifies the position and range of the blurred area outside the high-brightness area based on the high-brightness area of the calcified plaque detected by the learning model 3M. Specifically, the control unit 31 identifies a plurality of ultrasound line data corresponding to the high-brightness region based on the position of the high-brightness region in the circumferential direction of the IVUS image. The control unit 31 identifies a plurality of optical line data at the same angle (same line number) as each of the identified plurality of ultrasound line data.
  • the control unit 31 identifies the position (coordinates) of the calcified plaque region formed between the raised regions based on the change in the luminance value in the depth direction of each optical line data. For the ultrasonic line data and the optical line data at the same angle, the control unit 31 identifies, based on the position of the high-brightness region in the ultrasonic line data and the position of the calcified plaque region in the optical line data, the position of the region obtained by excluding the high-brightness region from the calcified plaque region, that is, the blurred region.
  • the control unit 31 performs the above-described processing on all of the ultrasonic line data corresponding to the high-brightness region in the IVUS image and interpolates between the lines by known interpolation processing, thereby identifying the position and range of the blurred region outside the high-brightness region of the calcified plaque. Note that the control unit 31 may generate a plurality of arbitrary lines extending radially from the rotation center of the probe 11 on the IVUS image and the OCT image, and perform the above process of specifying the position of the blurred region using the changes in the luminance values along the generated lines.
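  • a minimal per-line sketch of this exclusion step, assuming each line is a 1-D brightness array indexed by depth and that the plaque extent has already been located in the optical line at the same angle; the threshold and names are illustrative assumptions.

```python
import numpy as np
from typing import Optional, Tuple


def blurred_span(ivus_line: np.ndarray, oct_plaque_span: Tuple[int, int],
                 surface_thresh: float) -> Optional[Tuple[int, int]]:
    """Locate the blurred span along one angle.

    ivus_line:       brightness along the ultrasonic line at this angle
    oct_plaque_span: (start, end) depth indices of the calcified plaque region
                     found in the optical line at the same angle
    Returns (start, end) of the region obtained by excluding the IVUS
    high-brightness surface from the plaque extent, or None if no surface echo.
    """
    bright = np.where(ivus_line >= surface_thresh)[0]
    if bright.size == 0:
        return None
    surface_end = int(bright.max())           # deepest high-brightness sample
    start, end = oct_plaque_span
    blur_start = max(surface_end + 1, start)  # blurred region begins past the surface
    return (blur_start, end) if end > blur_start else None
```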
  • the method of specifying the blurred area is not limited to the above example.
  • for example, the control unit 31 may directly detect the blurred region caused by calcified plaque from the IVUS image by using a learning model 3M that receives an IVUS image as input and outputs the blurred region caused by the calcified plaque in the IVUS image.
  • the control unit 31 may detect a plaque region from an IVUS image using a learning model 3M that receives an IVUS image as input and outputs a plaque region including a high-brightness region and a blurred region in the IVUS image.
  • in this case, the control unit 31 may use the detection result of the plaque region and the change in the brightness value of the ultrasonic line data to identify, within the plaque region, the region with high brightness values (the high-brightness region) and the region with low brightness values (the blurred region).
  • the control unit 31 generates complementary information for complementing the identified blurred region in the IVUS image. Specifically, based on the position and range of the blurred region in the IVUS image, the control unit 31 identifies the position and range of the region corresponding to the blurred region in the OCT image acquired at the same time as the IVUS image. In the example of FIG. 7, the portion surrounded by the dashed line in the OCT image shown on the upper right side is the region corresponding to the blurred region. The control unit 31 acquires the pixel values of the region corresponding to the identified blurred region.
  • the complementary information is an image containing pixel values of a region corresponding to the blurred region of the OCT image with respect to the blurred region of the IVUS image, and is a partial image obtained by cutting out the region corresponding to the blurred region of the OCT image.
  • based on the complementary information, the control unit 31 generates a composite IVUS image by replacing the pixel values of the blurred region in the IVUS image with the acquired pixel values of the OCT image, as shown in the lower part of FIG. 7.
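  • in code, this replacement step amounts to a masked copy; the following is a minimal sketch, assuming the two frames are already co-registered and the blurred region is given as a boolean mask (function and variable names are illustrative).

```python
import numpy as np


def compose_by_mask(base: np.ndarray, donor: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Replace the masked pixels of `base` with the co-registered pixels of `donor`.

    base, donor: (H, W) frames acquired at the same position and already aligned
    mask:        (H, W) boolean mask of the blurred region
    """
    composite = base.copy()
    composite[mask] = donor[mask]  # complementary information fills the blurred region
    return composite

# Example: complement the IVUS frame with the OCT frame over the blurred region.
# composite_ivus = compose_by_mask(ivus_frame, oct_frame, blur_mask)
```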
  • the control unit 31 may correct the pixel values of the OCT image in the complementary information according to the pixel values of the high-brightness region in the IVUS image, and complement the blurred region in the IVUS image using the corrected pixel values.
  • the control unit 31 may also adjust the region of the OCT image used as the complementary information in consideration of the deviation of the imaging region between the IVUS image and the OCT image acquired at the same time. For example, the control unit 31 may set, in the OCT image, a region shifted by a predetermined angle in the circumferential direction from the region corresponding to the blurred region of the IVUS image as the target of the complementary information.
  • the control unit 31 may use the area corresponding to the blurred area of the OCT image shifted by a predetermined number of frames, such as the preceding and succeeding frames of the OCT image acquired at the same time as the IVUS image, as the target of the complementary information.
  • the control unit 31 may perform preprocessing for synchronizing the depths of the IVUS image and the OCT image, and then specify the position and range (size) of the region corresponding to the blurred region in the OCT image.
  • FIG. 8 is a flowchart showing an example of a processing procedure executed by the image processing device 3.
  • when the ultrasonic reflected wave signal and the reflected light are output from the intravascular examination apparatus 101, the control unit 31 of the image processing apparatus 3 executes the following processes according to the program 3P.
  • the control unit 31 of the image processing device 3 acquires a plurality of IVUS images and OCT images in chronological order via the intravascular examination device 101 (step S11). More specifically, the control unit 31 acquires an IVUS image and an OCT image generated based on reflected ultrasound wave signals and reflected light acquired via the intravascular examination apparatus 101 .
  • the IVUS image contains high intensity areas indicating the surface of the calcified plaque, and dark, indistinct blurred areas of reduced intensity values beyond the surface of the calcified plaque.
  • OCT images contain plaque regions that show calcified plaque at depth.
  • the control unit 31 identifies the blurred area in the acquired IVUS image (step S12).
  • a blurred region is a region formed radially outside the calcified plaque due to the calcified plaque and having a lower brightness than the calcified plaque.
  • FIG. 9 is a flowchart showing an example of a detailed procedure for specifying a blurred area.
  • the processing procedure shown in the flowchart of FIG. 9 corresponds to the details of step S12 in the flowchart of FIG.
  • the control unit 31 inputs the obtained IVUS image to the learning model 3M as input data (step S21).
  • the control unit 31 acquires the high-brightness region of the IVUS image output from the learning model 3M (step S22).
  • the control unit 31 identifies the blurred area formed radially outside the high-brightness area of the IVUS image based on the change in the brightness value of the IVUS image and the brightness value of the OCT image (step S23).
  • the control unit 31 then returns the process to step S13 in the flowchart of FIG. 8.
  • the control unit 31 acquires complementary information for complementing the blurred region of the IVUS image according to the identified blurred region (step S13). Specifically, based on the position and range of the blurred region in the IVUS image, the control unit 31 identifies the position and range of the region corresponding to the blurred region in the OCT image acquired at the same time as the IVUS image, and acquires the pixel values of the region corresponding to the identified blurred region.
  • based on the complementary information, the control unit 31 generates a composite IVUS image by replacing the pixel values of the blurred region in the IVUS image with the acquired pixel values of the OCT image (step S14). The control unit 31 displays a screen including the generated composite IVUS image on the display device 4 (step S15), and ends the series of processes.
  • FIG. 10 is a schematic diagram showing an example of a screen 40 displayed on the display device 4.
  • the screen 40 includes, for example, an IVUS image display section 41, an OCT image display section 42, and an input button 43 for inputting display/non-display of the composite IVUS image.
  • the IVUS image display unit 41 displays IVUS images acquired via the intravascular examination apparatus 101.
  • the OCT image display unit 42 displays OCT images acquired via the intravascular examination apparatus 101.
  • the input button 43 is displayed below the IVUS image display section 41, for example. The display position of the input button 43 is not limited, but it is preferably displayed near the IVUS image display section 41 or superimposed on the IVUS image display section 41.
  • the screen 40 is displayed via the display device 4.
  • a screen 40 containing a composite IVUS image includes a composite IVUS image display portion 44 .
  • the composite IVUS image display section 44 is displayed at the same position as the IVUS image display section 41 on the screen 40 instead of the IVUS image display section 41, for example.
  • the synthetic IVUS image display unit 44 displays a synthetic IVUS image obtained by complementing the unclear region of the IVUS image of the IVUS image display unit 41 using complementary information.
  • the composite IVUS image display unit 44 identifiably displays the blurred area, that is, the area obtained by complementing the original IVUS image.
  • the control unit 31 of the image processing device 3 adds a frame line to the edge of the partial image generated from the OCT image based on the complementary information, combines the partial image including the frame line with the IVUS image to generate a composite image, and displays it on the composite IVUS image display unit 44.
  • a user, such as a physician, can thereby easily distinguish between the complemented portion and the non-complemented portion, that is, the portion of the actual IVUS image itself.
  • the method of marking the complementary portion is not limited, and may be, for example, coloring, shading, or the like.
  • the control unit 31 displays either the IVUS image or the composite IVUS image according to the switching operation of the input button 43. Since the IVUS image and the synthesized IVUS image are displayed at the same position on the screen, the user can check both without moving the line of sight. Note that the control unit 31 may display the input button 43 in association with the IVUS image only when there is a composite IVUS image corresponding to the IVUS image. The display of the input button 43 lets the user recognize the presence of complementary information.
  • the screen 40 is an example and is not limited.
  • the screen 40 may include an IVUS image display section 41, an OCT image display section 42, and a composite IVUS image display section 44, and display all of the IVUS image, the OCT image, and the composite IVUS image.
  • the screen 40 may also include an angio image.
  • the screen 40 may include a three-dimensional image of calcified plaque generated by stacking a plurality of synthetic IVUS images (slice data) that are continuous in time series. A three-dimensional image can be generated, for example, by the voxel method.
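  • as a minimal illustration of the voxel approach, the sketch below stacks consecutive composite frames into a three-dimensional volume; the frame count and shapes are illustrative assumptions.

```python
import numpy as np

# Stack time-series composite IVUS slices (H, W) into a voxel volume (slices, H, W).
composite_frames = [np.zeros((512, 512), dtype=np.float32) for _ in range(200)]  # placeholders
volume = np.stack(composite_frames, axis=0)  # one voxel per (slice, row, column)
# The volume can then be rendered (surface or volume rendering) to show the
# three-dimensional extent of the calcified plaque along the pullback.
```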
  • the display device 4 is the output destination of the composite IVUS image, but it is of course possible to output the composite IVUS image to a device other than the display device 4 (for example, a personal computer).
  • as described above, the image processing apparatus 3 identifies the blurred region in the IVUS image according to the state of the IVUS image and the OCT image, complements the blurred region using the OCT image, which is clearer than the IVUS image, and displays the resulting composite IVUS image.
  • the image processing device 3 can suitably complement blurred regions formed at various positions in the IVUS image.
  • a composite IVUS image, i.e., an IVUS image that includes a portion of an OCT image, allows the user to efficiently recognize the information in both the IVUS image and the OCT image without comparative interpretation of the two images. Therefore, the image obtained by the diagnostic imaging catheter 1 can be easily interpreted, and an efficient diagnosis can be made.
  • This embodiment is particularly effective in examination of lower extremity blood vessels where calcified plaque is likely to occur.
  • the second embodiment differs from the first embodiment in the method of identifying the blurred area.
  • the differences from the first embodiment will be mainly described, and the same reference numerals will be given to the configurations common to the first embodiment, and detailed description thereof will be omitted.
  • the image processing apparatus 3 of the second embodiment stores learning models 3M including a first learning model 31M and a second learning model 32M in the auxiliary storage unit 34.
  • FIG. 11 is an explanatory diagram showing an overview of the learning model 3M in the second embodiment.
  • the first learning model 31M is a machine learning model that takes an IVUS image as input and outputs the blurred area in the IVUS image.
  • the second learning model 32M is a machine learning model that receives an OCT image as input and outputs a calcified plaque region in the OCT image. Since the first learning model 31M and the second learning model 32M have the same configuration, the configuration of the first learning model 31M will be described below.
  • the first learning model 31M is, for example, CNN.
  • the first learning model 31M recognizes on a pixel-by-pixel basis whether or not each pixel in the input image corresponds to an object region by image recognition technology using semantic segmentation.
  • the first learning model 31M has an input layer to which an IVUS image is input, an intermediate layer that extracts and restores the feature amount of the image, and an output layer that outputs information indicating the position and range of the object included in the IVUS image. have.
  • the first learning model 31M is U-Net, for example.
  • the input layer of the first learning model 31M has a plurality of nodes that receive input of pixel values of pixels included in the IVUS image, and passes the input pixel values to the intermediate layer.
  • the intermediate layer has a convolution layer (CONV layer) and a deconvolution layer (DECONV layer).
  • a convolutional layer is a layer that dimensionally compresses image data. Dimensional compression extracts the features of the object.
  • the deconvolution layer performs the deconvolution process to restore the original dimensions.
  • the restoration process in the deconvolution layer produces a binarized label image that indicates whether each pixel of the IVUS image is an object or not.
  • the output layer has one or more nodes that output label images.
  • the label image is, for example, an image in which pixels corresponding to blurred areas are class "1" and pixels corresponding to other images are class "0".
  • the first learning model 31M can be generated by preparing training data in which IVUS images containing the object (blurred region) are associated with labels indicating the position of each object, and machine-learning an untrained neural network using the training data.
  • the second learning model 32M has the same configuration as the first learning model 31M; it recognizes the calcified plaque region included in the input OCT image pixel by pixel, and outputs the generated label image.
  • the label image is, for example, an image in which pixels corresponding to calcified plaque regions are class "1" and pixels corresponding to other images are class "0".
  • the second learning model 32M can be generated by preparing training data in which OCT images containing the object (calcified plaque region) are associated with labels indicating the position of each object, and machine-learning an untrained neural network using the training data.
  • with the first learning model 31M configured as described above, as shown in FIG. 11, a label image showing the blurred region in units of pixels is obtained by inputting an IVUS image including a blurred region to the first learning model 31M.
  • likewise, a label image showing the calcified plaque region in units of pixels is obtained by inputting an OCT image containing a calcified plaque region into the second learning model 32M.
  • FIG. 12 is a flowchart showing an example of a detailed procedure for specifying a blurred area in the second embodiment.
  • the processing procedure shown in the flowchart of FIG. 12 corresponds to the details of step S12 in the flowchart of FIG.
  • the control unit 31 of the image processing device 3 inputs the IVUS image acquired in step S11 of FIG. 8 to the first learning model 31M as input data (step S31).
  • the control unit 31 acquires the blurred region of the IVUS image output from the first learning model 31M (step S32).
  • the control unit 31 inputs the OCT image acquired in step S11 of FIG. 8 to the second learning model 32M as input data (step S33).
  • the control unit 31 acquires the calcified plaque region of the OCT image output from the second learning model 32M (step S34).
  • the control unit 31 compares the positions of the blurred region of the IVUS image and the calcified plaque region of the OCT image, and determines whether the two positions match (step S35). In other words, the control unit 31 determines whether or not to complement the blurred region of the IVUS image using the information of the calcified plaque region of the OCT image.
  • when it is determined that the positions do not match (step S35: NO), the control unit 31 ends the process.
  • when the blurred region in the IVUS image is recognized by the first learning model 31M but no calcified plaque region is included at the corresponding position in the OCT image, the recognized blurred region is presumed not to be caused by calcified plaque, so no complementation with the OCT image is performed. In this case, since no blurred region is specified, the processing from step S13 onward in FIG. 8 is not executed; that is, the control unit 31 does not generate a composite IVUS image.
  • when it is determined that the positions of the blurred region of the IVUS image and the calcified plaque region of the OCT image match (step S35: YES), the control unit 31 identifies the blurred region output from the first learning model 31M as the region to be complemented (step S36).
  • the control unit 31 executes the processes from step S13 onward in FIG. 8 to generate a synthetic IVUS image that complements the blurred area of the IVUS image based on the calcified plaque area of the OCT image.
  • as described above, the image processing apparatus 3 generates a composite IVUS image only when the blurred region in the IVUS image and the calcified plaque region in the OCT image are correlated, so that inappropriate complementation of the blurred region can be prevented.
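  • a minimal sketch of the positional check in step S35, assuming both model outputs are boolean label masks of the same shape; the IoU criterion and threshold are illustrative assumptions, since the patent only requires that the positions match.

```python
import numpy as np


def regions_match(blur_mask: np.ndarray, plaque_mask: np.ndarray,
                  iou_thresh: float = 0.5) -> bool:
    """Step S35 sketch: do the two label masks occupy matching positions?

    blur_mask:   boolean label image from the first learning model (IVUS)
    plaque_mask: boolean label image from the second learning model (OCT)
    """
    inter = np.logical_and(blur_mask, plaque_mask).sum()
    union = np.logical_or(blur_mask, plaque_mask).sum()
    return bool(union > 0 and inter / union >= iou_thresh)
```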
  • in the third embodiment, lipid plaque is used as an example of a substance that attenuates light.
  • when the blood vessel contains lipid plaque, the light emitted radially outward from the probe 11 is strongly attenuated beyond the lipid plaque. Therefore, the OCT image includes a high-brightness region with a high brightness value corresponding to the surface portion of the lipid plaque, and a dark, unclear region (blurred region) with low brightness values shown outside the high-brightness region.
  • in contrast, the attenuation of ultrasonic waves by lipid plaque is small, so the lipid plaque can be well delineated to the deep portion radially outward.
  • an IVUS image contains a bright raised area with a high luminance value formed around the lipid plaque and a lipid plaque area clearly demarcated by the raised area.
  • the control unit 31 of the image processing device 3 generates a synthetic OCT image in which the blurred area in the OCT image is complemented by the lipid plaque area in the IVUS image.
  • the learning model 3M is a model that receives an OCT image, for example, and outputs a high-intensity region of lipid plaque in the OCT image.
  • FIGS. 13 and 14 are flowcharts showing an example of the processing procedure executed by the image processing device 3 according to the third embodiment.
  • the control unit 31 of the image processing apparatus 3 according to the third embodiment executes the same processes as steps S11 to S15 and steps S21 to S23 of the first embodiment, but differs from the first embodiment in the following points.
  • the control unit 31 identifies the blurred region included in the OCT image in step S42. Specifically, in step S51 the control unit 31 inputs the OCT image as input data to the learning model 3M, and in step S52 it acquires the high-brightness region of the OCT image output from the learning model 3M and identifies the blurred region formed outside it.
  • the control unit 31 acquires complementary information for complementing the blurred region of the OCT image in step S43. Specifically, based on the position and range of the blurred region in the OCT image, the control unit 31 identifies the position and range of the region corresponding to the blurred region in the IVUS image acquired at the same time as the OCT image, and acquires the pixel values of the region corresponding to the identified blurred region. In step S44, the control unit 31 generates a composite OCT image by replacing the pixel values of the blurred region in the OCT image with the acquired pixel values of the IVUS image.
  • a synthetic OCT image is an image in which a blurred region in the OCT image is combined with a portion of the lipid plaque region in the IVUS image.
  • according to the third embodiment, a synthesized OCT image in which the unclear region of the OCT image is complemented by the IVUS image is presented, so efficient interpretation of the IVUS image and the OCT image can be supported.
  • the fourth embodiment differs from the third embodiment in the method of identifying the blurred region in the OCT image.
  • in the following, the differences from the first to third embodiments will be mainly described, the same reference numerals will be assigned to the configurations common to the first to third embodiments, and detailed description thereof will be omitted.
  • the image processing apparatus 3 of the fourth embodiment generates a synthetic OCT image in which the blurred region in an OCT image containing a substance with low light transmittance, such as lipid plaque, is complemented with an IVUS image.
  • the image processing device 3 stores learning models 3M including a first learning model 31M and a second learning model 32M in the auxiliary storage unit 34 .
  • the first learning model 31M is a model that receives an OCT image as an input and outputs a blurred area in the OCT image.
  • the second learning model 32M is a model that receives an IVUS image as an input and outputs a lipid plaque region in the IVUS image.
  • FIG. 15 is a flowchart showing an example of a detailed procedure for specifying a blurred area in the fourth embodiment.
  • the control unit 31 of the image processing device 3 inputs the OCT image as input data to the first learning model 31M (step S61), and acquires the blurred region in the OCT image output from the first learning model 31M (step S62).
  • the control unit 31 also inputs the IVUS image as input data to the second learning model 32M (step S63), and acquires the lipid plaque region in the IVUS image output from the second learning model 32M (step S64). After that, the control unit 31 executes the same processing as in steps S35 and S36 of the second embodiment to identify the blurred region in the OCT image.
  • the image processing apparatus 3 generates a synthetic OCT image only when the blurred region in the OCT image and the lipid plaque region in the IVUS image are correlated, so that inappropriate interpolation of the blurred region can be prevented.
  • in the fifth embodiment, the image processing apparatus 3 includes both the first learning model 31M and the second learning model 32M described in the second embodiment and the first learning model 31M and the second learning model 32M described in the fourth embodiment; below, these are referred to as the first to fourth learning models.
  • a first learning model receives an IVUS image as an input and outputs a blurred region in the IVUS image.
  • the second learning model takes an OCT image as an input and outputs a calcified plaque region in the OCT image.
  • a third learning model receives an OCT image as an input and outputs a blurred region in the OCT image.
  • a fourth learning model takes an IVUS image as an input and outputs a lipid plaque region in the IVUS image.
  • the control unit 31 of the image processing device 3 inputs the IVUS image and the OCT image acquired via the intravascular examination device 101 to each of the four types of learning models described above, and acquires the output recognition result.
  • when the IVUS image includes a blurred region and a calcified plaque region is included at the position corresponding to the blurred region in the OCT image, the control unit 31 executes the complementing process described in the second embodiment.
  • the control unit 31 generates a composite IVUS image by interpolating the blurred region of the IVUS image with the OCT image.
  • when the OCT image includes a blurred region and a lipid plaque region is included at the position corresponding to the blurred region in the IVUS image, the control unit 31 performs the complementing process described in the fourth embodiment.
  • the control unit 31 generates a synthetic OCT image in which the blurred region of the OCT image is interpolated with the IVUS image.
  • the image processing apparatus 3 can present a composite image that has been appropriately complemented according to the state of the IVUS image and the OCT image.
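  • the combined flow can be pictured with the following sketch, which reuses the regions_match and compose_by_mask helpers from the earlier sketches; the model names and dictionary layout are illustrative assumptions, not an API defined by the patent.

```python
import numpy as np


def complement_both(ivus: np.ndarray, oct_img: np.ndarray, models: dict) -> dict:
    """Run the four recognizers, then complement whichever image has a blurred
    region corroborated by the other modality (fifth-embodiment sketch)."""
    ivus_blur = models["ivus_blur"](ivus)          # first learning model
    oct_plaque = models["oct_calcified"](oct_img)  # second learning model
    oct_blur = models["oct_blur"](oct_img)         # third learning model
    ivus_lipid = models["ivus_lipid"](ivus)        # fourth learning model

    results = {}
    if ivus_blur.any() and regions_match(ivus_blur, oct_plaque):
        results["composite_ivus"] = compose_by_mask(ivus, oct_img, ivus_blur)
    if oct_blur.any() and regions_match(oct_blur, ivus_lipid):
        results["composite_oct"] = compose_by_mask(oct_img, ivus, oct_blur)
    return results
```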
  • part or all of the processing executed by the image processing device 3 may be executed by an external server (not shown) communicably connected to the image processing device 3 .
  • the storage unit of the external server stores programs and learning models similar to the programs 3P and learning models 3M described above.
  • the external server acquires IVUS images and OCT images from the image processing device 3 via a network such as a LAN (Local Area Network) or the Internet.
  • the external server executes the same processing as the image processing apparatus 3 of each embodiment based on the acquired IVUS image and OCT image, and transmits the generated synthetic IVUS image or synthetic OCT image to the image processing apparatus 3 .
  • the image processing device 3 acquires the composite IVUS image or the composite OCT image transmitted from the external server, and causes the display device 4 to display it.
  • REFERENCE SIGNS LIST: 100 diagnostic imaging apparatus; 101 intravascular examination apparatus; 102 angiography apparatus; 1 diagnostic imaging catheter (catheter); 2 MDU; 3 image processing apparatus; 31 control unit; 32 main storage unit; 33 input/output I/F; 34 auxiliary storage unit; 35 reading unit; 3P program; 3M learning model; 31M first learning model; 32M second learning model; 30 recording medium; 4 display device; 5 input device

Abstract

Provided are a program and the like that can assist with efficient radiographic image interpretation. This program causes a computer to execute a process for: acquiring an ultrasound tomographic image and an optical coherence tomographic image generated on the basis of a signal detected by a catheter inserted into a luminal organ; identifying an indistinct region which is more indistinct in one of the ultrasound tomographic image or the optical coherence tomographic image than in the other of the ultrasound tomographic image or the optical coherence tomographic image; acquiring, on the basis of information regarding a region corresponding to the indistinct region in the other of the ultrasound tomographic image or the optical coherence tomographic image, complement information to complement the indistinct region; and generating, on the basis of the acquired complement information, the one of the ultrasound tomographic image or the optical coherence tomographic image which complements the indistinct region.

Description

Program, information processing method, and information processing device
The present invention relates to a program, an information processing method, and an information processing device.
Diagnostic imaging catheters that have both intravascular ultrasound (IVUS: Intra Vascular Ultra Sound) and optical coherence tomography (OCT: Optical Coherence Tomography) functions are being developed as medical devices used to acquire diagnostic images for diagnosing hollow organs such as blood vessels. Both ultrasound tomographic images and optical coherence tomographic images can be generated using such a diagnostic imaging catheter.
An ultrasound tomographic image is excellent in tissue penetration but has the disadvantage of limited resolution. An optical coherence tomographic image, on the other hand, can display structures near the vessel surface with very high resolution but has the disadvantage of shallow tissue penetration. Patent Document 1 therefore discloses a medical examination system that can achieve optimal diagnostic image quality by generating an image in which an ultrasound tomographic image and an optical coherence tomographic image are combined.
Patent Document 1: JP-A-2005-95624
The technology disclosed in Patent Document 1 simply generates an image in which a central cutout of the inner side of the optical coherence tomographic image is combined with the outer side of the ultrasound tomographic image, and it leaves room for improvement from the viewpoint of supporting efficient image interpretation.
The purpose of the present disclosure is to provide a program or the like that can support efficient interpretation.
A program according to one aspect of the present disclosure causes a computer to execute processing of: acquiring an ultrasound tomographic image and an optical coherence tomographic image generated based on a signal detected by a catheter inserted into a hollow organ; identifying a blurred region in one of the ultrasound tomographic image and the optical coherence tomographic image that is less clear than in the other; acquiring, based on information of the region corresponding to the blurred region in the other image, complementary information for complementing the blurred region; and generating, based on the acquired complementary information, the one of the ultrasound tomographic image and the optical coherence tomographic image in which the blurred region is complemented.
According to the present disclosure, it is possible to support efficient interpretation.
FIG. 1 is an explanatory diagram showing a configuration example of the diagnostic imaging apparatus.
FIG. 2 is an explanatory diagram outlining the diagnostic imaging catheter.
FIG. 3 is an explanatory diagram showing a cross section of a blood vessel through which the sensor section is passed.
FIG. 4A is an explanatory diagram for explaining tomographic images.
FIG. 4B is an explanatory diagram for explaining tomographic images.
FIG. 5 is a block diagram showing a configuration example of the image processing device.
FIG. 6 is an explanatory diagram showing an outline of a learning model.
FIG. 7 is an explanatory diagram explaining a method of generating a composite IVUS image.
FIG. 8 is a flowchart showing an example of a processing procedure executed by the image processing device.
FIG. 9 is a flowchart showing an example of a detailed procedure of processing for identifying a blurred region.
FIG. 10 is a schematic diagram showing an example of a screen displayed on the display device.
FIG. 11 is an explanatory diagram showing an outline of a learning model in a second embodiment.
FIG. 12 is a flowchart showing an example of a detailed procedure of processing for identifying a blurred region in the second embodiment.
FIG. 13 is a flowchart showing an example of a processing procedure executed by an image processing device according to a third embodiment.
FIG. 14 is a flowchart showing an example of a processing procedure executed by the image processing device according to the third embodiment.
FIG. 15 is a flowchart showing an example of a detailed procedure of processing for identifying a blurred region in a fourth embodiment.
Hereinafter, the program, information processing method, and information processing device of the present disclosure are described in detail with reference to the drawings showing their embodiments. In each of the following embodiments, intravascular examination of a subject using a catheter is described as an example; however, the hollow organ targeted by the catheter examination is not limited to a blood vessel and may be another hollow organ such as a bile duct, pancreatic duct, bronchus, or intestine.
(First embodiment)
FIG. 1 is an explanatory diagram showing a configuration example of the diagnostic imaging apparatus 100. In this embodiment, a diagnostic imaging apparatus using a dual-type catheter having both intravascular ultrasound (IVUS) and optical coherence tomography (OCT) functions is described. The dual-type catheter provides a mode for acquiring ultrasound tomographic images by IVUS alone, a mode for acquiring optical coherence tomographic images by OCT alone, and a mode for acquiring both tomographic images by IVUS and OCT, and these modes can be switched. Hereinafter, the ultrasound tomographic image and the optical coherence tomographic image are referred to as the IVUS image and the OCT image, respectively, and IVUS images and OCT images are collectively referred to as tomographic images.
The diagnostic imaging apparatus 100 of this embodiment includes an intravascular examination apparatus 101, an angiography apparatus 102, an image processing device 3, a display device 4, and an input device 5. The intravascular examination apparatus 101 includes a diagnostic imaging catheter (catheter) 1 and an MDU (Motor Drive Unit) 2. The diagnostic imaging catheter 1 is connected to the image processing device 3 via the MDU 2. The display device 4 and the input device 5 are connected to the image processing device 3.
The display device 4 is, for example, a liquid crystal display or an organic EL (Electro Luminescence) display, and the input device 5 is, for example, a keyboard, a mouse, a trackball, or a microphone. The display device 4 and the input device 5 may be laminated integrally to form a touch panel. The input device 5 and the image processing device 3 may also be configured as one unit. Furthermore, the input device 5 may be a sensor that accepts gesture input, line-of-sight input, or the like.
The angiography apparatus 102 is connected to the image processing device 3. The angiography apparatus 102 images the patient's blood vessels with X-rays from outside the body while a contrast agent is injected into the vessels, thereby obtaining angiographic images, which are fluoroscopic images of the vessels. The angiography apparatus 102 includes an X-ray source and an X-ray sensor; the X-ray sensor receives X-rays emitted from the X-ray source to capture an X-ray fluoroscopic image of the patient. The diagnostic imaging catheter 1 is provided with markers that do not transmit X-rays, so the position of the diagnostic imaging catheter 1 (its markers) is visualized in the angiographic image. The angiography apparatus 102 outputs the captured angiographic image to the image processing device 3, which displays it on the display device 4. The display device 4 thus displays the angiographic image together with the tomographic images captured using the diagnostic imaging catheter 1. Note that the angiography apparatus 102 is not essential in this embodiment.
FIG. 2 is an explanatory diagram outlining the diagnostic imaging catheter 1. The upper one-dot chain line region in FIG. 2 is an enlargement of the lower one-dot chain line region. The diagnostic imaging catheter 1 has a probe 11 and a connector portion 15 arranged at the end of the probe 11. The probe 11 is connected to the MDU 2 via the connector portion 15. In the following description, the side of the diagnostic imaging catheter 1 far from the connector portion 15 is referred to as the distal side, and the connector portion 15 side as the proximal side. The probe 11 includes a catheter sheath 11a, and a guide wire insertion portion 14 through which a guide wire can be inserted is provided at its distal end. The guide wire insertion portion 14 constitutes a guide wire lumen, receives a guide wire inserted into the blood vessel in advance, and is used to guide the probe 11 to the affected area along the guide wire. The catheter sheath 11a forms a continuous tube portion from the connection with the guide wire insertion portion 14 to the connection with the connector portion 15. A shaft 13 is inserted through the catheter sheath 11a, and a sensor section 12 is connected to the distal side of the shaft 13.
The sensor section 12 has a housing 12d, the distal side of which is formed in a hemispherical shape to suppress friction and catching against the inner surface of the catheter sheath 11a. Arranged in the housing 12d are an ultrasound transmitter/receiver 12a (hereinafter, IVUS sensor 12a), which transmits ultrasound into the blood vessel and receives reflected waves from inside the vessel, and an optical transmitter/receiver 12b (hereinafter, OCT sensor 12b), which transmits near-infrared light into the blood vessel and receives reflected light from inside the vessel. In the example shown in FIG. 2, the IVUS sensor 12a is provided on the distal side of the probe 11 and the OCT sensor 12b on the proximal side, the two being arranged on the central axis of the shaft 13 (on the two-dot chain line in FIG. 2) at a distance x from each other along the axial direction. In the diagnostic imaging catheter 1, the IVUS sensor 12a and the OCT sensor 12b are attached so that the direction approximately 90 degrees to the axial direction of the shaft 13 (the radial direction of the shaft 13) is the transmitting/receiving direction of the ultrasound or near-infrared light. The IVUS sensor 12a and the OCT sensor 12b are desirably attached slightly offset from the radial direction so as not to receive waves or light reflected from the inner surface of the catheter sheath 11a. In this embodiment, as indicated by the arrows in FIG. 2, the IVUS sensor 12a is attached so that its ultrasound irradiation direction is inclined toward the proximal side with respect to the radial direction, and the OCT sensor 12b so that its near-infrared irradiation direction is inclined toward the distal side. The optical transmitter/receiver 12b may also be a sensor for OFDI (Optical Frequency Domain Imaging).
An electric signal cable (not shown) connected to the IVUS sensor 12a and an optical fiber cable (not shown) connected to the OCT sensor 12b run inside the shaft 13. The probe 11 is inserted into the blood vessel from the distal side. The sensor section 12 and the shaft 13 can advance and retract inside the catheter sheath 11a and can rotate in the circumferential direction, rotating about the central axis of the shaft 13. In the diagnostic imaging apparatus 100, an imaging core composed of the sensor section 12 and the shaft 13 is used to measure the condition inside the blood vessel from an ultrasound tomographic image (IVUS image) or an optical coherence tomographic image (OCT image) captured from inside the vessel.
The diagnostic imaging catheter 1 has markers that do not transmit X-rays, in order to confirm the positional relationship between the IVUS image obtained by the IVUS sensor 12a or the OCT image obtained by the OCT sensor 12b and the angiographic image obtained by the angiography apparatus 102. In the example shown in FIG. 2, a marker 14a is provided at the distal end of the catheter sheath 11a, for example at the guide wire insertion portion 14, and a marker 12c is provided on the shaft 13 side of the sensor section 12. When the diagnostic imaging catheter 1 configured in this way is imaged with X-rays, an angiographic image in which the markers 14a and 12c are visualized is obtained. The positions of the markers 14a and 12c are an example; the marker 12c may be provided on the shaft 13 instead of the sensor section 12, and the marker 14a may be provided at a location other than the distal end of the catheter sheath 11a.
The MDU 2 is a driving device to which the probe 11 (diagnostic imaging catheter 1) is detachably attached via the connector portion 15. The MDU 2 controls the operation of the diagnostic imaging catheter 1 inserted into the blood vessel by driving a built-in motor according to operations by medical staff. For example, the MDU 2 performs a pullback operation in which the sensor section 12 and the shaft 13 inserted in the probe 11 are pulled toward the MDU 2 side at a constant speed while being rotated in the circumferential direction. During the pullback operation, the sensor section 12 rotates while moving from the distal side to the proximal side and continuously scans the inside of the blood vessel at predetermined time intervals, so that a plurality of transverse tomographic images approximately perpendicular to the probe 11 are captured continuously at predetermined intervals. The MDU 2 outputs the ultrasound reflected-wave signal received by the IVUS sensor 12a and the reflected light received by the OCT sensor 12b to the image processing device 3.
The image processing device 3 acquires, via the MDU 2, the ultrasound reflected-wave signal received by the IVUS sensor 12a and the reflected light received by the OCT sensor 12b. The image processing device 3 generates ultrasound line data from the reflected-wave signal and constructs an ultrasound tomographic image (IVUS image) of a transverse layer of the blood vessel based on the generated ultrasound line data. It also generates optical line data based on interference light produced by causing the reflected light to interfere with reference light obtained, for example, by splitting light from a light source inside the image processing device 3, and constructs an optical tomographic image (OCT image) of the transverse layer of the blood vessel based on the generated optical line data.
Although an example of the dual-type diagnostic imaging catheter 1 provided with both the IVUS sensor 12a and the OCT sensor 12b has been described above, this embodiment is not limited to this. The image processing device 3 may instead be configured to acquire IVUS images and OCT images from a diagnostic imaging catheter 1 having the IVUS sensor 12a and a separate diagnostic imaging catheter 1 having the OCT sensor 12b.
Here, the ultrasound line data and optical line data, and the IVUS images and OCT images constructed from them, are described. FIG. 3 is an explanatory diagram showing a cross section of a blood vessel through which the sensor section 12 is passed, and FIGS. 4A and 4B are explanatory diagrams for explaining the tomographic images.
First, the operation of the IVUS sensor 12a and the OCT sensor 12b inside the blood vessel and the ultrasound line data and optical line data they acquire are described with reference to FIG. 3. When imaging of tomographic images is started with the imaging core inserted in the blood vessel, the imaging core rotates about the central axis of the shaft 13 in the direction indicated by the arrow. At this time, the IVUS sensor 12a transmits and receives ultrasound at each rotation angle. Lines 1, 2, ..., 512 indicate the transmitting/receiving direction of the ultrasound at each rotation angle. In this embodiment, the IVUS sensor 12a intermittently transmits and receives ultrasound 512 times while rotating 360 degrees (one rotation) inside the blood vessel. Since one transmission/reception of ultrasound yields one line of data along the transmitting/receiving direction, 512 ultrasound lines of data extending radially from the rotation center are obtained during one rotation. These 512 ultrasound lines are dense near the rotation center but become sparser with distance from it. The image processing device 3 therefore generates the pixels in the empty space between the lines by well-known interpolation processing, producing a two-dimensional ultrasound tomographic image (IVUS image) such as that shown on the left side of FIG. 4A.
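The scan conversion just described, in which 512 radial lines are remapped onto a Cartesian grid and the gaps between lines are filled by interpolation, can be illustrated with a short sketch. The following Python code is not the apparatus's actual implementation: the assumption that line i is acquired at angle 2πi/512, the output size, and the bilinear interpolation are all illustrative choices.

```python
import numpy as np

def scan_convert(line_data: np.ndarray, out_size: int = 512) -> np.ndarray:
    """Convert polar line data (n_lines, n_samples) into a square Cartesian
    tomographic image. line_data[i, r] is the echo intensity of line i at
    depth sample r; line i is assumed to lie at angle 2*pi*i/n_lines."""
    n_lines, n_samples = line_data.shape
    half = out_size / 2.0
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    dx, dy = xs - half, ys - half
    radius = np.sqrt(dx ** 2 + dy ** 2) * (n_samples / half)  # pixel -> depth sample
    angle = np.mod(np.arctan2(dy, dx), 2.0 * np.pi)
    line_idx = angle / (2.0 * np.pi) * n_lines

    # Bilinear interpolation between the two nearest lines / depth samples.
    i0 = np.floor(line_idx).astype(int) % n_lines
    i1 = (i0 + 1) % n_lines                      # neighboring line wraps around
    wi = line_idx - np.floor(line_idx)
    r0 = np.clip(np.floor(radius).astype(int), 0, n_samples - 1)
    r1 = np.clip(r0 + 1, 0, n_samples - 1)
    wr = np.clip(radius - np.floor(radius), 0.0, 1.0)

    img = ((1 - wi) * ((1 - wr) * line_data[i0, r0] + wr * line_data[i0, r1])
           + wi * ((1 - wr) * line_data[i1, r0] + wr * line_data[i1, r1]))
    img[radius >= n_samples - 1] = 0.0           # beyond the imaged depth
    return img

# Example: 512 lines of dummy data with 256 depth samples per line.
ivus_image = scan_convert(np.random.rand(512, 256))
```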
Similarly, the OCT sensor 12b also transmits and receives measurement light at each rotation angle. Since the OCT sensor 12b likewise transmits and receives measurement light 512 times while rotating 360 degrees inside the blood vessel, 512 optical lines of data extending radially from the rotation center are obtained during one rotation. For the optical line data as well, the image processing device 3 generates the pixels in the empty space between the lines by well-known interpolation processing, producing a two-dimensional optical coherence tomographic image (OCT image) such as that shown on the right side of FIG. 4A.
A two-dimensional tomographic image generated from 512 lines of data in this way is called one frame of an IVUS image or OCT image. Since the sensor section 12 scans while moving inside the blood vessel, one frame of IVUS image or OCT image is acquired at each position of one rotation within the movement range (the range of one pullback). That is, one frame of IVUS image or OCT image is acquired at each position from the distal side to the proximal side of the probe 11 within the movement range, so that multiple frames of IVUS images or OCT images are acquired within the movement range, as shown in FIG. 4B.
As shown in FIG. 3, when a substance that attenuates ultrasound (a substance with a high elastic modulus) is present on line N, in the IVUS method most of the ultrasound is reflected by this strong reflector, and the ultrasound transmitted radially outward of the substance, as seen from the probe 11, is greatly attenuated. In the following, calcified plaque is used as an example of such an ultrasound-attenuating substance. In this case, the ultrasound line data contains a high-brightness region corresponding to the surface portion of the calcified plaque and, beyond the high-brightness region, a region in which the brightness value drops greatly (a blurred region). The IVUS image generated from this ultrasound line data then contains, as shown on the left side of FIG. 4A, a bright high-brightness region with high brightness values corresponding to the surface portion of the calcified plaque, and a dark, indistinct region with lowered brightness values (the blurred region) appearing outside the high-brightness region.
In the OCT method, on the other hand, even when calcified plaque is present on line N, no attenuation of the light occurs due to the elastic modulus, so the calcified plaque can be depicted well down to the radially outer depths. In this case, the optical line data contains two brightness rise regions corresponding to the fibrous tissue surrounding the calcified plaque and, between these two rise regions, a calcified plaque region whose brightness is lower than that of each rise region. The OCT image generated from this optical line data then contains, as shown on the right side of FIG. 4A, bright rise regions with high brightness values formed around the calcified plaque, and a calcified plaque region whose boundary is clearly delineated by those rise regions.
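To make the line-profile behavior described above concrete, the following sketch shows one way the two profiles could be examined: along an ultrasound line, find the strong surface echo followed by a sharp intensity drop (the start of the shadowed, blurred part); along an optical line, find the lower-intensity interval between the two bright rise regions. Intensities are assumed normalized to [0, 1], and the thresholds are illustrative values, not figures from the specification.

```python
import numpy as np

def ivus_shadow_start(us_line: np.ndarray, hi: float = 0.8, lo: float = 0.2):
    """Return the depth index where the acoustic shadow begins: the first
    sample after a strong echo (>= hi) at which the intensity has fallen
    below lo. Returns None if the line has no strong reflector."""
    peaks = np.flatnonzero(us_line >= hi)
    if peaks.size == 0:
        return None
    after = us_line[peaks[0]:]
    drops = np.flatnonzero(after < lo)
    return int(peaks[0] + drops[0]) if drops.size else None

def oct_plaque_interval(oct_line: np.ndarray, rise: float = 0.6):
    """Return (start, end) of the lower-intensity interval lying between
    the two bright rise regions that border a calcified plaque."""
    bright = np.flatnonzero(oct_line >= rise)
    if bright.size < 2:
        return None
    gaps = np.flatnonzero(np.diff(bright) > 1)  # gap between two bright runs
    if gaps.size == 0:
        return None
    start = int(bright[gaps[0]] + 1)            # just past the first rise region
    end = int(bright[gaps[0] + 1])              # start of the second rise region
    return start, end
```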
In this way, due to the properties of a substance appearing in two or more tomographic images, a blurred region, in which the information on the substance is less clear (more indistinct) than in the OCT image (the other tomographic image), is formed in part of the IVUS image (one of the tomographic images). In this embodiment, a composite image that is easy to interpret is provided by complementing the blurred region of the IVUS image with the clear OCT image through the processing described below.
FIG. 5 is a block diagram showing a configuration example of the image processing device 3. The image processing device 3 is a computer and includes a control unit 31, a main storage unit 32, an input/output I/F 33, an auxiliary storage unit 34, and a reading unit 35.
The control unit 31 is configured using one or more arithmetic processing units such as CPUs (Central Processing Units), MPUs (Micro-Processing Units), GPUs (Graphics Processing Units), GPGPUs (General-purpose computing on graphics processing units), or TPUs (Tensor Processing Units). The control unit 31 is connected via a bus to the hardware units constituting the image processing device 3.
The main storage unit 32 is a temporary storage area such as an SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory, and temporarily stores data necessary for the control unit 31 to execute arithmetic processing.
The input/output I/F 33 is an interface to which the intravascular examination apparatus 101, the angiography apparatus 102, the display device 4, and the input device 5 are connected. Via the input/output I/F 33, the control unit 31 acquires from the intravascular examination apparatus 101 the ultrasound reflected-wave signal for IVUS images and the reflected light for OCT images. The control unit 31 also displays various images on the display device 4 by outputting image signals such as IVUS images, OCT images, and composite images to the display device 4 via the input/output I/F 33. Furthermore, the control unit 31 receives, via the input/output I/F 33, information entered into the input device 5.
The auxiliary storage unit 34 is a storage device such as a hard disk, EEPROM (Electrically Erasable Programmable ROM), or flash memory. The auxiliary storage unit 34 stores the program 3P executed by the control unit 31 and various data necessary for the processing of the control unit 31. The auxiliary storage unit 34 may also be an external storage device connected to the image processing device 3. The program 3P may be written into the auxiliary storage unit 34 at the manufacturing stage of the image processing device 3, or the image processing device 3 may acquire, by communication, a program distributed by a remote server device and store it in the auxiliary storage unit 34. The program 3P may also be readably recorded on a recording medium 30 such as a magnetic disk, optical disc, or semiconductor memory, and the reading unit 35 may read it from the recording medium 30 and store it in the auxiliary storage unit 34.
The auxiliary storage unit 34 also stores the learning model 3M. The learning model 3M is a machine learning model that has been trained on training data. The learning model 3M is expected to be used as a program module constituting artificial intelligence software.
The image processing device 3 may be a multicomputer including a plurality of computers. The image processing device 3 may also be a server-client system, a cloud server, or a virtual machine virtually constructed by software. In the following description, the image processing device 3 is assumed to be a single computer.
Using the learning model 3M, the control unit 31 of the image processing device 3 identifies the blurred region of an IVUS image and generates a composite IVUS image (composite image) by complementing the identified blurred region of the IVUS image with the OCT image. FIG. 6 is an explanatory diagram showing an outline of the learning model 3M, and FIG. 7 is an explanatory diagram explaining the method of generating the composite IVUS image. The method of generating the composite IVUS image in this embodiment is described in detail with reference to FIGS. 6 and 7.
As shown in FIG. 6, the learning model 3M is a machine learning model that takes an IVUS image as input and outputs information indicating the high-brightness region (surface portion) of calcified plaque in that IVUS image. Specifically, the learning model 3M receives as input a plurality of frames of IVUS images that are consecutive along the longitudinal direction of the blood vessel according to the scanning of the diagnostic imaging catheter 1. The learning model 3M identifies the high-brightness region of calcified plaque in each frame of the IVUS images consecutive along the time axis t.
The learning model 3M is, for example, a CNN (Convolutional Neural Network). Using image recognition technology based on semantic segmentation, the learning model 3M recognizes, pixel by pixel, whether each pixel in the input image corresponds to the object region (the high-brightness region of calcified plaque). The learning model 3M has an input layer to which the IVUS image is input, an intermediate layer that extracts and restores the feature values of the image, and an output layer that outputs information indicating the position and range of the object included in the IVUS image. The learning model 3M is, for example, U-Net.
The input layer of the learning model 3M has a plurality of nodes that receive the pixel values of the pixels included in the IVUS image and passes the input pixel values to the intermediate layer. The intermediate layer has convolution layers (CONV layers) and deconvolution layers (DECONV layers). A convolution layer dimensionally compresses the image data; this dimensional compression extracts the feature values of the object. A deconvolution layer performs deconvolution processing to restore the original dimensions. The restoration processing in the deconvolution layers generates a binarized label image indicating whether or not each pixel of the IVUS image belongs to the object. The output layer has a plurality of nodes that output the label image. The label image is, for example, an image in which pixels corresponding to the high-brightness region of calcified plaque are of class "1" and pixels corresponding to the rest of the image are of class "0".
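As a rough illustration of this encoder-decoder structure, the following PyTorch sketch compresses a single-channel IVUS frame with convolution layers and restores the original dimensions with deconvolution (transposed-convolution) layers, yielding per-pixel scores that can be binarized into a label image. It is deliberately far smaller than an actual U-Net (and omits U-Net's skip connections); every layer size here is an assumption.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder in the spirit of the described model:
    convolution layers compress the image (extracting feature values),
    deconvolution layers restore the original dimensions, and the output
    is a per-pixel score for the object class."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))  # logits with the input's H x W

model = TinySegNet()
ivus = torch.randn(1, 1, 512, 512)            # one grayscale IVUS frame
label = torch.sigmoid(model(ivus)) > 0.5      # binarized label image
```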
The learning model 3M can be generated by preparing training data in which IVUS images containing the object (the high-brightness region of calcified plaque) are associated with labels indicating the position of each object, and machine-learning an untrained neural network using that training data. Specifically, the control unit 31 inputs the IVUS images included in the training data to the input layer of the neural network model before training and acquires the image output from the output layer after arithmetic processing in the intermediate layer. The control unit 31 then compares the image output from the output layer with the label image included in the training data, and optimizes the parameters used in the arithmetic processing of the intermediate layer so that the output image approaches the label image. These parameters are, for example, the weights (coupling coefficients) between neurons. The parameter optimization method is not particularly limited; for example, the control unit 31 optimizes the parameters using error backpropagation. For the position of the object, a judgment made by a doctor with specialized knowledge may be used as the correct label.
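A minimal sketch of this training procedure might look as follows, reusing the TinySegNet sketch above and random stand-ins for the annotated training pairs. Binary cross-entropy is one plausible loss for comparing the output with a binary label image; the specification does not name a specific loss function.

```python
import torch
import torch.nn as nn

# Hypothetical training pairs: IVUS frames and expert-annotated label
# images (1 = high-brightness calcified-plaque pixel, 0 = other pixels).
images = torch.randn(8, 1, 512, 512)
labels = torch.randint(0, 2, (8, 1, 512, 512)).float()

model = TinySegNet()                  # encoder-decoder from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()    # compares output scores with the label image

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)  # how far the output is from the label
    loss.backward()                          # error backpropagation
    optimizer.step()                         # update the weights (coupling coefficients)
```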
The control unit 31 may input the frame images to the learning model 3M one at a time for processing, but it is preferable to input a plurality of consecutive frame images simultaneously so that the high-brightness regions of calcified plaque can be detected from the plurality of frame images at the same time. For example, the control unit 31 configures the learning model 3M as a 3D-CNN (for example, 3D U-Net) that handles three-dimensional input data. The control unit 31 then treats the data as three-dimensional data in which the coordinates of each two-dimensional frame image form two axes and the time (generation time point) t at which each frame image was acquired forms one axis. The control unit 31 inputs a set of frame images for a predetermined unit of time (for example, 16 frames) to the learning model 3M and obtains, for each of those frame images simultaneously, an image in which the high-brightness region of calcified plaque is labeled. This makes it possible to detect the high-brightness region of calcified plaque while also taking the preceding and following frame images in the time series into account, improving detection accuracy.
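The arrangement of consecutive frames into three-dimensional input data could be sketched as follows. The 16-frame unit matches the example in the text, while the (batch, channel, t, H, W) tensor layout is an assumption based on common 3D-CNN conventions.

```python
import torch

# 16 consecutive IVUS frames (H x W), i.e., one unit of pullback time.
frames = [torch.randn(512, 512) for _ in range(16)]

# Stack along the time axis t -> (batch, channel, t, H, W) for a 3D-CNN.
volume = torch.stack(frames).unsqueeze(0).unsqueeze(0)
print(volume.shape)  # torch.Size([1, 1, 16, 512, 512])

# A 3D convolution sees neighboring frames as well, so the label produced
# for each frame can take the preceding/following frames into account.
conv3d = torch.nn.Conv3d(1, 8, kernel_size=3, padding=1)
features = conv3d(volume)  # -> (1, 8, 16, 512, 512)
```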
According to the learning model 3M trained in this way, inputting an IVUS image into the learning model 3M as shown in FIG. 6 yields a label image indicating the high-brightness region of calcified plaque on a pixel-by-pixel basis.
Although an example in which the learning model 3M is a CNN has been described above, the configuration of the learning model 3M is not limited to this, as long as it can identify the high-brightness region of calcified plaque contained in a medical image.
As shown in the upper left part of FIG. 7, the control unit 31 identifies the position and range of the blurred region lying outside the high-brightness region, based on the high-brightness region of calcified plaque detected by the learning model 3M. Specifically, based on the position of the high-brightness region in the circumferential direction of the IVUS image, the control unit 31 identifies the plurality of ultrasound lines of data corresponding to the high-brightness region. The control unit 31 identifies the optical lines of data at the same angles (same line numbers) as the identified ultrasound lines. Based on the change in brightness value along the depth direction of each optical line, the control unit 31 identifies the position (coordinates) of the calcified plaque region formed between the rise regions. For the ultrasound line and the optical line at the same angle, the control unit 31 identifies the position of the blurred region, i.e., the region obtained by removing the high-brightness region from the calcified plaque region, based on the position of the high-brightness region in the ultrasound line data and the position of the calcified plaque region in the optical line data.
The control unit 31 performs the above processing on all the ultrasound lines corresponding to the high-brightness region in the IVUS image and interpolates between the lines by well-known interpolation processing, thereby identifying the position and range of the blurred region outside the high-brightness region of the calcified plaque. Alternatively, the control unit 31 may generate a plurality of arbitrary lines extending radially from the rotation center of the probe 11 on the IVUS image and the OCT image, and perform the above processing for identifying the position of the blurred region using the changes in brightness value along those generated lines.
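Putting the per-line steps together, a rough sketch of building a polar-coordinate mask of the blurred region might look like this. It reuses the hypothetical ivus_shadow_start and oct_plaque_interval helpers from the earlier sketch and assumes the ultrasound and optical line data are already depth-synchronized.

```python
import numpy as np

def blurred_region_mask(us_lines: np.ndarray, oct_lines: np.ndarray,
                        hi_lines) -> np.ndarray:
    """Build a polar mask (n_lines, n_samples) of the blurred region.

    us_lines / oct_lines: depth-synchronized line data of the same shape.
    hi_lines: indices of the lines crossing the detected high-brightness
    region (e.g., derived from the learning model's label image).
    """
    mask = np.zeros(us_lines.shape, dtype=bool)
    for i in hi_lines:
        interval = oct_plaque_interval(oct_lines[i])  # plaque extent on the OCT line
        shadow = ivus_shadow_start(us_lines[i])       # where the IVUS shadow begins
        if interval is None or shadow is None:
            continue
        start, end = interval
        # Blurred part = plaque extent minus the bright surface echo.
        mask[i, max(start, shadow):end] = True
    return mask
```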
The method of identifying the blurred region is not limited to the above example. For example, the control unit 31 may directly detect the blurred region caused by calcified plaque from the IVUS image, using a learning model 3M that takes the IVUS image as input and outputs the blurred region caused by calcified plaque in that image. Alternatively, the control unit 31 may detect a plaque region from the IVUS image using a learning model 3M that takes the IVUS image as input and outputs a plaque region including the high-brightness region and the blurred region. In this case, the control unit 31 may use the detection result of the plaque region and the change in brightness value of the ultrasound line data to identify, within the plaque region, the region with low brightness values lying outside the region with high brightness values (the high-brightness region), i.e., the blurred region.
Next, the control unit 31 generates, for the identified blurred region of the IVUS image, complementary information for complementing the blurred region. Specifically, based on the position and range of the blurred region in the IVUS image, the control unit 31 identifies the position and range of the region corresponding to the blurred region in the OCT image acquired at the same time point as that IVUS image. In the example of FIG. 7, the portion enclosed by the broken line in the OCT image shown on the upper right is the region corresponding to the blurred region. The control unit 31 acquires the pixel values of the identified region corresponding to the blurred region. That is, the complementary information is an image containing, for the blurred region of the IVUS image, the pixel values of the region of the OCT image corresponding to that blurred region, i.e., a partial image cut out of the OCT image at that region. Based on the complementary information, the control unit 31 generates a composite IVUS image in which the pixel values of the blurred region of the IVUS image are replaced with the acquired pixel values of the OCT image, as shown in the lower part of FIG. 7.
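Once the blurred region is available as a pixel mask in image coordinates, the replacement itself reduces to a masked copy. A minimal sketch, assuming the OCT image is already registered to the IVUS image (same center, orientation, and depth scale), is shown below.

```python
import numpy as np

def make_composite(ivus_img: np.ndarray, oct_img: np.ndarray,
                   blur_mask: np.ndarray) -> np.ndarray:
    """Replace the pixel values of the blurred region of the IVUS image
    with the pixel values of the corresponding region of the OCT image.
    All three arrays share the same shape."""
    return np.where(blur_mask, oct_img, ivus_img)

# Example with dummy images and an illustrative rectangular mask.
ivus_img = np.random.rand(512, 512)
oct_img = np.random.rand(512, 512)
blur_mask = np.zeros((512, 512), dtype=bool)
blur_mask[100:200, 300:400] = True
composite = make_composite(ivus_img, oct_img, blur_mask)
```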
The control unit 31 may correct the pixel values of the OCT image in the complementary information according to the pixel values of the high-brightness region of the IVUS image and complement the blurred region of the IVUS image using the corrected pixel values. The control unit 31 may also adjust the region of the OCT image used for the complementary information, taking into account the misalignment of the imaging regions between the IVUS image and the OCT image acquired at the same time point. For example, the control unit 31 may use as the complementary information a region of the OCT image shifted by a predetermined angle in the circumferential direction from the region corresponding to the blurred region of the IVUS image. The control unit 31 may also use as the complementary information the region corresponding to the blurred region in an OCT image shifted by a predetermined number of frames, such as the frame before or after the OCT image acquired at the same time point as the IVUS image.
The above describes an example in which the complementing processing is performed on the assumption that the depths of the IVUS image and the OCT image are synchronized in advance. When the depths of the IVUS image and the OCT image differ, the control unit 31 may execute preprocessing for synchronizing the depths of the two images, or may identify the position and range (size) of the region corresponding to the blurred region in the OCT image according to a prestored correspondence between the depths of IVUS images and OCT images.
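One simple form such depth synchronization could take is resampling the OCT image so that one pixel spans the same physical distance as one IVUS pixel. The sketch below uses SciPy's zoom for this; the millimeter-per-pixel values are hypothetical parameters, not figures from the specification.

```python
import numpy as np
from scipy.ndimage import zoom

def synchronize_depth(oct_img: np.ndarray, oct_mm_per_px: float,
                      ivus_mm_per_px: float) -> np.ndarray:
    """Resample the OCT image so one pixel spans the same physical
    distance as one IVUS pixel (parameter names are illustrative;
    actual pixel pitches would come from the imaging system)."""
    factor = oct_mm_per_px / ivus_mm_per_px
    return zoom(oct_img, factor, order=1)  # bilinear resampling

# Example: OCT sampled at 0.01 mm/px, IVUS at 0.02 mm/px -> shrink by half.
oct_synced = synchronize_depth(np.random.rand(1024, 1024), 0.01, 0.02)
print(oct_synced.shape)  # (512, 512)
```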
FIG. 8 is a flowchart showing an example of the processing procedure executed by the image processing device 3. When the ultrasound reflected-wave signal and the reflected light are output from the intravascular examination apparatus 101, the control unit 31 of the image processing device 3 executes the following processing according to the program 3P.
The control unit 31 of the image processing device 3 acquires a plurality of IVUS images and OCT images in chronological order via the intravascular examination apparatus 101 (step S11). In detail, the control unit 31 acquires IVUS images and OCT images generated based on the ultrasound reflected-wave signal and the reflected light acquired via the intravascular examination apparatus 101. The IVUS image contains a high-brightness region indicating the surface of calcified plaque and, beyond the plaque surface, a dark and indistinct blurred region with lowered brightness values. The OCT image contains a plaque region showing the calcified plaque down to its depths.
The control unit 31 identifies the blurred region in the acquired IVUS image (step S12). The blurred region is a region formed radially outside the calcified plaque and caused by it, with lower brightness than the calcified plaque.
FIG. 9 is a flowchart showing an example of the detailed procedure of the processing for identifying the blurred region. The processing procedure shown in the flowchart of FIG. 9 corresponds to the details of step S12 in the flowchart of FIG. 8.
The control unit 31 inputs the acquired IVUS image to the learning model 3M as input data (step S21). The control unit 31 acquires the high-brightness region of the IVUS image output from the learning model 3M (step S22). Based on the changes in the brightness values of the IVUS image and the OCT image, the control unit 31 identifies the blurred region formed radially outside the high-brightness region of the IVUS image (step S23). The control unit 31 then returns the processing to step S13 in the flowchart of FIG. 8.
Returning to FIG. 8, the description continues. The control unit 31 acquires complementary information for complementing the blurred region of the IVUS image according to the identified blurred region (step S13). Specifically, based on the position and range of the blurred region in the IVUS image, the control unit 31 identifies the position and range of the region corresponding to the blurred region in the OCT image acquired at the same time point as that IVUS image, and acquires the pixel values of the identified region.
Based on the complementary information, the control unit 31 generates a composite IVUS image in which the pixel values of the blurred region of the IVUS image are replaced with the acquired pixel values of the OCT image (step S14). The control unit 31 displays a screen including the generated composite IVUS image on the display device 4 (step S15) and ends the series of processing.
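Chaining the earlier sketches, steps S11 through S14 for one frame pair could be outlined as follows (the display of step S15 is left to the caller). This is an illustrative pipeline under the same assumptions as the sketches it reuses, not the device's actual control flow.

```python
def process_frame(us_lines, oct_lines, hi_lines):
    """Sketch of steps S11-S14 for one frame pair, chaining the earlier
    hypothetical helpers scan_convert, blurred_region_mask, and
    make_composite. S15 (display) is left to the caller."""
    ivus_img = scan_convert(us_lines)                         # S11: build the IVUS frame
    oct_img = scan_convert(oct_lines)                         # S11: build the OCT frame
    blur_polar = blurred_region_mask(us_lines, oct_lines,
                                     hi_lines)                # S12: identify blurred region
    blur_mask = scan_convert(blur_polar.astype(float)) > 0.5  # S13: map it into image space
    return make_composite(ivus_img, oct_img, blur_mask)       # S14: complement and composite
```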
FIG. 10 is a schematic diagram showing an example of the screen 40 displayed on the display device 4. The screen 40 includes, for example, an IVUS image display section 41, an OCT image display section 42, and an input button 43 for switching display/non-display of the composite IVUS image. The IVUS image display section 41 displays the IVUS image acquired via the intravascular examination apparatus 101. The OCT image display section 42 displays the OCT image acquired via the intravascular examination apparatus 101. The input button 43 is displayed, for example, below the IVUS image display section 41. The display position of the input button 43 is not limited, but it is preferably displayed near the IVUS image display section 41 or superimposed on it.
When the control unit 31 of the image processing device 3 receives a tap operation on the input button 43 while the screen 40 shown in the upper part of FIG. 10 is displayed, it displays, via the display device 4, the screen 40 including the composite IVUS image shown in the lower part of FIG. 10. The screen 40 including the composite IVUS image includes a composite IVUS image display section 44. The composite IVUS image display section 44 is displayed at the same position on the screen 40 as the IVUS image display section 41, for example in place of the IVUS image display section 41.
The composite IVUS image display section 44 displays the composite IVUS image obtained by complementing the blurred region of the IVUS image of the IVUS image display section 41 using the complementary information. The composite IVUS image display section 44 displays the blurred region, i.e., the region of the original IVUS image that was complemented, in an identifiable manner. For example, the control unit 31 of the image processing device 3 adds a frame line to the edge of the partial image generated from the OCT image based on the complementary information, generates a composite IVUS image in which the partial image including the frame line is combined with the IVUS image, and displays it in the composite IVUS image display section 44. A user such as a doctor can thus easily distinguish the complemented portion from the uncomplemented portion of the actual IVUS image itself. The method of marking the complemented portion is not limited and may be, for example, coloring or shading. When a tap operation on the input button 43 is received while the screen 40 shown in the lower part of FIG. 10 is displayed, the screen 40 including the IVUS image shown in the upper part of FIG. 10 is displayed again.
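One way to render such a frame line is to draw the contour of the complemented region's mask onto the composite image. A sketch using OpenCV follows; the color, line thickness, and the assumption that the composite is a grayscale image in [0, 1] are illustrative.

```python
import cv2
import numpy as np

def outline_complemented_region(composite: np.ndarray,
                                blur_mask: np.ndarray) -> np.ndarray:
    """Draw a frame line around the complemented (blurred) region so the
    user can tell it apart from the unmodified IVUS pixels."""
    img = cv2.cvtColor((composite * 255).astype(np.uint8), cv2.COLOR_GRAY2BGR)
    contours, _ = cv2.findContours(blur_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(img, contours, -1, (0, 255, 255), thickness=2)  # yellow outline
    return img
```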
The control unit 31 displays either the IVUS image or the composite IVUS image according to the switching operation of the input button 43. Since the IVUS image and the composite IVUS image are displayed at the same position on the screen, the user can check both without moving their line of sight. The control unit 31 may display the input button 43 in association with an IVUS image only when a composite IVUS image corresponding to that IVUS image exists; the display of the input button 43 then lets the user recognize the presence of complementary information.
The screen 40 is an example and is not limiting. For example, the screen 40 may include the IVUS image display section 41, the OCT image display section 42, and the composite IVUS image display section 44, and display the IVUS image, the OCT image, and the composite IVUS image all at once. The screen 40 may also include an angiographic image. The screen 40 may further include a three-dimensional image of the calcified plaque generated by stacking a plurality of composite IVUS images (slice data) consecutive in the time series. The three-dimensional image can be generated, for example, by the voxel method.
In this embodiment, the output destination of the composite IVUS image is described as being the display device 4, but the composite IVUS image may of course be output to a device other than the display device 4 (for example, a personal computer).
According to this embodiment, the image processing device 3 identifies the blurred region of the IVUS image according to the states of the IVUS image and the OCT image, and displays a composite IVUS image in which the blurred region of the IVUS image is complemented using the OCT image, which is clearer than the IVUS image. The image processing device 3 suitably performs this complementing for blurred regions formed at various positions in IVUS images. From the composite IVUS image, i.e., an IVUS image containing part of an OCT image, the user can efficiently recognize the information of both the IVUS image and the OCT image without comparative interpretation of the two. This makes the images obtained with the diagnostic imaging catheter 1 easier to interpret and enables efficient diagnosis. This embodiment is particularly effective in examination of the lower-limb blood vessels, where calcified plaque is likely to occur.
(Second embodiment)
The second embodiment differs from the first embodiment in the method of identifying the blurred region. The following description focuses mainly on the differences from the first embodiment; configurations common to the first embodiment are given the same reference signs and their detailed description is omitted.
The image processing device 3 of the second embodiment stores, in the auxiliary storage unit 34, a learning model 3M that includes a first learning model 31M and a second learning model 32M. FIG. 11 is an explanatory diagram outlining the learning model 3M in the second embodiment.
The first learning model 31M is a machine learning model that takes an IVUS image as input and outputs the blurred region in that IVUS image. The second learning model 32M is a machine learning model that takes an OCT image as input and outputs the calcified plaque region in that OCT image. Since the first learning model 31M and the second learning model 32M have the same configuration, the configuration of the first learning model 31M is described below.
The first learning model 31M is, for example, a CNN. Using image recognition based on semantic segmentation, the first learning model 31M recognizes, pixel by pixel, whether each pixel in the input image belongs to an object region. The first learning model 31M has an input layer that receives an IVUS image, an intermediate layer that extracts and restores the features of the image, and an output layer that outputs information indicating the position and range of the object contained in the IVUS image. The first learning model 31M is, for example, U-Net.
The input layer of the first learning model 31M has a plurality of nodes that receive the pixel values of the pixels contained in the IVUS image and passes the input pixel values to the intermediate layer. The intermediate layer has convolution layers (CONV layers) and deconvolution layers (DECONV layers). A convolution layer dimensionally compresses the image data; this compression extracts the features of the object. A deconvolution layer performs deconvolution to restore the original dimensions. The restoration in the deconvolution layers produces a binarized label image indicating, for each pixel of the IVUS image, whether that pixel belongs to the object. The output layer has one or more nodes that output the label image. The label image is, for example, an image in which pixels corresponding to the blurred region belong to class "1" and pixels corresponding to everything else belong to class "0".
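The following toy encoder-decoder, written in PyTorch purely for illustration, mirrors the CONV/DECONV structure described above; it is a minimal stand-in under stated assumptions, not the U-Net actually used:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy semantic-segmentation net: CONV layers compress the input,
    DECONV layers restore the original dimensions, and a 1x1 head emits
    per-pixel class logits (argmax gives the binarized label image)."""
    def __init__(self, classes: int = 2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, classes, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(x))  # logits; .argmax(dim=1) -> label image
```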
The first learning model 31M can be generated by preparing training data in which IVUS images containing the object (the blurred region) are associated with labels indicating the position of each object, and machine-learning an untrained neural network with that training data.
The second learning model 32M has the same configuration as the first learning model 31M; it recognizes the calcified plaque region contained in the image pixel by pixel and outputs the generated label image. The label image is, for example, an image in which pixels corresponding to the calcified plaque region belong to class "1" and pixels corresponding to everything else belong to class "0".
The second learning model 32M can be generated by preparing training data in which OCT images containing the object (the calcified plaque region) are associated with labels indicating the position of each object, and machine-learning an untrained neural network with that training data.
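Training both models then follows the standard supervised-segmentation recipe; a schematic loop, assuming a loader of (image, per-pixel label) pairs and the toy network sketched above:

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 10) -> None:
    """Fit a segmentation model on (image, label-mask) pairs with
    pixel-wise cross-entropy, as in the training described above."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for image, mask in loader:        # image: (N,1,H,W), mask: (N,H,W)
            logits = model(image)
            loss = loss_fn(logits, mask.long())
            opt.zero_grad()
            loss.backward()
            opt.step()
```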
With the first learning model 31M configured as described above, inputting an IVUS image containing a blurred region to the first learning model 31M yields, as shown in FIG. 11, a label image indicating the blurred region pixel by pixel. Similarly, with the second learning model 32M, inputting an OCT image containing a calcified plaque region yields, as shown in FIG. 11, a label image indicating the calcified plaque region pixel by pixel.
FIG. 12 is a flowchart showing an example of the detailed procedure of the process for identifying the blurred region in the second embodiment. The processing procedure shown in the flowchart of FIG. 12 corresponds to the details of step S12 in the flowchart of FIG. 8.
The control unit 31 of the image processing device 3 inputs the IVUS image acquired in step S11 of FIG. 8 to the first learning model 31M as input data (step S31). The control unit 31 acquires the blurred region of the IVUS image output from the first learning model 31M (step S32).
The control unit 31 inputs the OCT image acquired in step S11 of FIG. 8 to the second learning model 32M as input data (step S33). The control unit 31 acquires the calcified plaque region of the OCT image output from the second learning model 32M (step S34).
The control unit 31 compares the position of the blurred region in the IVUS image with that of the calcified plaque region in the OCT image, and determines whether the two positions match (step S35). In other words, the control unit 31 determines whether to complement the blurred region of the IVUS image using the information of the calcified plaque region of the OCT image.
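The patent does not fix a concrete matching criterion; one plausible reading, sketched below with an assumed threshold, tests the overlap (intersection-over-union) of the two binary label masks:

```python
import numpy as np

def regions_match(blur_mask: np.ndarray, plaque_mask: np.ndarray,
                  iou_threshold: float = 0.5) -> bool:
    """Judge whether the IVUS blurred region and the OCT calcified plaque
    region occupy matching positions (step S35), via IoU of the masks."""
    inter = np.logical_and(blur_mask > 0, plaque_mask > 0).sum()
    union = np.logical_or(blur_mask > 0, plaque_mask > 0).sum()
    return union > 0 and inter / union >= iou_threshold
```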
When the control unit 31 determines that the positions of the blurred region in the IVUS image and the calcified plaque region in the OCT image do not match (step S35: NO), it ends the process. When the first learning model 31M recognizes a blurred region in the IVUS image but the OCT image contains no calcified plaque region at the position corresponding to that blurred region, the recognized blurred region is presumed not to be caused by calcified plaque, so no complementation with the OCT image is performed. In this case, since no blurred region has been identified, the processing from step S13 onward in FIG. 8 need not be executed either; that is, the control unit 31 does not generate a composite IVUS image.
On the other hand, when the control unit 31 determines that the positions of the blurred region in the IVUS image and the calcified plaque region in the OCT image match (step S35: YES), it identifies the blurred region output from the first learning model 31M as the blurred region to be complemented (step S36). When the first learning model 31M recognizes a blurred region in the IVUS image and the OCT image contains a calcified plaque region at the position corresponding to that blurred region, the recognized blurred region is presumed to be caused by calcified plaque, so complementation with the OCT image is performed. The control unit 31 executes the processing from step S13 onward in FIG. 8 and generates a composite IVUS image in which the blurred region of the IVUS image is complemented based on the calcified plaque region of the OCT image.
According to the present embodiment, the image processing device 3 generates a composite IVUS image only when the blurred region in the IVUS image and the calcified plaque region in the OCT image are correlated, which prevents inappropriate complementation of the blurred region.
(Third embodiment)
In the third embodiment, a configuration is described in which a blurred region in an OCT image is complemented using an IVUS image. The following description focuses mainly on the differences from the first embodiment; configurations common to the first embodiment are given the same reference signs and their detailed description is omitted.
For example, when a blood vessel contains a substance that attenuates light (that is, a substance of low optical transmittance), the phenomenon opposite to that of the first embodiment, in which the blood vessel contains a highly elastic substance, appears in the OCT image and the IVUS image. In the following, lipid plaque is taken as an example of a substance that attenuates light.
When a blood vessel contains lipid plaque, in the OCT method the light transmitted radially outward of the probe 11 beyond the lipid plaque is greatly attenuated. The OCT image therefore contains a bright, high-luminance region corresponding to the surface portion of the lipid plaque and, outside that high-luminance region, a dark, indistinct region of reduced luminance (the blurred region). In the IVUS method, on the other hand, the attenuation of ultrasonic waves by lipid plaque is small, so the lipid plaque can be depicted well even in the deep portion radially outward. The IVUS image therefore contains a bright, high-luminance rising region formed around the lipid plaque and a lipid plaque region whose boundary is clearly delineated by that rising region. The control unit 31 of the image processing device 3 generates a composite OCT image in which the blurred region in the OCT image is complemented by the lipid plaque region of the IVUS image.
The learning model 3M according to the third embodiment is, for example, a model that takes an OCT image as input and outputs the high-luminance region of lipid plaque in that OCT image.
FIGS. 13 and 14 are flowcharts showing an example of the processing procedure executed by the image processing device 3 according to the third embodiment. The control unit 31 of the image processing device 3 according to the third embodiment executes processing similar to steps S11 to S15 and steps S21 to S23 of the first embodiment, but differs from the first embodiment in the following respects.
In step S42, the control unit 31 identifies the blurred region contained in the OCT image. Specifically, the control unit 31 inputs the OCT image to the learning model 3M as input data in step S51 and acquires the high-luminance region of the OCT image output from the learning model 3M in step S52, thereby identifying the blurred region in the OCT image.
In step S43, the control unit 31 acquires complementary information for complementing the blurred region of the OCT image. Specifically, based on the position and range of the blurred region in the OCT image, the control unit 31 identifies the position and range of the corresponding region in the IVUS image acquired at the same point in time as the OCT image, and acquires the pixel values of that corresponding region. In step S44, the control unit 31 generates a composite OCT image in which the pixel values of the blurred region in the OCT image are replaced with the acquired pixel values of the IVUS image. The composite OCT image is, for example, an image in which part of the lipid plaque region of the IVUS image is combined into the blurred region of the OCT image.
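A minimal sketch of steps S43 and S44, assuming the two frames share the catheter center and are co-registered up to scale (registration details are omitted; the helper names are illustrative):

```python
import cv2
import numpy as np

def complement_oct(oct_img: np.ndarray, ivus_img: np.ndarray,
                   blur_mask: np.ndarray) -> np.ndarray:
    """Composite OCT frame: pixels inside the blurred region take their
    values from the simultaneously acquired IVUS frame."""
    # Resample the IVUS frame onto the OCT pixel grid (scale only).
    ivus_on_oct = cv2.resize(ivus_img, oct_img.shape[::-1],
                             interpolation=cv2.INTER_LINEAR)
    composite = oct_img.copy()
    composite[blur_mask > 0] = ivus_on_oct[blur_mask > 0]
    return composite
```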
According to the present embodiment, a composite OCT image in which the blurred region in the OCT image is complemented by the IVUS image is presented, which supports efficient interpretation of the IVUS image and the OCT image.
(Fourth embodiment)
The fourth embodiment differs from the third embodiment in the method of identifying the blurred region in the OCT image. The following description focuses mainly on the differences from the first to third embodiments; configurations common to those embodiments are given the same reference signs and their detailed description is omitted.
As in the third embodiment, the image processing device 3 according to the fourth embodiment generates a composite OCT image in which a blurred region in an OCT image containing a substance of low optical transmittance, such as lipid plaque, is complemented with an IVUS image. The image processing device 3 stores, in the auxiliary storage unit 34, a learning model 3M that includes a first learning model 31M and a second learning model 32M.
The first learning model 31M according to the fourth embodiment is a model that takes an OCT image as input and outputs the blurred region in that OCT image. The second learning model 32M is a model that takes an IVUS image as input and outputs the lipid plaque region in that IVUS image.
FIG. 15 is a flowchart showing an example of the detailed procedure of the process for identifying the blurred region in the fourth embodiment. The control unit 31 of the image processing device 3 inputs the OCT image to the first learning model 31M as input data (step S61) and acquires the blurred region in the OCT image output from the first learning model 31M (step S62). The control unit 31 also inputs the IVUS image to the second learning model 32M as input data (step S63) and acquires the lipid plaque region in the IVUS image output from the second learning model 32M (step S64). Thereafter, the control unit 31 executes processing similar to steps S35 and S36 of the second embodiment to identify the blurred region in the OCT image.
According to the present embodiment, the image processing device 3 generates a composite OCT image only when the blurred region in the OCT image and the lipid plaque region in the IVUS image are correlated, which prevents inappropriate complementation of the blurred region.
(Fifth embodiment)
In the fifth embodiment, another method for identifying the blurred region is described. The image processing device 3 according to the fifth embodiment stores, in the auxiliary storage unit 34, a learning model 3M that includes the first learning model 31M and second learning model 32M described in the second embodiment as well as the first learning model 31M and second learning model 32M described in the fourth embodiment. That is, the image processing device 3 is provided with the following four types of learning model. The first learning model takes an IVUS image as input and outputs the blurred region in that IVUS image. The second learning model takes an OCT image as input and outputs the calcified plaque region in that OCT image. The third learning model takes an OCT image as input and outputs the blurred region in that OCT image. The fourth learning model takes an IVUS image as input and outputs the lipid plaque region in that IVUS image.
The control unit 31 of the image processing device 3 inputs the IVUS image and the OCT image acquired via the intravascular examination device 101 to each of the four types of learning model described above and acquires the output recognition results. When the IVUS image contains a blurred region and the OCT image contains a calcified plaque region at the position corresponding to that blurred region, the control unit 31 executes the complementation process described in the second embodiment: it generates a composite IVUS image in which the blurred region of the IVUS image is complemented with the OCT image.
Likewise, when the OCT image contains a blurred region and the IVUS image contains a lipid plaque region at the position corresponding to that blurred region, the control unit 31 executes the complementation process described in the fourth embodiment: it generates a composite OCT image in which the blurred region of the OCT image is complemented with the IVUS image.
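Putting the four recognizers and the two complementation paths together, the frame-level branching can be sketched as follows (the model names and the regions_match, composite_with_marker, and complement_oct helpers are the illustrative ones from the earlier sketches, not APIs defined by the patent):

```python
def complement_frame(ivus_img, oct_img, models):
    """Route one IVUS/OCT frame pair to the appropriate complementation:
    IVUS blur matching OCT calcified plaque -> composite IVUS image;
    OCT blur matching IVUS lipid plaque     -> composite OCT image."""
    ivus_blur  = models["ivus_blur"](ivus_img)    # first learning model
    oct_plaque = models["oct_plaque"](oct_img)    # second learning model
    oct_blur   = models["oct_blur"](oct_img)      # third learning model
    ivus_lipid = models["ivus_lipid"](ivus_img)   # fourth learning model

    if ivus_blur.any() and regions_match(ivus_blur, oct_plaque):
        return composite_with_marker(ivus_img, oct_img, ivus_blur)
    if oct_blur.any() and regions_match(oct_blur, ivus_lipid):
        return complement_oct(oct_img, ivus_img, oct_blur)
    return None  # no complementation applies
```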
According to the present embodiment, the image processing device 3 can present a suitably complemented composite image in accordance with the states of the IVUS image and the OCT image.
In each of the flowcharts described above, part or all of the processing executed by the image processing device 3 may be executed by an external server (not shown) communicably connected to the image processing device 3. In this case, the storage unit of the external server stores a program and a learning model similar to the program 3P and learning model 3M described above. The external server acquires the IVUS images and OCT images from the image processing device 3 via a network such as a LAN (Local Area Network) or the Internet, executes processing similar to that of the image processing device 3 of each embodiment based on the acquired images, and transmits the generated composite IVUS image or composite OCT image to the image processing device 3. The image processing device 3 acquires the composite IVUS image or composite OCT image transmitted from the external server and causes the display device 4 to display it.
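The offloading itself could be as simple as an HTTP round trip; the endpoint URL and JSON payload below are assumptions for illustration, not an interface defined by the patent:

```python
import numpy as np
import requests

def offload_complementation(ivus_img: np.ndarray, oct_img: np.ndarray,
                            url: str = "http://server.local/complement"):
    """Send one frame pair to the external server and return the composite
    image it generates (hypothetical endpoint and JSON schema)."""
    payload = {"ivus": ivus_img.tolist(), "oct": oct_img.tolist()}
    resp = requests.post(url, json=payload, timeout=30)
    resp.raise_for_status()
    return np.asarray(resp.json()["composite"])
```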
The examples shown in the above embodiments can be combined, in whole or in part, to realize other embodiments. The sequences shown in the above embodiments are not limiting; the processing steps may be executed in a different order, and a plurality of processes may be executed in parallel.
The embodiments disclosed herein should be considered illustrative in all respects and not restrictive. The technical features described in the embodiments can be combined with one another, and the scope of the present invention is intended to include all modifications within the scope of the claims and the scope of equivalents of the claims.
REFERENCE SIGNS LIST
100 diagnostic imaging apparatus
101 intravascular examination device
102 angiography device
1 diagnostic imaging catheter (catheter)
2 MDU
3 image processing device
31 control unit
32 main storage unit
33 input/output I/F
34 auxiliary storage unit
35 reading unit
3P program
3M learning model
31M first learning model
32M second learning model
30 recording medium
4 display device
5 input device

Claims (12)

1. A program for causing a computer to execute processing of:
acquiring an ultrasonic tomographic image and an optical coherence tomographic image generated based on signals detected by a catheter inserted into a hollow organ;
identifying a blurred region in one of the ultrasonic tomographic image and the optical coherence tomographic image that is less clear than the other of the ultrasonic tomographic image and the optical coherence tomographic image;
acquiring, based on information of a region corresponding to the blurred region in the other of the ultrasonic tomographic image and the optical coherence tomographic image, complementary information for complementing the blurred region; and
generating, based on the acquired complementary information, the one of the ultrasonic tomographic image and the optical coherence tomographic image in which the blurred region is complemented.

2. The program according to claim 1, wherein the complementary information includes pixel values of the region corresponding to the blurred region in the other of the ultrasonic tomographic image and the optical coherence tomographic image, and
the one of the ultrasonic tomographic image and the optical coherence tomographic image in which the blurred region is complemented is generated by combining, into the blurred region, the pixel values of the region corresponding to the blurred region in the other of the ultrasonic tomographic image and the optical coherence tomographic image.

3. The program according to claim 1 or 2, wherein the blurred region includes a region of lower luminance than the luminance due to a substance that attenuates ultrasonic waves or light in the one of the ultrasonic tomographic image and the optical coherence tomographic image, and
the blurred region is identified based on a change in luminance in the one of the ultrasonic tomographic image and the optical coherence tomographic image.

4. The program according to any one of claims 1 to 3, wherein the blurred region in the ultrasonic tomographic image is identified using a first learning model that, when an ultrasonic tomographic image is input, detects a substance that attenuates ultrasonic waves or a blurred region in the ultrasonic tomographic image, and
the complementary information for complementing the blurred region in the ultrasonic tomographic image is acquired based on information of the region corresponding to the blurred region in the optical coherence tomographic image.

5. The program according to claim 4, wherein a substance that attenuates ultrasonic waves in the optical coherence tomographic image is detected using a second learning model that, when an optical coherence tomographic image is input, detects a substance that attenuates ultrasonic waves in the optical coherence tomographic image, and
the blurred region in the ultrasonic tomographic image caused by the substance that attenuates ultrasonic waves is identified based on the blurred region in the ultrasonic tomographic image and the substance that attenuates ultrasonic waves in the optical coherence tomographic image.

6. The program according to claim 4 or 5, wherein the substance that attenuates ultrasonic waves is calcified plaque.

7. The program according to any one of claims 1 to 6, wherein the blurred region in the optical coherence tomographic image is identified using a third learning model that, when an optical coherence tomographic image is input, detects a substance that attenuates light or a blurred region in the optical coherence tomographic image, and
the complementary information for complementing the blurred region in the optical coherence tomographic image is acquired based on information of the region corresponding to the blurred region in the ultrasonic tomographic image.

8. The program according to claim 7, wherein a substance that attenuates light in the ultrasonic tomographic image is detected using a fourth learning model that, when an ultrasonic tomographic image is input, detects a substance that attenuates light in the ultrasonic tomographic image, and
the blurred region in the optical coherence tomographic image caused by the substance that attenuates light is identified based on the blurred region in the optical coherence tomographic image and the substance that attenuates light in the ultrasonic tomographic image.

9. The program according to claim 7 or 8, wherein the substance that attenuates light is lipid plaque.

10. The program according to any one of claims 1 to 9, wherein the one of the ultrasonic tomographic image and the optical coherence tomographic image is generated so as to display the complemented blurred region in an identifiable manner.

11. An information processing method comprising:
acquiring an ultrasonic tomographic image and an optical coherence tomographic image generated based on signals detected by a catheter inserted into a hollow organ;
identifying a blurred region in one of the ultrasonic tomographic image and the optical coherence tomographic image that is less clear than the other of the ultrasonic tomographic image and the optical coherence tomographic image;
acquiring, based on information of a region corresponding to the blurred region in the other of the ultrasonic tomographic image and the optical coherence tomographic image, complementary information for complementing the blurred region; and
generating, based on the acquired complementary information, the one of the ultrasonic tomographic image and the optical coherence tomographic image in which the blurred region is complemented.

12. An information processing device comprising:
a tomographic image acquisition unit that acquires an ultrasonic tomographic image and an optical coherence tomographic image generated based on signals detected by a catheter inserted into a hollow organ;
an identification unit that identifies a blurred region in one of the ultrasonic tomographic image and the optical coherence tomographic image that is less clear than the other of the ultrasonic tomographic image and the optical coherence tomographic image;
a complementary information acquisition unit that acquires, based on information of a region corresponding to the blurred region in the other of the ultrasonic tomographic image and the optical coherence tomographic image, complementary information for complementing the blurred region; and
a generation unit that generates, based on the acquired complementary information, the one of the ultrasonic tomographic image and the optical coherence tomographic image in which the blurred region is complemented.
PCT/JP2022/010252 2021-03-25 2022-03-09 Program, information processing method, and information processing device WO2022202320A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-052011 2021-03-25
JP2021052011 2021-03-25

Publications (1)

Publication Number Publication Date
WO2022202320A1 (en)

Family

ID=83394917

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/010252 WO2022202320A1 (en) 2021-03-25 2022-03-09 Program, information processing method, and information processing device

Country Status (1)

Country Link
WO (1) WO2022202320A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000229079A (en) * 1999-02-09 2000-08-22 Ge Yokogawa Medical Systems Ltd Ultrasonography and ultrasonograph
JP2005095624A (en) * 2003-09-22 2005-04-14 Siemens Ag Medical check and/or treatment system
JP2015515916A (en) * 2012-05-11 2015-06-04 ヴォルカノ コーポレイションVolcano Corporation Apparatus and system for measuring images and blood flow velocity
JP2019518581A (en) * 2016-06-08 2019-07-04 リサーチ ディヴェロプメント ファウンデーション System and method for automated feature analysis and risk assessment of coronary artery plaques using intravascular optical coherence tomography



Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 22775112; Country of ref document: EP; Kind code of ref document: A1
NENP: Non-entry into the national phase
    Ref country code: DE
122 EP: PCT application non-entry in European phase
    Ref document number: 22775112; Country of ref document: EP; Kind code of ref document: A1
NENP: Non-entry into the national phase
    Ref country code: JP