WO2022202320A1 - Program, information processing method, and information processing device - Google Patents

Program, information processing method, and information processing device

Info

Publication number
WO2022202320A1
WO2022202320A1 PCT/JP2022/010252 JP2022010252W WO2022202320A1 WO 2022202320 A1 WO2022202320 A1 WO 2022202320A1 JP 2022010252 W JP2022010252 W JP 2022010252W WO 2022202320 A1 WO2022202320 A1 WO 2022202320A1
Authority
WO
WIPO (PCT)
Prior art keywords
tomographic image
image
optical coherence
region
ultrasonic
Prior art date
Application number
PCT/JP2022/010252
Other languages
English (en)
Japanese (ja)
Inventor
亮 上原
Original Assignee
テルモ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by テルモ株式会社 filed Critical テルモ株式会社
Publication of WO2022202320A1 publication Critical patent/WO2022202320A1/fr

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor, combined with photographic or television appliances
    • A61B1/045: Control thereof
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters

Definitions

  • the present invention relates to a program, an information processing method, and an information processing apparatus.
  • IVUS: intravascular ultrasound
  • OCT: optical coherence tomography
  • Patent Document 1 discloses a medical examination system capable of achieving optimal diagnostic image quality by generating an image in which an ultrasonic tomographic image and an optical coherence tomographic image are combined.
  • However, Patent Document 1 simply generates an image in which a central cutout of the inner side of the optical coherence tomographic image is combined with the outer side of the ultrasonic tomographic image, and there is room for improvement from the viewpoint of supporting efficient interpretation.
  • the purpose of the present disclosure is to provide a program or the like that can support efficient interpretation.
  • a program according to one aspect causes a computer to execute a process of: acquiring an ultrasonic tomographic image and an optical coherence tomographic image generated based on a signal detected by a catheter inserted into a luminal organ; identifying a blurred region that is less distinct in one of the ultrasonic tomographic image and the optical coherence tomographic image than in the other; acquiring, based on information on a region corresponding to the blurred region in the other image, complementary information for complementing the blurred region; and generating, based on the acquired complementary information, the one of the ultrasonic tomographic image and the optical coherence tomographic image with the blurred region complemented.
  • FIG. 1 is an explanatory diagram showing a configuration example of an image diagnostic apparatus.
  • FIG. 2 is an explanatory diagram explaining an outline of a diagnostic imaging catheter.
  • FIG. 3 is an explanatory view showing a cross section of a blood vessel through which a sensor section is passed.
  • FIG. 4 is an explanatory diagram explaining tomographic images.
  • FIG. 5 is a block diagram showing a configuration example of an image processing apparatus.
  • FIG. 6 is an explanatory diagram showing an outline of a learning model.
  • FIG. 11 is an explanatory diagram showing an outline of a learning model in the second embodiment.
  • FIG. 12 is a flowchart showing an example of detailed procedures for specifying a blurred area in the second embodiment.
  • FIGS. 13 and 14 are flowcharts showing an example of a processing procedure executed by an image processing apparatus according to the third embodiment.
  • FIG. 15 is a flowchart showing an example of detailed procedures for specifying a blurred area in the fourth embodiment.
  • In the following, an intravascular examination of a subject using a catheter will be described as an example; however, the luminal organ targeted for catheter examination is not limited to blood vessels and may be another luminal organ such as a bile duct, pancreatic duct, bronchus, or intestine.
  • FIG. 1 is an explanatory diagram showing a configuration example of an image diagnostic apparatus 100.
  • an image diagnostic apparatus using a dual-type catheter having both intravascular ultrasound (IVUS) and optical coherence tomography (OCT) functions will be described.
  • The dual-type catheter provides a mode for acquiring ultrasound tomographic images by IVUS only, a mode for acquiring optical coherence tomographic images by OCT only, and a mode for acquiring both tomographic images by IVUS and OCT, and these modes can be switched.
  • an ultrasound tomographic image and an optical coherence tomographic image will be referred to as an IVUS image and an OCT image, respectively.
  • IVUS images and OCT images are collectively referred to as tomographic images.
  • the diagnostic imaging apparatus 100 of this embodiment includes an intravascular examination apparatus 101 , an angiography apparatus 102 , an image processing apparatus 3 , a display apparatus 4 and an input apparatus 5 .
  • An intravascular examination apparatus 101 includes an imaging diagnostic catheter (catheter) 1 and an MDU (Motor Drive Unit) 2 .
  • the diagnostic imaging catheter 1 is connected to the image processing device 3 via the MDU 2 .
  • a display device 4 and an input device 5 are connected to the image processing device 3 .
  • the display device 4 is, for example, a liquid crystal display or an organic EL (Electro Luminescence) display, etc.
  • the input device 5 is, for example, a keyboard, mouse, trackball, microphone, or the like.
  • the display device 4 and the input device 5 may be laminated integrally to form a touch panel.
  • the input device 5 and the image processing device 3 may be configured integrally.
  • the input device 5 may be a sensor that accepts gesture input, line-of-sight input, or the like.
  • the angiography device 102 is connected to the image processing device 3.
  • the angiography apparatus 102 is an angiography apparatus for capturing an image of a blood vessel using X-rays from outside the patient's body while injecting a contrast agent into the patient's blood vessel to obtain an angiography image, which is a fluoroscopic image of the blood vessel.
  • the angiography apparatus 102 includes an X-ray source and an X-ray sensor, and the X-ray sensor receives X-rays emitted from the X-ray source to image a patient's X-ray fluoroscopic image.
  • the diagnostic imaging catheter 1 is provided with a marker that does not transmit X-rays, and the position of the diagnostic imaging catheter 1 (marker) is visualized in the angiographic image.
  • the angiography device 102 outputs the angiographic image obtained by imaging to the image processing device 3, and the angiographic image is displayed on the display device 4 via the image processing device 3.
  • the display device 4 displays an angiographic image and a tomographic image captured using the diagnostic imaging catheter 1 . Note that the angiography apparatus 102 is not essential in this embodiment.
  • FIG. 2 is an explanatory diagram for explaining the outline of the diagnostic imaging catheter 1.
  • The upper one-dot chain line area in FIG. 2 is an enlarged view of the lower one-dot chain line area.
  • the diagnostic imaging catheter 1 has a probe 11 and a connector portion 15 arranged at the end of the probe 11 .
  • the probe 11 is connected to the MDU 2 via the connector section 15 .
  • the side far from the connector portion 15 of the diagnostic imaging catheter 1 is referred to as the distal end side, and the connector portion 15 side is referred to as the proximal end side.
  • the probe 11 has a catheter sheath 11a, and a guide wire insertion portion 14 through which a guide wire can be inserted is provided at the distal end thereof.
  • the guidewire insertion part 14 constitutes a guidewire lumen, receives a guidewire previously inserted into the blood vessel, and is used to guide the probe 11 to the affected part by the guidewire.
  • the catheter sheath 11 a forms a continuous tube portion from the connection portion with the guide wire insertion portion 14 to the connection portion with the connector portion 15 .
  • a shaft 13 is inserted through the catheter sheath 11 a , and a sensor section 12 is connected to the distal end of the shaft 13 .
  • the sensor section 12 has a housing 12d, and the distal end side of the housing 12d is formed in a hemispherical shape to suppress friction and catching with the inner surface of the catheter sheath 11a.
  • In the housing 12d, an ultrasonic transmission/reception unit 12a (hereinafter referred to as the IVUS sensor 12a) for transmitting ultrasonic waves into the blood vessel and receiving reflected waves from the blood vessel, and an optical transmitter/receiver 12b (hereinafter referred to as the OCT sensor 12b) for emitting near-infrared light into the blood vessel and receiving reflected light from inside the blood vessel, are arranged.
  • an IVUS sensor 12a is provided on the distal end side of the probe 11
  • an OCT sensor 12b is provided on the proximal end side.
  • the IVUS sensor 12a and the OCT sensor 12b are attached so that the transmitting/receiving direction of the ultrasonic waves or near-infrared light is approximately 90 degrees to the axial direction of the shaft 13 (the radial direction of the shaft 13). The IVUS sensor 12a and the OCT sensor 12b are desirably installed with a slight offset from the radial direction so as not to receive reflected waves or reflected light from the inner surface of the catheter sheath 11a. In the present embodiment, for example, as indicated by the arrows in FIG. 2, the IVUS sensor 12a is attached so that it emits ultrasonic waves in a direction inclined toward the proximal side with respect to the radial direction, and the OCT sensor 12b is attached so that it emits near-infrared light in a direction inclined toward the distal side.
  • the optical transmitter/receiver 12b may be a sensor for OFDI (Optical Frequency Domain Imaging).
  • An electric signal cable (not shown) connected to the IVUS sensor 12a and an optical fiber cable (not shown) connected to the OCT sensor 12b are inserted into the shaft 13.
  • the probe 11 is inserted into the blood vessel from the tip side.
  • the sensor unit 12 and the shaft 13 can move forward and backward inside the catheter sheath 11a, and can rotate in the circumferential direction.
  • the sensor unit 12 and the shaft 13 rotate around the central axis of the shaft 13 as a rotation axis.
  • the diagnostic imaging catheter 1 is provided with markers that do not transmit X-rays so that the positional relationship between the IVUS image obtained by the IVUS sensor 12a or the OCT image obtained by the OCT sensor 12b and the angiographic image obtained by the angiography device 102 can be confirmed. In the example shown in FIG. 2, a marker 14a is provided at the distal end portion of the catheter sheath 11a, for example at the guide wire insertion portion 14, and a marker 12c is provided on the shaft 13 side of the sensor portion 12.
  • When the diagnostic imaging catheter 1 configured in this manner is imaged with X-rays, an angiographic image in which the markers 14a and 12c are visualized is obtained.
  • The positions at which the markers 14a and 12c are provided are examples; the marker 12c may be provided on the shaft 13 instead of the sensor section 12, and the marker 14a may be provided at a location other than the distal end of the catheter sheath 11a.
  • the MDU 2 is a driving device to which the probe 11 (catheter 1 for diagnostic imaging) is detachably attached via the connector portion 15 .
  • the MDU 2 controls the operation of the diagnostic imaging catheter 1 inserted into the blood vessel by driving a built-in motor according to the operation of the medical staff. For example, the MDU 2 performs a pullback operation in which the sensor unit 12 and the shaft 13 inserted into the probe 11 are pulled toward the MDU 2 side at a constant speed and rotated in the circumferential direction.
  • the sensor unit 12 continuously scans the inside of the blood vessel at predetermined time intervals while rotating and moving from the distal end side to the proximal end side by the pullback operation, whereby a plurality of transverse tomographic images substantially perpendicular to the probe 11 are captured continuously at predetermined intervals.
  • the MDU 2 outputs to the image processing device 3 the reflected ultrasonic wave signal received by the IVUS sensor 12a and the reflected light received by the OCT sensor 12b.
  • the image processing device 3 acquires the reflected ultrasonic wave signal received by the IVUS sensor 12a via the MDU 2 and the reflected light received by the OCT sensor 12b.
  • the image processing device 3 generates ultrasonic line data from the signals of the reflected ultrasonic waves, and builds an ultrasonic tomographic image (IVUS image) in which a transverse layer of the blood vessel is imaged based on the generated ultrasonic line data.
  • the image processing device 3 generates optical line data based on interference light obtained by causing the reflected light to interfere with reference light obtained by, for example, splitting the light from the light source in the image processing device 3, and constructs an optical tomographic image (OCT image) in which a transverse layer of the blood vessel is imaged based on the generated optical line data.
  • the image processing device 3 may be configured to acquire an IVUS image and an OCT image respectively from the diagnostic imaging catheter 1 having the IVUS sensor 12a and the diagnostic imaging catheter 1 having the OCT sensor 12b.
  • FIG. 3 is an explanatory view showing a cross section of a blood vessel through which the sensor section 12 is passed
  • FIG. 4 is an explanatory view explaining a tomographic image.
  • The operation of the IVUS sensor 12a and the OCT sensor 12b in the blood vessel, and the ultrasound line data and optical line data acquired by the IVUS sensor 12a and the OCT sensor 12b, will be described with reference to FIGS. 3 and 4.
  • the imaging core rotates about the central axis of the shaft 13 in the direction indicated by the arrow.
  • the IVUS sensor 12a transmits and receives ultrasonic waves at each rotation angle.
  • Lines 1, 2, . . . 512 indicate the transmission and reception directions of ultrasonic waves at each rotation angle.
  • the IVUS sensor 12a intermittently transmits and receives ultrasonic waves 512 times while rotating 360 degrees (one rotation) in the blood vessel.
  • Since the IVUS sensor 12a obtains data of one line in the transmission/reception direction for each transmission/reception of ultrasonic waves, 512 ultrasonic line data extending radially from the center of rotation can be obtained during one rotation.
  • The 512 ultrasonic line data are dense near the center of rotation but become sparse with distance from the center of rotation. Therefore, the image processing device 3 can generate a two-dimensional ultrasonic tomographic image (IVUS image) as shown on the left side of FIG. 4A by interpolating the pixels between the lines.
  • Similarly, the OCT sensor 12b transmits and receives measurement light at each rotation angle. Since the OCT sensor 12b also transmits and receives measurement light 512 times while rotating 360 degrees inside the blood vessel, 512 optical line data extending radially from the center of rotation can be obtained during one rotation.
  • As with the IVUS image, the image processing device 3 can generate a two-dimensional optical coherence tomographic image (OCT image) as shown on the right side of FIG. 4A by interpolating the pixels between the optical line data.
  • A two-dimensional tomographic image generated from 512 line data in this way is called one frame of an IVUS image or OCT image. Since the sensor unit 12 scans while moving inside the blood vessel, one frame of the IVUS image or OCT image is acquired at each position after one rotation within the movement range (one pullback range). That is, since one frame of the IVUS image or OCT image is acquired at each position from the distal side to the proximal side of the probe 11 within the movement range, multiple frames of IVUS images or OCT images are acquired, as shown in FIG. 4B.
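  • As an illustrative aside (not part of the original disclosure), the scan conversion described above, which turns 512 radial line data into one two-dimensional frame, can be sketched in Python roughly as follows; the array sizes, the nearest-line lookup, and the clamping of pixels outside the scanned radius are simplifying assumptions rather than the apparatus's actual interpolation method.

      import numpy as np

      def lines_to_frame(line_data, out_size=512):
          """line_data: (512, depth) array, one row of luminance values per rotation angle."""
          n_lines, depth = line_data.shape
          ys, xs = np.mgrid[0:out_size, 0:out_size]
          cx = cy = (out_size - 1) / 2.0
          dx, dy = xs - cx, ys - cy
          # Map each output pixel to a depth index (radius) and a line index (angle).
          r = np.sqrt(dx ** 2 + dy ** 2) * (depth - 1) / (out_size / 2.0)
          theta = (np.arctan2(dy, dx) % (2 * np.pi)) / (2 * np.pi)
          line_idx = np.clip((theta * n_lines).astype(int), 0, n_lines - 1)
          r_idx = np.clip(r.astype(int), 0, depth - 1)   # pixels outside the scan radius are clamped
          # Outer pixels, where the lines are sparse, simply reuse the nearest line's sample.
          return line_data[line_idx, r_idx]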
  • When the blood vessel contains a calcified plaque, the ultrasound line data include a high-brightness region corresponding to the surface portion of the calcified plaque and a region (unclear region) in which the brightness value is greatly reduced beyond the high-brightness region. Accordingly, the IVUS image generated from such ultrasound line data includes, as shown on the left side of the figure, a high-brightness region indicating the surface of the calcified plaque and a dark, unclear region (blurred region) with reduced luminance values outside it.
  • The optical line data include two bright rising regions corresponding to the fibrous tissue surrounding the calcified plaque, and a calcified plaque region between the two rising regions that is less bright than each rising region.
  • Accordingly, the OCT image generated from such optical line data includes a bright rising region with a high luminance value formed around the calcified plaque and a calcified plaque region whose boundary is clearly delineated by the rising region.
  • FIG. 5 is a block diagram showing a configuration example of the image processing device 3.
  • the image processing apparatus 3 is a computer and includes a control section 31 , a main storage section 32 , an input/output I/F 33 , an auxiliary storage section 34 and a reading section 35 .
  • the control unit 31 is configured using one or more arithmetic processing units such as a CPU (Central Processing Unit), MPU (Micro-Processing Unit), GPU (Graphics Processing Unit), GPGPU (General-Purpose computing on Graphics Processing Units), or TPU (Tensor Processing Unit).
  • the control unit 31 is connected to each hardware unit constituting the image processing apparatus 3 via a bus.
  • the main storage unit 32 is a temporary storage area such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), flash memory, etc., and temporarily stores data necessary for the control unit 31 to perform arithmetic processing.
  • the input/output I/F 33 is an interface to which the intravascular examination device 101, the angiography device 102, the display device 4 and the input device 5 are connected.
  • the control unit 31 acquires, from the intravascular examination apparatus 101 via the input/output I/F 33 , the signal of the reflected ultrasonic wave for the IVUS image and the reflected light for the OCT image.
  • the control unit 31 outputs various image signals such as an IVUS image, an OCT image, and a composite image to the display device 4 via the input/output I/F 33, thereby displaying various images on the display device 4.
  • the control unit 31 receives information input to the input device 5 via the input/output I/F 33 .
  • the auxiliary storage unit 34 is a storage device such as a hard disk, EEPROM (Electrically Erasable Programmable ROM), flash memory, or the like.
  • the auxiliary storage unit 34 stores the program 3P executed by the control unit 31 and various data necessary for the processing of the control unit 31 .
  • the auxiliary storage unit 34 may be an external storage device connected to the image processing device 3 .
  • the program 3P may be written in the auxiliary storage unit 34 at the manufacturing stage of the image processing device 3, or may be distributed by a remote server device, acquired by the image processing device 3 through communication, and stored in the auxiliary storage unit 34. The program 3P may also be recorded in a readable manner on a recording medium 30 such as a magnetic disk, an optical disk, or a semiconductor memory, read from the recording medium 30 by the reading unit 35, and stored in the auxiliary storage unit 34.
  • the auxiliary storage unit 34 also stores the learning model 3M.
  • the learning model 3M is a machine learning model that has learned training data.
  • the learning model 3M is assumed to be used as a program module that constitutes artificial intelligence software.
  • the image processing device 3 may be a multicomputer including a plurality of computers. Further, the image processing device 3 may be a server client system, a cloud server, or a virtual machine virtually constructed by software. In the following description, it is assumed that the image processing apparatus 3 is one computer.
  • the control unit 31 of the image processing device 3 uses the learning model 3M to identify the blurred region of the IVUS image, and complements the identified blurred region in the IVUS image with the OCT image to generate a synthetic IVUS image (composite image).
  • FIG. 6 is an explanatory diagram showing an outline of the learning model 3M
  • FIG. 7 is an explanatory diagram explaining a method of generating a synthetic IVUS image. A method of generating a composite IVUS image according to this embodiment will be specifically described with reference to FIGS. 6 and 7.
  • the learning model 3M is a machine learning model that receives an IVUS image and outputs information indicating the high-brightness region (surface portion) of the calcified plaque in the IVUS image. Specifically, the learning model 3M receives, as input, a plurality of frames of IVUS images that are continuous along the longitudinal direction of the blood vessel according to scanning by the diagnostic imaging catheter 1 . The learning model 3M identifies high intensity regions of calcified plaque in each successive frame of IVUS images along the time axis t.
  • the learning model 3M is, for example, a CNN (Convolutional Neural Network).
  • the learning model 3M recognizes, in units of pixels, whether or not each pixel in the input image corresponds to the object (high-brightness region of calcified plaque), using image recognition technology based on semantic segmentation.
  • the learning model 3M has an input layer to which an IVUS image is input, an intermediate layer that extracts and restores image feature values, and an output layer that outputs information indicating the position and range of an object included in the IVUS image.
  • the learning model 3M is U-Net, for example.
  • the input layer of the learning model 3M has a plurality of nodes that receive input of pixel values of pixels included in the IVUS image, and passes the input pixel values to the intermediate layer.
  • the intermediate layer has a convolution layer (CONV layer) and a deconvolution layer (DECONV layer).
  • CONV layer convolution layer
  • DECONV layer deconvolution layer
  • a convolutional layer is a layer that dimensionally compresses image data. Dimensional compression extracts the features of the object.
  • the deconvolution layer performs the deconvolution process to restore the original dimensions.
  • the restoration process in the deconvolution layer produces a binarized label image that indicates whether each pixel of the IVUS image is an object or not.
  • the output layer has multiple nodes that output label images.
  • the label image is, for example, an image in which pixels corresponding to high intensity regions of calcified plaque are class "1" and pixels corresponding to other images are class "0".
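  • As a reference only (this is not the configuration disclosed for the learning model 3M), a semantic-segmentation network of the kind described above can be sketched in Python/PyTorch as follows; the channel counts, network depth, and two-class output are illustrative assumptions.

      import torch
      import torch.nn as nn

      def conv_block(c_in, c_out):
          return nn.Sequential(
              nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
              nn.Conv2d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
          )

      class TinyUNet(nn.Module):
          """Minimal U-Net-style encoder-decoder: IVUS frame in, per-pixel class scores out."""
          def __init__(self, n_classes=2):
              super().__init__()
              self.enc1 = conv_block(1, 16)        # convolution (CONV) layers extract features
              self.enc2 = conv_block(16, 32)
              self.pool = nn.MaxPool2d(2)          # dimensional compression
              self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)  # DECONV layer restores dimensions
              self.dec1 = conv_block(32, 16)
              self.head = nn.Conv2d(16, n_classes, kernel_size=1)

          def forward(self, x):                    # x: (B, 1, H, W) IVUS frame, H and W even
              e1 = self.enc1(x)
              e2 = self.enc2(self.pool(e1))
              d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # skip connection, as in U-Net
              return self.head(d1)                 # (B, n_classes, H, W); argmax gives the label image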
  • The learning model 3M can be generated by preparing training data in which IVUS images containing the object (high-brightness regions of calcified plaque) are associated with labels indicating the position of each object, and by machine-learning an untrained neural network using the training data.
  • The control unit 31 inputs a plurality of IVUS images included in the training data to the input layer of the neural network model before learning, performs arithmetic processing in the intermediate layer, and acquires the image output from the output layer.
  • the control unit 31 compares the image output from the output layer with the label image included in the training data, and performs arithmetic processing in the intermediate layer so that the image output from the output layer approaches the label image. Optimize the parameters used.
  • the parameters are, for example, weights (coupling coefficients) between neurons.
  • The parameter optimization method is not particularly limited; for example, the control unit 31 optimizes the various parameters using the error backpropagation method. For the position of the object, for example, a judgment made by a doctor having specialized knowledge may be used as the correct label.
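  • The training described above can be illustrated, under the assumptions of a pixel-wise cross-entropy objective and the TinyUNet sketch from earlier (neither is stated in the original text), by a minimal training step:

      import torch
      import torch.nn as nn

      model = TinyUNet(n_classes=2)
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
      criterion = nn.CrossEntropyLoss()            # compares the output with the label image

      def train_step(ivus_batch, label_batch):
          """ivus_batch: (B, 1, H, W) float tensor; label_batch: (B, H, W) long tensor of {0, 1}."""
          optimizer.zero_grad()
          logits = model(ivus_batch)               # forward pass through the CONV/DECONV layers
          loss = criterion(logits, label_batch)    # deviation from the expert-annotated labels
          loss.backward()                          # error backpropagation
          optimizer.step()                         # update the weights (coupling coefficients)
          return loss.item()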
  • The control unit 31 may input each frame image to the learning model 3M one by one for processing, or may input a plurality of continuous frame images at the same time so that high-brightness regions of calcified plaque are detected simultaneously from the plurality of frame images.
  • In the latter case, the control unit 31 configures the learning model 3M as a 3D-CNN (e.g., 3D U-Net) that handles three-dimensional input data.
  • The control unit 31 treats the frame images as three-dimensional data, with the coordinates of the two-dimensional frame images as two axes and the time (generation time point) t at which each frame image was acquired as the third axis.
  • The control unit 31 inputs a set of multiple frame images (for example, 16 frames) for a predetermined unit time to the learning model 3M, and obtains, at the same time, label images in which the high-brightness regions of the calcified plaque are labeled for each of the multiple frame images. As a result, the high-brightness region of the calcified plaque can be detected in consideration of the preceding and following frame images that are consecutive in time series, and the detection accuracy can be improved.
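  • For orientation only, grouping consecutive frames into a single input volume for such a 3D-CNN could look like the following sketch; the frame count of 16 follows the example above, while the tensor layout and the dummy data are assumptions.

      import torch

      def make_3d_input(frames):
          """frames: time-ordered list of (H, W) tensors for consecutive IVUS frames."""
          volume = torch.stack(frames, dim=0)      # (T, H, W): time axis t as the third dimension
          return volume.unsqueeze(0).unsqueeze(0)  # (1, 1, T, H, W), as expected by nn.Conv3d-based models

      frames = [torch.rand(512, 512) for _ in range(16)]   # 16 consecutive frames (dummy data)
      volume = make_3d_input(frames)               # fed to a 3D segmentation model such as 3D U-Net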
  • the configuration of the learning model 3M is not limited as long as it can identify high-intensity regions of calcified plaque contained in medical images.
  • the control unit 31 specifies the position and range of the blurred area outside the high-brightness area based on the high-brightness area of the calcified plaque detected by the learning model 3M. Specifically, the control unit 31 identifies a plurality of ultrasound line data corresponding to the high-brightness region based on the position of the high-brightness region in the circumferential direction of the IVUS image. The control unit 31 identifies a plurality of optical line data at the same angle (same line number) as each of the identified plurality of ultrasound line data.
  • The control unit 31 identifies the position (coordinates) of the calcified plaque region formed between the rising regions based on the change in the luminance value in the depth direction of each optical line data. Then, with respect to the ultrasonic line data and the optical line data at the same angle, the control unit 31 identifies, based on the position of the high-brightness region in the ultrasonic line data and the position of the calcified plaque region in the optical line data, the position of the region obtained by excluding the high-brightness region from the calcified plaque region, that is, the blurred region.
  • The control unit 31 performs the above-described processing on all of the plurality of ultrasound line data corresponding to the high-brightness region in the IVUS image, and interpolates between the lines by a known interpolation process to identify the position and range of the blurred region outside the high-brightness region of the calcified plaque. Note that the control unit 31 may generate a plurality of arbitrary lines in the radial direction from the rotation center of the probe 11 on the IVUS image and the OCT image, and perform the above-described processing for specifying the position of the blurred region using the change in the luminance value along the generated lines.
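  • A hedged sketch of the per-line reasoning above (the data layout, the normalization of luminance to [0, 1], and the simple threshold used to find the rising regions are assumptions, not the disclosed algorithm): the calcified plaque extent is taken from the optical line, the high-brightness surface from the ultrasound line, and the blurred segment is what remains of the plaque extent beyond that surface.

      import numpy as np

      def blurred_segment_on_line(high_end, oct_line, oct_threshold=0.5):
          """Return (start, end) depth indices of the blurred segment along one angle, or None.

          high_end: deepest index of the high-brightness region found in the ultrasound line.
          oct_line: 1-D luminance profile of the optical line at the same angle (index = depth).
          """
          bright = oct_line > oct_threshold              # the two bright rising regions
          idx = np.flatnonzero(bright)
          if idx.size < 2:
              return None
          plaque_start, plaque_end = idx[0], idx[-1]     # plaque extent between the rising regions
          start = max(high_end, plaque_start)            # exclude the high-brightness surface
          return (start, plaque_end) if plaque_end > start else None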
  • the method of specifying the blurred area is not limited to the above example.
  • For example, the control unit 31 may directly detect the blurred region caused by the calcified plaque from the IVUS image by using a learning model 3M that receives an IVUS image and outputs the blurred region caused by the calcified plaque in the IVUS image.
  • the control unit 31 may detect a plaque region from an IVUS image using a learning model 3M that receives an IVUS image as input and outputs a plaque region including a high-brightness region and a blurred region in the IVUS image.
  • In this case, the control unit 31 may use the detection result of the plaque region and the change in the brightness value of the ultrasound line data to identify the region with high brightness values (high-brightness region) and the region outside it with low brightness values (blurred region).
  • the control unit 31 generates complementary information for complementing the blurred region in the identified IVUS image. Specifically, based on the position and range of the blurred region in the IVUS image, the control unit 31 identifies the position and range of the region corresponding to the blurred region in the OCT image acquired at the same time as the IVUS image. In the example of FIG. 7, the portion surrounded by the dashed line in the OCT image shown on the upper right side is the area corresponding to the blurred area. The control unit 31 acquires the pixel values of the area corresponding to the identified blurred area.
  • the complementary information is an image containing pixel values of a region corresponding to the blurred region of the OCT image with respect to the blurred region of the IVUS image, and is a partial image obtained by cutting out the region corresponding to the blurred region of the OCT image.
  • Based on the complementary information, the control unit 31 generates a composite IVUS image by replacing the pixel values of the blurred region in the IVUS image with the pixel values of the acquired OCT image, as shown in the lower part of FIG. 7.
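  • Assuming co-registered IVUS and OCT frames of the same size represented as NumPy arrays (an assumption made purely for illustration), the replacement step itself reduces to a masked copy:

      import numpy as np

      def composite_ivus(ivus_img, oct_img, blurred_mask):
          """ivus_img, oct_img: (H, W) arrays on the same grid; blurred_mask: (H, W) bool array."""
          out = ivus_img.copy()
          out[blurred_mask] = oct_img[blurred_mask]   # complementary information taken from the OCT image
          return out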
  • The control unit 31 may correct the pixel values of the OCT image in the complementary information according to the pixel values of the high-brightness region in the IVUS image, and use the corrected pixel values to complement the blurred region in the IVUS image.
  • The control unit 31 may also adjust the area of the OCT image used for the complementary information in consideration of the deviation of the imaging area between the IVUS image and the OCT image acquired at the same time. For example, the control unit 31 may set, in the OCT image, an area shifted by a predetermined angle in the circumferential direction from the area corresponding to the blurred area of the IVUS image as the target of the complementary information.
  • The control unit 31 may also use, as the target of the complementary information, the area corresponding to the blurred area in an OCT image shifted by a predetermined number of frames, such as the frame preceding or following the OCT image acquired at the same time as the IVUS image.
  • The control unit 31 may perform preprocessing for synchronizing the depths of the IVUS image and the OCT image, and then specify the position and range (size) of the region corresponding to the blurred region in the OCT image.
  • FIG. 8 is a flowchart showing an example of a processing procedure executed by the image processing device 3.
  • When the ultrasound reflected wave signal and the reflected light are output from the intravascular examination apparatus 101, the control unit 31 of the image processing apparatus 3 executes the following processes according to the program 3P.
  • the control unit 31 of the image processing device 3 acquires a plurality of IVUS images and OCT images in chronological order via the intravascular examination device 101 (step S11). More specifically, the control unit 31 acquires an IVUS image and an OCT image generated based on reflected ultrasound wave signals and reflected light acquired via the intravascular examination apparatus 101 .
  • the IVUS image contains high intensity areas indicating the surface of the calcified plaque, and dark, indistinct blurred areas of reduced intensity values beyond the surface of the calcified plaque.
  • OCT images contain plaque regions that show calcified plaque at depth.
  • the control unit 31 identifies the blurred area in the acquired IVUS image (step S12).
  • a blurred region is a region formed radially outside the calcified plaque due to the calcified plaque and having a lower brightness than the calcified plaque.
  • FIG. 9 is a flowchart showing an example of a detailed procedure for specifying a blurred area.
  • the processing procedure shown in the flowchart of FIG. 9 corresponds to the details of step S12 in the flowchart of FIG.
  • the control unit 31 inputs the obtained IVUS image to the learning model 3M as input data (step S21).
  • the control unit 31 acquires the high-brightness region of the IVUS image output from the learning model 3M (step S22).
  • the control unit 31 identifies the blurred area formed radially outside the high-brightness area of the IVUS image based on the change in the brightness value of the IVUS image and the brightness value of the OCT image (step S23).
  • the control unit 31 returns the process to step S13 in the flowchart of FIG.
  • The control unit 31 acquires complementary information for complementing the blurred region of the IVUS image according to the identified blurred region (step S13). Specifically, based on the position and range of the blurred region in the IVUS image, the control unit 31 identifies the position and range of the region corresponding to the blurred region in the OCT image acquired at the same time as the IVUS image, and acquires the pixel values of the identified region.
  • Based on the complementary information, the control unit 31 generates a composite IVUS image by replacing the pixel values of the blurred region in the IVUS image with the pixel values of the acquired OCT image (step S14). The control unit 31 displays a screen including the generated composite IVUS image on the display device 4 (step S15), and ends the series of processes.
  • FIG. 10 is a schematic diagram showing an example of a screen 40 displayed on the display device 4.
  • the screen 40 includes, for example, an IVUS image display section 41, an OCT image display section 42, and an input button 43 for inputting display/non-display of the composite IVUS image.
  • the IVUS image display unit 41 displays IVUS images acquired via the intravascular examination apparatus 101 .
  • the OCT image display unit 42 displays OCT images acquired via the intravascular examination apparatus 101 .
  • the input button 43 is displayed below the IVUS image display section 41, for example. Although the display position of the input button 43 is not limited, it is preferably displayed near the IVUS image display section 41 or superimposed on the IVUS image display section 41 .
  • a screen 40 is displayed via the display device 4 .
  • a screen 40 containing a composite IVUS image includes a composite IVUS image display portion 44 .
  • the composite IVUS image display section 44 is displayed at the same position as the IVUS image display section 41 on the screen 40 instead of the IVUS image display section 41, for example.
  • the synthetic IVUS image display unit 44 displays a synthetic IVUS image obtained by complementing the unclear region of the IVUS image of the IVUS image display unit 41 using complementary information.
  • the composite IVUS image display unit 44 identifiably displays the blurred area, that is, the area obtained by complementing the original IVUS image.
  • Specifically, the control unit 31 of the image processing device 3 adds a frame line to the edge of the partial image generated from the OCT image based on the complementary information, combines the partial image including the frame line with the IVUS image to generate the composite image, and displays it on the composite IVUS image display unit 44.
  • A user such as a physician can easily distinguish between the complemented portion and the non-complemented portion of the actual IVUS image.
  • the method of marking the complementary portion is not limited, and may be, for example, coloring, shading, or the like.
  • the control unit 31 displays either the IVUS image or the composite IVUS image according to the switching operation of the input button 43 . Since the IVUS image and the synthesized IVUS image are displayed at the same position on the screen, the user can confirm the IVUS image and the synthesized IVUS image without moving the line of sight. Note that the control unit 31 may display the input button 43 in association with the IVUS image only when there is a composite IVUS image corresponding to the IVUS image. The user can recognize the presence of complementary information by displaying the input button 43 .
  • the screen 40 is an example and is not limited.
  • the screen 40 may include an IVUS image display section 41, an OCT image display section 42, and a composite IVUS image display section 44, and display all of the IVUS image, the OCT image, and the composite IVUS image.
  • the screen 40 may also include an angio image.
  • the screen 40 may include a three-dimensional image of calcified plaque generated by stacking a plurality of synthetic IVUS images (slice data) that are continuous in time series. A three-dimensional image can be generated, for example, by the voxel method.
  • the display device 4 is the output destination of the composite IVUS image, but it is of course possible to output the composite IVUS image to a device other than the display device 4 (for example, a personal computer).
  • As described above, the image processing apparatus 3 identifies the blurred region in the IVUS image according to the state of the IVUS image and the OCT image, and displays a composite IVUS image in which the blurred region of the IVUS image is complemented using the OCT image, which is clearer than the IVUS image in that region.
  • The image processing device 3 can thus suitably complement blurred regions formed at various positions in the IVUS image.
  • A composite IVUS image, i.e., an IVUS image that includes a portion of an OCT image, allows the user to efficiently recognize information in both the IVUS image and the OCT image without comparative interpretation of the two images. Therefore, the image obtained by the diagnostic imaging catheter 1 can be easily interpreted, and an efficient diagnosis can be made.
  • This embodiment is particularly effective in examination of lower extremity blood vessels where calcified plaque is likely to occur.
  • the second embodiment differs from the first embodiment in the method of identifying the blurred area.
  • the differences from the first embodiment will be mainly described, and the same reference numerals will be given to the configurations common to the first embodiment, and detailed description thereof will be omitted.
  • the image processing apparatus 3 of the second embodiment stores learning models 3M including a first learning model 31M and a second learning model 32M in the auxiliary storage unit 34.
  • FIG. 11 is an explanatory diagram showing an overview of the learning model 3M in the second embodiment.
  • the first learning model 31M is a machine learning model that takes an IVUS image as input and outputs the blurred area in the IVUS image.
  • the second learning model 32M is a machine learning model that receives an OCT image as input and outputs a calcified plaque region in the OCT image. Since the first learning model 31M and the second learning model 32M have the same configuration, the configuration of the first learning model 31M will be described below.
  • the first learning model 31M is, for example, CNN.
  • the first learning model 31M recognizes on a pixel-by-pixel basis whether or not each pixel in the input image corresponds to an object region by image recognition technology using semantic segmentation.
  • the first learning model 31M has an input layer to which an IVUS image is input, an intermediate layer that extracts and restores the feature amount of the image, and an output layer that outputs information indicating the position and range of the object included in the IVUS image. have.
  • the first learning model 31M is U-Net, for example.
  • the input layer of the first learning model 31M has a plurality of nodes that receive input of pixel values of pixels included in the IVUS image, and passes the input pixel values to the intermediate layer.
  • the intermediate layer has a convolution layer (CONV layer) and a deconvolution layer (DECONV layer).
  • CONV layer convolution layer
  • DECONV layer deconvolution layer
  • a convolutional layer is a layer that dimensionally compresses image data. Dimensional compression extracts the features of the object.
  • the deconvolution layer performs the deconvolution process to restore the original dimensions.
  • the restoration process in the deconvolution layer produces a binarized label image that indicates whether each pixel of the IVUS image is an object or not.
  • the output layer has one or more nodes that output label images.
  • the label image is, for example, an image in which pixels corresponding to blurred areas are class "1" and pixels corresponding to other images are class "0".
  • The first learning model 31M can be generated by preparing training data in which IVUS images containing the object (blurred region) are associated with labels indicating the position of each object, and by machine-learning an untrained neural network using the training data.
  • The second learning model 32M has the same configuration as the first learning model 31M, recognizes the calcified plaque region included in the input OCT image pixel by pixel, and outputs the generated label image.
  • the label image is, for example, an image in which pixels corresponding to calcified plaque regions are class "1" and pixels corresponding to other images are class "0".
  • The second learning model 32M can be generated by preparing training data in which OCT images containing the object (calcified plaque region) are associated with labels indicating the position of each object, and by machine-learning an untrained neural network using the training data.
  • As shown in FIG. 11, by inputting an IVUS image including a blurred region to the first learning model 31M configured as described above, a label image indicating the blurred region in units of pixels is obtained.
  • Similarly, by inputting an OCT image containing a calcified plaque region into the second learning model 32M, a label image indicating the calcified plaque region in units of pixels is obtained.
  • FIG. 12 is a flowchart showing an example of a detailed procedure for specifying a blurred area in the second embodiment.
  • the processing procedure shown in the flowchart of FIG. 12 corresponds to the details of step S12 in the flowchart of FIG.
  • the control unit 31 of the image processing device 3 inputs the IVUS image acquired in step S11 of FIG. 8 to the first learning model 31M as input data (step S31).
  • the control unit 31 acquires the blurred region of the IVUS image output from the first learning model 31M (step S32).
  • the control unit 31 inputs the OCT image acquired in step S11 of FIG. 8 to the second learning model 32M as input data (step S33).
  • the control unit 31 acquires the calcified plaque region of the OCT image output from the second learning model 32M (step S34).
  • The control unit 31 compares the position of the blurred region of the IVUS image with the position of the calcified plaque region of the OCT image, and determines whether the two positions match (step S35). In other words, the control unit 31 determines whether or not to complement the blurred region of the IVUS image using the information of the calcified plaque region of the OCT image.
  • When it is determined that the positions do not match (step S35: NO), the control unit 31 ends the process.
  • When the blurred region in the IVUS image is recognized by the first learning model 31M but no calcified plaque region is included at the corresponding position in the OCT image, the recognized blurred region is presumed not to be caused by calcified plaque, so no complementation with the OCT image is performed. In this case, since no blurred region is specified, the processing from step S13 onward in FIG. 8 may be skipped; that is, the control unit 31 does not generate a composite IVUS image.
  • When it is determined that the positions of the blurred region of the IVUS image and the calcified plaque region of the OCT image match (step S35: YES), the control unit 31 identifies the blurred region output from the first learning model 31M as the region to be complemented (step S36).
  • the control unit 31 executes the processes from step S13 onward in FIG. 8 to generate a synthetic IVUS image that complements the blurred area of the IVUS image based on the calcified plaque area of the OCT image.
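  • The consistency check of steps S35 and S36 can be pictured with the following sketch; the overlap criterion (an IoU threshold on co-registered masks) is an illustrative assumption, since the text only states that the positions are compared.

      import numpy as np

      def regions_match(ivus_blurred_mask, oct_plaque_mask, iou_threshold=0.3):
          """Both arguments: (H, W) bool masks on co-registered IVUS/OCT frames."""
          intersection = np.logical_and(ivus_blurred_mask, oct_plaque_mask).sum()
          union = np.logical_or(ivus_blurred_mask, oct_plaque_mask).sum()
          if union == 0:
              return False                         # neither region present: nothing to complement
          return intersection / union >= iou_threshold   # positions considered to match (step S35: YES)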
  • In this manner, the image processing apparatus 3 generates a composite IVUS image only when the blurred region in the IVUS image and the calcified plaque region in the OCT image are correlated, so that inappropriate complementation of the blurred region can be prevented.
  • In the third embodiment, lipid plaque is used as an example of a substance that attenuates light.
  • When the blood vessel contains a lipid plaque, the light transmitted radially outward of the probe 11 is attenuated beyond the lipid plaque. Therefore, the OCT image includes a high-brightness region with a high brightness value corresponding to the surface portion of the lipid plaque and a dark, unclear region (blurred region) with low brightness values outside the high-brightness region.
  • the attenuation of ultrasonic waves by a lipid plaque is small, so that the lipid plaque can be well delineated over the deep portion radially outward.
  • an IVUS image contains a bright raised area with a high luminance value formed around the lipid plaque and a lipid plaque area clearly demarcated by the raised area.
  • the control unit 31 of the image processing device 3 generates a synthetic OCT image in which the blurred area in the OCT image is complemented by the lipid plaque area in the IVUS image.
  • the learning model 3M is a model that receives an OCT image, for example, and outputs a high-intensity region of lipid plaque in the OCT image.
  • FIGS. 13 and 14 are flowcharts showing an example of the processing procedure executed by the image processing device 3 according to the third embodiment.
  • The control unit 31 of the image processing apparatus 3 according to the third embodiment executes the same processes as steps S11 to S15 and steps S21 to S23 of the first embodiment, but differs from the first embodiment in the following points.
  • In the third embodiment, the control unit 31 identifies the blurred region included in the OCT image in step S42. Specifically, the control unit 31 inputs the OCT image as input data to the learning model 3M in step S51, acquires the high-brightness region of the OCT image output from the learning model 3M in step S52, and identifies the blurred region outside the high-brightness region.
  • The control unit 31 then acquires complementary information for complementing the blurred region of the OCT image in step S43. Specifically, based on the position and range of the blurred region in the OCT image, the control unit 31 identifies the position and range of the region corresponding to the blurred region in the IVUS image acquired at the same time as the OCT image, and acquires the pixel values of the identified region. In step S44, the control unit 31 generates a composite OCT image by replacing the pixel values of the blurred region in the OCT image with the pixel values of the acquired IVUS image.
  • a synthetic OCT image is an image in which a blurred region in the OCT image is combined with a portion of the lipid plaque region in the IVUS image.
  • a synthesized OCT image in which an unclear region in the OCT image is complemented by the IVUS image is presented, so efficient interpretation of the IVUS image and the OCT image can be supported.
  • the fourth embodiment differs from the third embodiment in the method of identifying the blurred region in the OCT image.
  • In the following, the differences from the first to third embodiments will be mainly described, and the configurations common to the first to third embodiments are given the same reference numerals, with detailed description thereof omitted.
  • In the fourth embodiment, the image processing apparatus 3 generates a synthetic OCT image in which the blurred region in an OCT image containing a substance with low light transmittance, such as lipid plaque, is complemented with the IVUS image.
  • the image processing device 3 stores learning models 3M including a first learning model 31M and a second learning model 32M in the auxiliary storage unit 34 .
  • the first learning model 31M is a model that receives an OCT image as an input and outputs a blurred area in the OCT image.
  • the second learning model 32M is a model that receives an IVUS image as an input and outputs a lipid plaque region in the IVUS image.
  • FIG. 15 is a flowchart showing an example of a detailed procedure for specifying a blurred area in the fourth embodiment.
  • the control unit 31 of the image processing device 3 inputs the OCT image as input data to the first learning model 31M (step S61), and acquires the blurred region in the OCT image output from the first learning model 31M (step S62). .
  • the control unit 31 also inputs the IVUS image as input data to the second learning model 32M (step S63), and acquires the lipid plaque region in the IVUS image output from the second learning model 32M (step S64). After that, the control unit 31 executes the same processing as in steps S35 and S36 of the second embodiment to identify the blurred region in the OCT image.
  • the image processing apparatus 3 generates a synthetic OCT image only when the blurred region in the OCT image and the lipid plaque region in the IVUS image are correlated, so that inappropriate interpolation of the blurred region can be prevented.
  • The image processing apparatus 3 may include both the first learning model 31M and the second learning model 32M described in the second embodiment and the first learning model 31M and the second learning model 32M described in the fourth embodiment, that is, the following four learning models.
  • a first learning model receives an IVUS image as an input and outputs a blurred region in the IVUS image.
  • the second learning model takes an OCT image as an input and outputs a calcified plaque region in the OCT image.
  • a third learning model receives an OCT image as an input and outputs a blurred region in the OCT image.
  • a fourth learning model takes an IVUS image as an input and outputs a lipid plaque region in the IVUS image.
  • the control unit 31 of the image processing device 3 inputs the IVUS image and the OCT image acquired via the intravascular examination device 101 to each of the four types of learning models described above, and acquires the output recognition result.
  • When the IVUS image includes a blurred region and a calcified plaque region is included at the corresponding position in the OCT image, the control unit 31 executes the complementing process described in the second embodiment.
  • the control unit 31 generates a composite IVUS image by interpolating the blurred region of the IVUS image with the OCT image.
  • When the OCT image includes a blurred region and a lipid plaque region is included at the corresponding position in the IVUS image, the control unit 31 performs the complementing process described in the fourth embodiment.
  • the control unit 31 generates a synthetic OCT image in which the blurred region of the OCT image is interpolated with the IVUS image.
  • the image processing apparatus 3 can present a composite image that has been appropriately complemented according to the state of the IVUS image and the OCT image.
  • part or all of the processing executed by the image processing device 3 may be executed by an external server (not shown) communicably connected to the image processing device 3 .
  • the storage unit of the external server stores programs and learning models similar to the programs 3P and learning models 3M described above.
  • the external server acquires IVUS images and OCT images from the image processing device 3 via a network such as a LAN (Local Area Network) or the Internet.
  • the external server executes the same processing as the image processing apparatus 3 of each embodiment based on the acquired IVUS image and OCT image, and transmits the generated synthetic IVUS image or synthetic OCT image to the image processing apparatus 3 .
  • the image processing device 3 acquires the composite IVUS image or the composite OCT image transmitted from the external server, and causes the display device 4 to display it.
  • REFERENCE SIGNS LIST: 100 image diagnostic apparatus; 101 intravascular examination apparatus; 102 angiography apparatus; 1 diagnostic imaging catheter (catheter); 2 MDU; 3 image processing device; 31 control unit; 32 main storage unit; 33 input/output I/F; 34 auxiliary storage unit; 35 reading unit; 3P program; 3M learning model; 31M first learning model; 32M second learning model; 30 recording medium; 4 display device; 5 input device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Optics & Photonics (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

A program and the like capable of supporting efficient image interpretation are provided. The program according to the present invention causes a computer to execute a process of: acquiring an ultrasonic tomographic image and an optical coherence tomographic image generated based on a signal detected by a catheter inserted into a luminal organ; identifying a blurred region that is less distinct in one of the ultrasonic tomographic image and the optical coherence tomographic image than in the other; acquiring, based on information on a region corresponding to the blurred region in the other of the two images, complementary information for complementing the blurred region; and generating, based on the acquired complementary information, the ultrasonic tomographic image or the optical coherence tomographic image in which the blurred region is complemented.
PCT/JP2022/010252 2021-03-25 2022-03-09 Program, information processing method, and information processing device WO2022202320A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-052011 2021-03-25
JP2021052011 2021-03-25

Publications (1)

Publication Number Publication Date
WO2022202320A1 true WO2022202320A1 (fr) 2022-09-29

Family

ID=83394917

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/010252 WO2022202320A1 (fr) Program, information processing method, and information processing device 2021-03-25 2022-03-09

Country Status (1)

Country Link
WO (1) WO2022202320A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000229079A (ja) * 1999-02-09 2000-08-22 Ge Yokogawa Medical Systems Ltd Ultrasonic imaging method and apparatus
JP2005095624A (ja) * 2003-09-22 2005-04-14 Siemens Ag Medical examination and/or treatment system
JP2015515916A (ja) * 2012-05-11 2015-06-04 Volcano Corporation Device and system for imaging and blood flow velocity measurement
JP2019518581A (ja) * 2016-06-08 2019-07-04 Research Development Foundation Systems and methods for automated characterization and risk assessment of coronary artery plaque using intravascular optical coherence tomography

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22775112

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22775112

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP