WO2023100838A1 - Computer program, information processing device, information processing method, and learning model generation method - Google Patents

Computer program, information processing device, information processing method, and learning model generation method

Info

Publication number
WO2023100838A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
cross-sectional
contrast
side branch
Prior art date
Application number
PCT/JP2022/043872
Other languages
English (en)
Japanese (ja)
Inventor
耕太郎 楠
陽 井口
耕一 井上
祐次郎 松下
Original Assignee
テルモ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by テルモ株式会社
Publication of WO2023100838A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters

Definitions

  • the present invention relates to a computer program, an information processing device, an information processing method, and a learning model generation method.
  • Patent Literature 1 discloses an X-ray diagnostic apparatus capable of ensuring the visibility of the entire X-ray image during contrast imaging.
  • Contrast-enhanced images are taken from various directions because the appearance of the lesion differs depending on the imaging direction (angle) even for the same lesion.
  • In addition to X-ray diagnostic equipment, doctors also use intravascular ultrasound (IVUS: Intra Vascular Ultra Sound), optical coherence tomography (OCT: Optical Coherence Tomography), and optical frequency domain imaging (OFDI: Optical Frequency Domain Imaging).
  • Because contrast-enhanced images are shadowgraphs, whereas IVUS and OFDI produce cross-sectional images (tomographic images) of hollow organs, identifying the corresponding position on the contrast-enhanced image for each cross-sectional image of the hollow organ requires experience and is highly difficult.
  • The present invention has been made in view of such circumstances, and an object thereof is to provide a computer program, an information processing device, an information processing method, and a learning model generation method capable of easily specifying the corresponding position on a contrast-enhanced image for each cross-sectional image of a hollow organ.
  • A computer program according to one aspect causes a computer to execute a process of acquiring a contrast-enhanced image of a hollow organ, acquiring a plurality of cross-sectional images along the axial direction of the hollow organ, detecting a side branch of the hollow organ based on the contrast-enhanced image, identifying the cross-sectional image in which the side branch is present based on the plurality of cross-sectional images, and associating the side branch on the contrast-enhanced image with the side branch on the cross-sectional image.
  • FIG. 1 is a diagram showing an example of the configuration of a diagnostic imaging system according to an embodiment.
  • FIG. 2 is a diagram showing a first example of the configuration of an information processing device.
  • FIG. 3 is a diagram showing an example of the configuration of a first learning model.
  • FIG. 4 is a diagram showing an example of the configuration of a second learning model.
  • FIG. 5 is a diagram showing a first example of the configuration of a third learning model.
  • FIG. 6 is a diagram showing a second example of the configuration of the third learning model.
  • FIG. 7 is a diagram showing the flow of processing by the diagnostic imaging system.
  • FIG. 8 is a diagram showing an example of a method of estimating a blood vessel path.
  • FIG. 9 is a diagram showing an example of a method of detecting side branches on a contrast-enhanced image.
  • FIG. 10 is a diagram showing an example of association between a side branch on a contrast-enhanced image and a side branch on a cross-sectional image.
  • FIG. 11 is a diagram showing an example of association between a position of a blood vessel other than a side branch on a contrast-enhanced image and a cross-sectional image.
  • FIG. 12 is a diagram showing a second example of the configuration of the information processing device.
  • FIG. 13 is a diagram showing an example of the configuration of a learning model.
  • FIG. 14 is a diagram showing an example of the correspondence information output by the learning model.
  • FIG. 15 is a diagram showing an example of a method of generating the learning model.
  • FIG. 16 is a diagram showing an example of a display screen of processing results.
  • FIG. 17 is a flowchart showing an example of a processing procedure by the information processing device of the first example.
  • FIG. 18 is a flowchart showing an example of a processing procedure by the information processing device of the second example.
  • FIG. 19 is a diagram showing an example of processing for generating the learning model.
  • FIG. 1 is a diagram showing an example of the configuration of a diagnostic imaging system 100 according to this embodiment.
  • The diagnostic imaging system 100 is an apparatus for performing intravascular imaging (diagnostic imaging) used in cardiac catheterization (PCI: percutaneous coronary intervention).
  • Cardiac catheterization is a method of treating a narrowed portion of a coronary artery by inserting a catheter from a blood vessel such as the groin, arm, or wrist.
  • Intravascular imaging includes intravascular ultrasound (IVUS: Intra Vascular Ultra Sound) and optical coherence tomography (OFDI: Optical Frequency Domain Imaging; OCT: Optical Coherence Tomography).
  • IVUS utilizes the reflection of ultrasound to visualize the inside of a blood vessel as a tomographic image.
  • a thin catheter equipped with an ultra-small sensor at the tip is inserted into the coronary artery and passed through the affected area.
  • OFDI uses near-infrared light to visualize the state of blood vessels in high-resolution images.
  • In OFDI, a catheter is inserted into a blood vessel, near-infrared light is emitted from the distal end, a cross section of the blood vessel is measured by interferometry, and a cross-sectional image is generated.
  • OCT is an intravascular imaging technique that likewise applies near-infrared light and optical fiber technology.
  • Cross-sectional images include those generated by IVUS, OFDI, or OCT; the case where the IVUS method is used is mainly described below.
  • Although a blood vessel is described below as an example of a hollow organ, hollow organs are not limited to blood vessels.
  • the diagnostic imaging system 100 includes a catheter 10 , an MDU (Motor Drive Unit) 20 , a display device 30 , an input device 40 , an information processing device 50 and an X-ray diagnostic device 80 .
  • a server 200 is connected to the information processing device 50 via the communication network 1 .
  • the catheter 10 is a diagnostic imaging catheter for obtaining ultrasonic tomographic images of blood vessels by the IVUS method.
  • the catheter 10 has an ultrasonic probe at its distal end for obtaining ultrasonic tomographic images of blood vessels.
  • the ultrasonic probe has an ultrasonic transducer that emits ultrasonic waves within a blood vessel, and an ultrasonic sensor that receives reflected waves (ultrasonic echoes) reflected by structures such as biological tissue of the blood vessel or medical equipment.
  • the ultrasonic probe is configured to advance and retreat in the longitudinal direction of the blood vessel while rotating in the circumferential direction of the blood vessel.
  • the MDU 20 is a drive device to which the catheter 10 can be detachably attached, and controls the operation of the catheter 10 inserted into the blood vessel by driving the built-in motor according to the operation of the medical staff.
  • The MDU 20 can rotate the ultrasonic probe of the catheter 10 in the circumferential direction while moving it from the tip (distal) side to the base end (proximal) side (pullback operation).
  • the ultrasonic probe continuously scans the inside of the blood vessel at predetermined time intervals, and outputs reflected wave data of detected ultrasonic waves to the information processing device 50 .
  • the X-ray diagnostic apparatus 80 is also called an angio (ANGIO) apparatus or an angiography apparatus.
  • The X-ray diagnostic apparatus 80 has an X-ray generator on one end side and an X-ray detector on the other end side, and the patient on the table is positioned between the X-ray generator and the X-ray detector.
  • The X-ray diagnostic apparatus 80 inserts a guiding catheter into a blood vessel (hollow organ), injects a contrast agent through the guiding catheter, and images the condition of a target blood vessel (for example, the presence or absence of an aneurysm, vascular occlusion or stenosis, blood flow, tumor, etc.) to acquire (generate) a contrast-enhanced image.
  • The X-ray diagnostic apparatus 80 can also acquire (generate) a fluoroscopic image of the blood vessel with a device (for example, a catheter, stent, balloon, etc.) inserted into the blood vessel.
  • In a contrast-enhanced image, no device is inserted into the target blood vessel and a contrast agent has been injected.
  • the contrast image may be obtained by injecting the contrast medium while the device is inserted.
  • In a fluoroscopic image, a device is inserted into the target blood vessel and no contrast agent is injected. Note that the fluoroscopic image may also be captured with no device inserted. Angle information at the time of imaging is attached as an attribute to the contrast-enhanced image and the fluoroscopic image.
  • the X-ray diagnostic apparatus 80 outputs contrast images and fluoroscopic images to the information processing apparatus 50 .
  • The information processing device 50 generates (acquires) a plurality of time-sequential cross-sectional images of the blood vessel along its longitudinal direction based on the reflected wave data output from the ultrasonic probe of the catheter 10. Since the ultrasonic probe scans the inside of the blood vessel while moving from the tip (distal) side to the base end (proximal) side, the plurality of chronological cross-sectional images are tomographic images of the blood vessel observed at successive points from the distal side to the proximal side.
  • the information processing device 50 acquires contrast images and fluoroscopic images from the X-ray diagnostic device 80 .
  • the display device 30 includes a liquid crystal display panel, an organic EL display panel, or the like, and can display the processing results of the information processing device 50 .
  • the display device 30 can display cross-sectional images of blood vessels and contrast-enhanced images.
  • the input device 40 is an input interface such as a keyboard, a mouse, etc., for receiving input of various setting values, operation of the information processing device 50, and the like when conducting an examination.
  • the input device 40 may be a touch panel, soft keys, hard keys, or the like provided on the display device 30 .
  • the server 200 is, for example, a data server, and may include an image DB in which cross-sectional images of blood vessels, contrast images, and fluoroscopic images are accumulated.
  • FIG. 2 is a diagram showing a first example of the configuration of the information processing device 50.
  • The information processing device 50 can be configured by a computer, and includes a control unit 51 that controls the entire information processing device 50, a communication unit 52, an interface unit 53, a recording medium reading unit 54, a memory 55, a storage unit 56, a display control unit 57, and a processing unit 58.
  • The control unit 51 is configured to incorporate the required number of CPUs (Central Processing Unit), MPUs (Micro-Processing Unit), GPUs (Graphics Processing Unit), GPGPUs (General-purpose computing on graphics processing units), TPUs (Tensor Processing Unit), and the like. The control unit 51 may also be configured by combining DSPs (Digital Signal Processor), FPGAs (Field-Programmable Gate Array), quantum processors, and the like.
  • the control unit 51 has the functions of a first acquisition unit and a second acquisition unit, and also has functions realized by a computer program 561 described later. That is, the processing by the control unit 51 is also the processing by the computer program 561 .
  • the memory 55 can be composed of semiconductor memory such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), and flash memory.
  • the communication unit 52 includes, for example, a communication module and has a function of communicating with the server 200 via the communication network 1. Also, the communication unit 52 may have a communication function with an external device (not shown) connected to the communication network 1 .
  • the interface unit 53 provides an interface function between the catheter 10, the display device 30, the input device 40 and the X-ray diagnostic device 80.
  • the information processing device 50 (controller 51 ) can transmit and receive data and information to and from the catheter 10 , the display device 30 , the input device 40 and the X-ray diagnostic device 80 through the interface 53 .
  • the processing unit 58 can be configured by combining GPU, GPGPU, DSP, FPGA, or the like.
  • the processing unit 58 includes a route estimating unit 581 , a specifying unit 582 and an associating unit 583 .
  • the path estimation unit 581 can estimate the path of the blood vessel on the contrast image of the blood vessel.
  • the identifying unit 582 can identify a desired contrast-enhanced image from among a plurality of contrast-enhanced images of blood vessels.
  • the associating unit 583 can associate the side branch detected on the contrast image with the side branch on the cross-sectional image of the blood vessel. Details of the route estimation unit 581, the identification unit 582, and the association unit 583 will be described later.
  • the display control unit 57 displays the result of processing by the information processing device 50 on the display device 30 .
  • The recording medium reading unit 54 can be configured by, for example, an optical disk drive. A computer program 561 (program product) recorded on a recording medium 541 (for example, an optically readable disk storage medium such as a CD-ROM) can be read by the recording medium reading unit 54 and stored in the storage unit 56.
  • The computer program 561 is loaded into the memory 55 and executed by the control unit 51. Note that the computer program 561 may instead be downloaded from an external device via the communication unit 52 and stored in the storage unit 56.
  • the storage unit 56 can be configured by, for example, a hard disk or semiconductor memory, and can store required information.
  • the storage unit 56 can store a first learning model 562 , a second learning model 563 and a third learning model 564 in addition to the computer program 561 .
  • the first learning model 562, the second learning model 563, and the third learning model 564 include pre-learning models, in-learning models, or learned models.
  • the first learning model 562, the second learning model 563, and the third learning model 564 will be described below.
  • FIG. 3 is a diagram showing an example of the configuration of the first learning model 562.
  • the first learning model 562 outputs the position of the IVUS catheter 10 inserted into the blood vessel when the fluoroscopic image data (fluoroscopic image) of the blood vessel is input.
  • The position of the catheter 10 includes, for example, the position P of the sensor marker (that is, the IVUS path start position) and the existence position P' of the catheter 10 (where the catheter 10 is present).
  • the first learning model 562 is a model that recognizes a predetermined object (catheter 10, guide wire, etc.) included in the fluoroscopic image.
  • The first learning model 562 can, for example, use an image recognition technique based on semantic segmentation to classify objects on a pixel-by-pixel basis and recognize the catheter included in the fluoroscopic image.
  • the first learning model 562 includes an input layer 562a, an intermediate layer 562b, and an output layer 562c, and can be configured by U-Net, for example.
  • The intermediate layer 562b comprises multiple encoders and multiple decoders. Convolution processing is repeated by the encoders on the fluoroscopic image data input to the input layer 562a. The decoders repeat upsampling (deconvolution) processing on the image convolved by the encoders. When decoding the convolved image, the feature map generated by the corresponding encoder is added to the image being deconvolved. This makes it possible to retain the position information that would otherwise be lost in the convolution process and to output a more accurate segmentation (which pixel belongs to which class).
  • the first learning model 562 can output position data indicating the region of the catheter when fluoroscopic image data is input.
  • The position data is coordinate data of picture elements (pixels) indicating the area of the catheter. For example, a value of 1 (class 1) can be associated with pixels in the catheter region, and a value of 0 (class 0) with pixels outside the catheter region.
  • the first learning model 562 is not limited to U-Net, and may be, for example, GAN (Generative Adversarial Network), SegNet, or the like.
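  • As a concrete illustration, the following is a minimal sketch of a U-Net-style encoder-decoder with a skip connection, assuming PyTorch; the layer sizes, depth, and two-class output are illustrative assumptions and not taken from this publication.

```python
# Minimal U-Net-style sketch (illustrative; not the publication's actual model).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in a typical U-Net stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        # Upsampling (deconvolution) back to the input resolution.
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                # feature map kept for the skip connection
        e2 = self.enc2(self.pool(e1))
        d1 = self.up(e2)
        # Re-adding the encoder feature map preserves position information
        # that the convolution/pooling path loses.
        d1 = self.dec1(torch.cat([e1, d1], dim=1))
        return self.head(d1)             # per-pixel class logits

# Per-pixel classification: 1 = catheter region (class 1), 0 = other (class 0).
model = TinyUNet()
mask = model(torch.randn(1, 1, 256, 256)).argmax(dim=1)
```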
  • FIG. 4 is a diagram showing an example of the configuration of the second learning model 563.
  • the second learning model 563 outputs the position of the side branch of the blood vessel when the contrast-enhanced image data (contrast-enhanced image) of the blood vessel is input.
  • the position of the side branch can be, for example, the position of the area of the side branch connected to the trunk of the blood vessel.
  • the position of the area may be, for example, the center or the center of gravity of the area.
  • The second learning model 563 can, for example, use an image recognition technique based on semantic segmentation to classify objects on a pixel-by-pixel basis and detect the side branches of blood vessels included in the contrast-enhanced image.
  • The second learning model 563, like the first learning model 562, comprises an input layer 563a, an intermediate layer 563b, and an output layer 563c, and can be configured by U-Net, for example.
  • The intermediate layer 563b comprises multiple encoders and multiple decoders.
  • The second learning model 563 can output position data indicating the area of the side branch when contrast-enhanced image data is input.
  • The position data is pixel coordinate data indicating the area of the side branch.
  • For example, a value of 1 (class 1) can be associated with pixels in the side-branch region, and a value of 0 (class 0) with pixels outside the side-branch region.
  • the second learning model 563 is not limited to U-Net, and may be, for example, GAN (Generative Adversarial Network), SegNet, or the like.
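  • For illustration, here is a small sketch of how such a per-pixel mask could be turned into side-branch positions, using the centroid of each connected region; the use of scipy.ndimage and the centroid choice are assumptions, since the publication only states that the position may be, e.g., the center or center of gravity of the region.

```python
# Sketch: per-pixel side-branch mask (0/1) -> one position per region.
import numpy as np
from scipy import ndimage

def side_branch_positions(mask: np.ndarray):
    """mask: 2-D array of 0/1 values output by the segmentation model."""
    labels, n = ndimage.label(mask)   # split the mask into connected regions
    # One (row, col) centre of gravity per labelled region.
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

mask = np.zeros((8, 8), dtype=int)
mask[2:4, 2:4] = 1                    # toy side-branch region
print(side_branch_positions(mask))    # [(2.5, 2.5)]
```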
  • FIG. 5 is a diagram showing a first example of the configuration of the third learning model 564.
  • The third learning model 564, like the first learning model 562, comprises an input layer 564a, an intermediate layer 564b, and an output layer 564c, and can be configured by U-Net, for example.
  • The intermediate layer 564b comprises multiple encoders and multiple decoders.
  • the third learning model 564 can output segmentation data when blood vessel cross-sectional image data (cross-sectional image) is input.
  • the segmentation data is obtained by classifying each pixel of the cross-sectional image into classes.
  • the third learning model 564 can classify the pixels of each input cross-sectional image into three classes, for example, classes 1, 2, and 3. Class 1 indicates Background, which indicates the area outside the blood vessel.
  • Class 2 indicates (Plaque + Media) and indicates areas of blood vessels containing plaque.
  • Class 3 indicates Lumen and indicates the lumen of a blood vessel. Therefore, the boundary between the pixels classified into class 2 and the pixels classified into class 3 indicates the boundary of the lumen, and the boundary between the pixels classified into class 1 and the pixels classified into class 2 indicates the boundary of the blood vessel. That is, the third learning model 564 can output position data indicating the lumen region and the blood vessel region when cross-sectional image data of a blood vessel is input.
  • The example of FIG. 5 shows the segmentation result when no side branch of the blood vessel is present in the cross-sectional image.
  • the third learning model 564 is not limited to U-Net, and may be, for example, GAN (Generative Adversarial Network), SegNet, or the like.
  • FIG. 6 is a diagram showing a second example of the configuration of the third learning model 564.
  • The example of FIG. 6 shows the segmentation result when a side branch of the blood vessel is present in the cross-sectional image.
  • the third learning model 564 of the second example can output the position data of the lumen region and the blood vessel region of each of the main trunk of the blood vessel and the side branch connected to the main trunk.
  • That is, the third learning model 564 can output the area of the side branch, i.e., the location of the side branch.
  • The third learning model 564 of the second example can classify the pixels of each input cross-sectional image into five classes, i.e., classes 1, 2, 3, 4, and 5, for example. Class 1 indicates Background, which indicates the area outside the blood vessel.
  • Class 2 indicates trunk (Plaque + Media) and indicates areas of blood vessels containing plaque.
  • Class 3 indicates the Lumen of the trunk and indicates the lumen of the vessel.
  • Class 4 indicates side branch (Plaque + Media) and class 5 indicates side branch Lumen.
  • For the plurality of cross-sectional images of the blood vessel obtained by the pullback operation, the third learning model 564 can detect in which cross-sectional images a side branch exists and in which cross-sectional images no side branch exists.
  • detection of side branches is not limited to the configuration using the third learning model 564 of the second example.
  • For example, the eccentricity of the cross-sectional shape of the blood vessel may be calculated based on the position data of the lumen boundary and the blood vessel boundary output by the third learning model 564 of the first example, and the presence or absence of a side branch may be determined based on the calculated eccentricity.
  • If the eccentricity is greater than or equal to a predetermined threshold, it can be determined that a side branch is present; if the eccentricity is less than the predetermined threshold, it can be determined that no side branch is present.
  • The eccentricity may be calculated based on the blood vessel boundary instead of the lumen boundary. Further, instead of using the maximum diameter and the minimum diameter, the distance from the center (which may be the center of gravity) may be calculated for all pixels within the lumen boundary, and the presence or absence of a side branch may be determined based on the number of pixels whose distance is greater than or equal to a predetermined threshold.
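  • The following is a minimal sketch of this eccentricity heuristic, approximating the maximum and minimum "diameter" by the radial extent of the lumen around its center of gravity; the angular binning and threshold values are illustrative assumptions.

```python
# Sketch of the eccentricity heuristic (binning and threshold are assumptions).
import numpy as np

def lumen_eccentricity(lumen_mask: np.ndarray, n_bins: int = 36) -> float:
    """Ratio of max to min radial extent of the lumen around its centre of gravity."""
    ys, xs = np.nonzero(lumen_mask)
    cy, cx = ys.mean(), xs.mean()              # centre of gravity of the lumen
    dist = np.hypot(ys - cy, xs - cx)
    ang = np.arctan2(ys - cy, xs - cx)
    bins = np.digitize(ang, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1
    radii = np.array([dist[bins == b].max()
                      for b in range(n_bins) if np.any(bins == b)])
    return radii.max() / radii.min()

def has_side_branch(lumen_mask: np.ndarray, ecc_threshold: float = 1.5) -> bool:
    # A round lumen gives a ratio near 1; a side branch stretches one direction.
    return lumen_eccentricity(lumen_mask) >= ecc_threshold
```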
  • the third learning model 564 of the first example and the fourth learning model (not shown) that classifies the presence or absence of side branches may be used together.
  • FIG. 7 is a diagram showing the flow of processing by the diagnostic imaging system 100.
  • Processing by the diagnostic imaging system 100 is performed in the following order: engagement, imaging, device insertion, fluoroscopy, and pullback.
  • Engagement involves inserting a guiding catheter into a blood vessel (e.g., a coronary artery).
  • In imaging, a contrast medium is injected into the guiding catheter, and contrast-enhanced images are captured by the X-ray diagnostic device 80.
  • X-rays are irradiated toward the blood vessel from various directions (angles), and a plurality of contrast-enhanced images {A} can be obtained.
  • {A} represents a set of contrast-enhanced images.
  • no guide wire, IVUS catheter 10, or the like is inserted into the blood vessel to be treated.
  • a guiding catheter for injecting a contrast medium is inserted to the vicinity of a blood vessel of interest.
  • a device such as a guide wire or IVUS catheter 10 is inserted into the blood vessel to be treated while viewing the fluoroscopic image.
  • In fluoroscopy, multiple frames of fluoroscopic images {B} are captured with the guide wire or IVUS catheter 10 inserted. {B} represents a set of fluoroscopic images. In a fluoroscopic image, no contrast agent is present. The fluoroscopic images record the state immediately before the IVUS pullback.
  • In the pullback, the ultrasonic probe of the IVUS catheter 10 is rotated in the circumferential direction while being moved from the tip (distal) side to the base end (proximal) side, continuously scanning the inside of the blood vessel at predetermined time intervals.
  • A plurality of cross-sectional images {C} of the blood vessel are thereby captured.
  • {C} represents a set of cross-sectional images.
  • FIG. 7 shows n cross-sectional images C1 to Cn.
  • The control unit 51 selects a required image from the fluoroscopic images {B} and inputs the selected fluoroscopic image to the first learning model 562.
  • The required image may be selected as appropriate; for example, a clear image with little influence of noise may be selected.
  • the first learning model 562 can output the position of the IVUS catheter 10 inserted into the blood vessel.
  • The specifying unit 582 specifies, from among the contrast-enhanced images {A}, one contrast-enhanced image captured at the same angle (or from the same direction) as the selected fluoroscopic image.
  • the control unit 51 inputs the identified contrast-enhanced image to the second learning model 563 .
  • the second learning model 563 may output the location of the side branch of the vessel.
  • The specifying unit 582 can specify a desired contrast-enhanced image from among the plurality of contrast-enhanced images of the blood vessel based on the imaging angle of the fluoroscopic image. Further, the specifying unit 582 may specify the desired contrast-enhanced image based on the heartbeat cycle at the time the fluoroscopic image was captured. By specifying the required contrast-enhanced image based on the heartbeat cycle at the time of imaging, the timing of expansion and contraction of the blood vessel can be matched, eliminating the deviation between the fluoroscopic image input to the first learning model 562 and the contrast-enhanced image input to the second learning model 563.
  • the control unit 51 inputs a plurality of cross-sectional images C1 to Cn to the third learning model 564.
  • Among the plurality of cross-sectional images of the blood vessel obtained by the pullback operation, the third learning model 564 can detect in which cross-sectional images a side branch exists and in which cross-sectional images it does not. Also, when a side branch exists, the third learning model 564 can output the position of the side branch on the cross-sectional image.
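  • A sketch of this scan over the pullback frames, assuming the five-class segmentation of the second example (classes 4 and 5 for the side branch); the model interface and tensor shapes are assumptions.

```python
# Sketch: record which pullback frames C1..Cn contain a side branch.
import torch

SIDE_BRANCH_CLASSES = (4, 5)  # side-branch (plaque + media) and side-branch lumen

def frames_with_side_branch(model, frames):
    """frames: iterable of (1, H, W) tensors; returns indices of frames with a side branch."""
    hits = []
    model.eval()
    with torch.no_grad():
        for i, frame in enumerate(frames):
            classes = model(frame.unsqueeze(0)).argmax(dim=1)  # per-pixel classes
            if any((classes == c).any() for c in SIDE_BRANCH_CLASSES):
                hits.append(i)
    return hits
```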
  • FIG. 8 is a diagram showing an example of a blood vessel path estimation method.
  • The identifying unit 582 identifies a desired contrast-enhanced image A from the plurality of contrast-enhanced images {A} of the blood vessel based on the imaging angle of the fluoroscopic image B.
  • the electrocardiogram information may be used to specify the desired contrast-enhanced image A based on the heartbeat cycle.
  • The control unit 51 acquires the fluoroscopic image B of the blood vessel immediately before the plurality of cross-sectional images are captured (that is, immediately before the pullback), inputs the acquired fluoroscopic image B to the first learning model 562, and detects the position of the catheter 10 (the IVUS path start position P and the existence position P' of the catheter 10).
  • the path estimation unit 581 estimates the path of the blood vessel on the contrast image A based on the detected position of the catheter 10 .
  • The estimated path includes a blood vessel start position V corresponding to the IVUS path start position P and a blood vessel path V' corresponding to the existence position P' of the catheter 10.
  • For the estimation, a method such as non-rigid registration may be used, but the method is not limited to non-rigid registration.
  • Since no device is inserted into the blood vessel to be treated in the contrast-enhanced image A, the actual shape of the blood vessel appears faithfully, so the blood vessel path can be estimated accurately on the contrast-enhanced image.
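  • As a much-simplified sketch of this step: assuming the fluoroscopic and contrast-enhanced images were captured at the same angle and are roughly aligned, each detected catheter pixel can be snapped to the nearest point of a vessel centerline extracted from the contrast-enhanced image. A real system would use something like the non-rigid registration mentioned above; this nearest-point matching is only an illustration.

```python
# Illustrative path estimation by nearest-centerline matching (an assumption;
# the publication itself points to methods such as non-rigid registration).
import numpy as np

def estimate_vessel_path(catheter_pts: np.ndarray,
                         centerline_pts: np.ndarray) -> np.ndarray:
    """Both args: (N, 2) arrays of (row, col) pixel coordinates.
    Returns, for each catheter point, the closest centerline point."""
    # Pairwise distances, shape (num_catheter, num_centerline).
    d = np.linalg.norm(
        catheter_pts[:, None, :] - centerline_pts[None, :, :], axis=-1)
    return centerline_pts[d.argmin(axis=1)]
```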
  • FIG. 9 is a diagram showing an example of a method for detecting side branches on a contrast image.
  • The control unit 51 acquires the contrast-enhanced image A of the blood vessel and inputs it to the second learning model 563, whereby the second learning model 563 can output the positions of the side branches of the blood vessel.
  • the side branches estimated on the contrast-enhanced image A are S1 to S5.
  • The path estimated on the contrast-enhanced image A consists of the blood vessel start position V and the blood vessel path V'.
  • the control unit 51 detects only the side branches on the blood vessel route V' among the side branches S1 to S5.
  • In the example of FIG. 9, the side branches S1, S2, and S3 are on the blood vessel path V', while the side branches S4 and S5 are away from it. In this way, only the side branches along the path of the blood vessel on the contrast-enhanced image A can be detected.
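  • A small sketch of this filtering step, keeping only side-branch positions close to the estimated path; the distance threshold is an illustrative assumption.

```python
# Sketch: keep side branches near the estimated vessel path (S1-S3 above),
# discard those away from it (S4, S5). The threshold value is an assumption.
import numpy as np

def side_branches_on_path(branch_pts, path_pts, max_dist=10.0):
    """branch_pts: (B, 2) side-branch positions; path_pts: (P, 2) path pixels."""
    keep = []
    for b in np.asarray(branch_pts):
        # Distance from this branch position to the nearest path point.
        if np.linalg.norm(np.asarray(path_pts) - b, axis=1).min() <= max_dist:
            keep.append(b)
    return np.array(keep)
```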
  • FIG. 10 is a diagram showing an example of correspondence between a side branch on a contrast image and a side branch on a cross-sectional image.
  • side branches detected on contrast-enhanced image A are assumed to be S1, S2, and S3.
  • Reference character V indicates the start position of the blood vessel, and the part of the blood vessel along which the ultrasonic sensor moves during the pullback operation is drawn thick for convenience.
  • The control unit 51 inputs the plurality of cross-sectional images C1 to C10 to the third learning model 564, and the third learning model 564 detects in which of the cross-sectional images C1 to C10 a side branch exists and in which no side branch exists.
  • In the example of FIG. 10, side branches are present in the cross-sectional images C4, C6, and C9, and no side branch is present in the other cross-sectional images.
  • The associating unit 583 associates the side branch on the contrast-enhanced image A with the side branch on the cross-sectional image Ci. More specifically, based on the blood vessel path estimated by the path estimation unit 581, side branches along the path are identified on the contrast-enhanced image A, and each identified side branch can be associated with the side branch on the corresponding cross-sectional image Ci.
  • the start position V of the blood vessel on the contrast image A is associated with the cross-sectional image C1 on the tip (distal) side.
  • For example, with the cross-sectional image C1 as the reference, the cross-sectional images at the position corresponding to the distance d3 from the start position V to the side branch S3 are specified, and among the specified cross-sectional images, the one in which a side branch is present is associated with that side branch.
  • In the example of FIG. 10, the cross-sectional image C4 is associated with the side branch S3.
  • the associating unit 583 can associate the side branch on the contrast image with the side branch on the cross-sectional image based on the relative position of the side branch on the path of the blood vessel in the contrast image.
  • The associating unit 583 may also associate the side branch on the contrast-enhanced image with the side branch on the cross-sectional image based on the size of the side-branch region on the contrast-enhanced image and the size of the side branch on the cross-sectional image. For example, side branches whose size difference is within a predetermined threshold can be associated.
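  • One simple way to realize this association is sketched below, under the assumption that pairing by order along the path is sufficient: side branches on the contrast-enhanced image are sorted by their distance along the estimated path from the start position V and paired one-to-one with the side-branch frames in pullback order. The publication describes matching by relative position (and optionally by size); this order-based pairing is only an illustration.

```python
# Sketch: pair contrast-image side branches (ordered along the path from V)
# with pullback frames in which a side branch was detected.
def associate_side_branches(branch_arclens, branch_ids, frame_ids):
    """branch_arclens: distance of each side branch along the path from V;
    branch_ids: their labels; frame_ids: side-branch frames in pullback order."""
    ordered = [b for _, b in sorted(zip(branch_arclens, branch_ids))]
    return dict(zip(frame_ids, ordered))   # i-th branch <-> i-th frame

# Example from FIG. 10: frames C4, C6, C9 carry side branches; S3 is nearest V.
print(associate_side_branches([0.2, 0.5, 0.8], ["S3", "S2", "S1"],
                              ["C4", "C6", "C9"]))
# {'C4': 'S3', 'C6': 'S2', 'C9': 'S1'}
```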
  • FIG. 11 is a diagram showing an example of association between a blood vessel other than a side branch on a contrast-enhanced image and a cross-sectional image.
  • three lateral branches S1 to S3 are detected on the contrast-enhanced image A.
  • the coordinates of the side branches S1-S3 can be represented, for example, by (x1, y1), (x2, y2), and (x3, y3), respectively.
  • FIG. 11 shows the correspondence between the cross-sectional images and each position of the blood vessel: between the blood vessel start position V and the side branch S3, between the side branches S3 and S2, between the side branches S2 and S1, and between the side branch S1 and the end position of the blood vessel (the position on the base end (proximal) side reached by the ultrasonic sensor).
  • The cross-sectional image C1 corresponds to the start position V in the contrast-enhanced image A, and the cross-sectional image C4 corresponds to the side branch S3.
  • The cross-sectional image C5 can be associated with the blood vessel between the side branches S3 and S2, and the cross-sectional images C7 and C8 can be associated with the blood vessel between the side branches S2 and S1.
  • The cross-sectional image C10 can be associated with the end position of the blood vessel.
  • In this way, the associating unit 583 can associate a position of the blood vessel between side branches on the contrast-enhanced image with a cross-sectional image in which no side branch is present and which was captured between the cross-sectional images in which side branches are present.
  • As a comparative method, matching by extracting the position of the IVUS catheter from the images is conceivable. Specifically: (1) IVUS co-registration is performed between the device insertion and the fluoroscopy immediately before the pullback shown in FIG. 7; or (2) a fluoroscopic image is captured during the pullback, a marker is detected from that fluoroscopic image, and the corresponding position on the contrast-enhanced image is estimated based on the amount of movement of the detected marker in the blood vessel.
  • However, such methods require extra procedures that are not normally performed; specifically, fluoroscopic images must be captured during IVUS image acquisition.
  • In the present embodiment, the side branch on the contrast-enhanced image and the side branch on the cross-sectional image of the blood vessel are associated with each other, so there is no need to capture a fluoroscopic image during the pullback, and the increase in risk to the patient and the operator caused by the unnecessary procedures described above can be avoided.
  • Furthermore, since the side branch on the contrast-enhanced image is associated with the side branch on the cross-sectional image of the blood vessel rather than relying on the amount of movement of the catheter, the corresponding position on the contrast-enhanced image can be specified accurately and easily for each cross-sectional image of the blood vessel.
  • FIG. 12 is a diagram showing a second example of the configuration of the information processing device 50. The difference from the information processing device 50 of the first example illustrated in FIG. 2 is that the storage unit 56 stores a learning model 565 described below. Since the control unit 51, the communication unit 52, the interface unit 53, the recording medium reading unit 54, the memory 55, the storage unit 56, and the display control unit 57 are the same as those in the first example, description thereof is omitted.
  • FIG. 13 is a diagram showing an example of the configuration of the learning model 565.
  • the learning model 565 includes an input layer 565a, an intermediate layer 565b, and an output layer 565c, and can be configured with, for example, LSTM (Long Short Term Memory).
  • The learning model 565 comprises LSTM blocks L1, L2, ..., L10. Note that the number of LSTM blocks is not limited to ten.
  • The contrast-enhanced image A and the fluoroscopic image B input to the learning model 565 are, for example, images captured at the same imaging angle, and the fluoroscopic image B is the image immediately before the pullback. As shown in FIG. 13, the cross-sectional images (IVUS images) are C1 to C10. When the contrast-enhanced image A, the fluoroscopic image B, and the cross-sectional image C1 are input, the learning model 565 outputs the presence or absence of a side branch in the cross-sectional image C1 (no side branch in the example of FIG. 13).
  • When the contrast-enhanced image A, the fluoroscopic image B, and the cross-sectional image C2 are input, the learning model 565 outputs the presence or absence of a side branch in the cross-sectional image C2 (no side branch in the example of FIG. 13). Subsequently, the contrast-enhanced image A, the fluoroscopic image B, and the cross-sectional images C3 to C10 are similarly input to the learning model 565, which outputs the presence or absence of a side branch for each.
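  • A minimal sketch of such an LSTM-based model: for each cross-sectional image Ci, features of the contrast-enhanced image A, the fluoroscopic image B, and Ci are concatenated and fed into one LSTM step, which outputs a per-frame side-branch logit. The feature extractor, dimensions, and output form are illustrative assumptions.

```python
# Illustrative LSTM variant (architecture details are assumptions).
import torch
import torch.nn as nn

class SideBranchLSTM(nn.Module):
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        # Shared toy feature extractor for 1-channel images; a real model
        # would use a deeper backbone.
        self.feat = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                  nn.Linear(64, feat_dim))
        self.lstm = nn.LSTM(3 * feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # logit: side branch present in Ci?

    def forward(self, contrast, fluoro, frames):
        # contrast/fluoro: (batch, 1, H, W); frames: (batch, seq, 1, H, W)
        b, t = frames.shape[:2]
        fa = self.feat(contrast)           # (batch, feat_dim)
        fb = self.feat(fluoro)
        fc = self.feat(frames.flatten(0, 1)).view(b, t, -1)
        x = torch.cat([fa.unsqueeze(1).expand(-1, t, -1),
                       fb.unsqueeze(1).expand(-1, t, -1), fc], dim=-1)
        out, _ = self.lstm(x)              # one LSTM step per frame C1..Cn
        return self.head(out).squeeze(-1)  # (batch, seq) per-frame logits
```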
  • FIG. 14 is a diagram showing an example of correspondence information output by the learning model 565.
  • The correspondence information consists of cross-sectional images, side branches, and coordinates.
  • Side branches S1 to S3 are side branches detected on the contrast-enhanced image, and the coordinates indicate the positions of the side branches on the contrast-enhanced image.
  • In the example of FIG. 14, the side branches S1 to S3 are detected on the contrast-enhanced image, and their coordinates are (x1, y1), (x2, y2), and (x3, y3). The coordinates (0, 0) represent the absence of a side branch.
  • Cross-sectional images C9, C6, and C3 correspond to the side branches S1 to S3, respectively.
  • cross-sectional images other than the cross-sectional images C9, C6, and C3 correspond to the positions of blood vessels other than the side branches.
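  • For illustration, the correspondence information could be represented as a simple table like the following sketch, where (0, 0) marks frames without a side branch as described above; the concrete data structure and the numeric coordinate values are assumptions.

```python
# Sketch of the correspondence information as rows (values are illustrative).
from dataclasses import dataclass

@dataclass
class Correspondence:
    frame: str                    # cross-sectional image, e.g. "C9"
    side_branch: str | None       # matched side branch on the contrast image
    coords: tuple[float, float]   # side-branch coordinates on the contrast image

rows = [
    Correspondence("C9", "S1", (1.0, 2.0)),   # (x1, y1), illustrative values
    Correspondence("C6", "S2", (3.0, 4.0)),   # (x2, y2)
    Correspondence("C3", "S3", (5.0, 6.0)),   # (x3, y3)
    Correspondence("C1", None, (0.0, 0.0)),   # (0, 0): no side branch
]
```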
  • The control unit 51 acquires a contrast-enhanced image of the blood vessel, a plurality of cross-sectional images along the axial direction of the blood vessel, and a fluoroscopic image of the blood vessel immediately before the plurality of cross-sectional images are captured.
  • The control unit 51 can input the acquired contrast-enhanced image, the plurality of cross-sectional images, and the fluoroscopic image to the learning model 565 and output correspondence information that associates the side branches on the contrast-enhanced image with the side branches on the cross-sectional images.
  • FIG. 15 is a diagram showing an example of a method of generating the learning model 565. The learning model 565 can be generated, for example, as follows.
  • the control unit 51 acquires blood vessel contrast image data (contrast image), fluoroscopic image data (fluoroscopic image), cross-sectional image data (cross-sectional image), and training data including correspondence information.
  • the training data may be collected and stored in the server 200 and acquired from the server 200, for example.
  • The control unit 51 inputs the contrast-enhanced image data, the fluoroscopic image data, and the cross-sectional image data included in the training data to the learning model 565, acquires the correspondence information output by the learning model 565, and adjusts the parameters of the learning model 565 so that the value of a loss function based on the output correspondence information and the correspondence information serving as teacher data is minimized.
  • In this way, the control unit 51 can generate the learning model 565 so that, when a contrast-enhanced image of the blood vessel, a plurality of cross-sectional images along the axial direction of the blood vessel, and a fluoroscopic image of the blood vessel immediately before the plurality of cross-sectional images are captured are input, the learning model 565 outputs correspondence information that associates the side branches on the contrast-enhanced image with the side branches on the cross-sectional images.
  • FIG. 16 is a diagram showing an example of the display screen 300 of the processing result by the information processing device 50.
  • the processing result by the information processing device 50 is stored in the storage unit 56, and the display control unit 57 can display the processing result on the display device 30 according to the operation of a user such as a doctor.
  • the display screen 300 displays a column for selecting patient information (for example, patient name, patient ID, etc.) and a column for selecting the examination date of the examination for the selected patient.
  • the display screen 300 has a contrast image display screen 301, a blood vessel cross-sectional image (cross-sectional image) display screen 302, and a longitudinal cross-sectional image display screen 303 along the axial direction of the blood vessel.
  • A contrast-enhanced image is displayed on the contrast image display screen 301. On the contrast-enhanced image, the side branches of the blood vessel (side branches 1, 2, 3, and 4 in the example of FIG. 16), a cursor 305 for designating a position on the blood vessel, and a regulation icon 304 for setting the movement range of the cursor 305 are displayed.
  • a cross-sectional image of a blood vessel at a position designated by a cursor 305 is displayed on the cross-sectional image display screen 302 .
  • a longitudinal cross-sectional image display screen 303 displays a cross-sectional image of the blood vessel in the axial direction, and identifiers of the side branches 1 to 4 are displayed at positions corresponding to the side branches 1 to 4 of the blood vessel in the contrast image.
  • The cursor 305 and the regulation icon 304 are also displayed at positions on the longitudinal cross-sectional image corresponding to their positions on the contrast-enhanced image.
  • The display control unit 57 displays the contrast-enhanced image, the longitudinal cross-sectional image, and the transverse cross-sectional image of the blood vessel, displays a first identifier for identifying the side branch on the contrast-enhanced image, and displays an identifier for the corresponding side branch on the longitudinal cross-sectional image and the transverse cross-sectional image.
  • When the cursor 305 is moved, or the longitudinal cross-sectional image is scrolled by dragging or the like, in any one of the contrast-enhanced image, the longitudinal cross-sectional image, and the transverse cross-sectional image, the positions in the other two images are also moved or the images are changed accordingly. For example, when the cursor 305 is moved toward the side branches 3, 2, and 1 on the contrast-enhanced image, the cursor 305 on the longitudinal cross-sectional image moves toward the side branches 3, 2, and 1 in interlock with that movement, and the transverse cross-sectional image also changes.
  • In this way, the display control unit 57 can display the position of the hollow organ on the contrast-enhanced image in conjunction with at least one of the imaging time point of the transverse cross-sectional image and the axial position of the longitudinal cross-sectional image. This allows a user such as a doctor to easily grasp the cross-sectional image at a desired position of the blood vessel in the contrast-enhanced image.
  • The interlocking range can be changed by moving the regulation icon 304 on the contrast-enhanced image or the longitudinal cross-sectional image.
  • The display control unit 57 can set the interlocking range according to the position of the regulation icon 304.
  • FIG. 17 is a flowchart showing an example of a processing procedure by the information processing device 50 of the first example.
  • the main body of processing is assumed to be the control unit 51 .
  • the control unit 51 acquires a contrast image of the hollow organ (blood vessel) (S11), and acquires a fluoroscopic image of the hollow organ immediately before the pullback (S12).
  • the control unit 51 acquires a cross-sectional image of the hollow organ during pullback (S13).
  • the control unit 51 inputs the fluoroscopic image to the first learning model 562 and detects the position of the catheter (S14). Detecting the position of the catheter is illustrated in FIG. 3, for example.
  • the control unit 51 identifies a desired contrast-enhanced image from the acquired contrast-enhanced images (S15).
  • The desired contrast-enhanced image may be specified, for example, as an image captured at the same imaging angle as the fluoroscopic image.
  • the control unit 51 estimates the vascular path in the specified contrast image based on the position of the catheter on the fluoroscopic image (S16). Estimation of the vascular path is illustrated in FIG. 8, for example.
  • The control unit 51 inputs the identified contrast-enhanced image to the second learning model 563 to detect side branches (S17). Side branch detection on the contrast-enhanced image is illustrated in FIG. 9, for example.
  • the control unit 51 inputs the cross-sectional image to the third learning model 564 to detect a side branch (S18). Side branch detection is illustrated in FIG. 6, for example.
  • The control unit 51 associates the side branch on the contrast-enhanced image with the cross-sectional image in which the side branch is detected (S19).
  • the correspondence is illustrated in FIG. 10, for example.
  • the control unit 51 associates the position of the hollow organ other than the side branch on the contrast image with the cross-sectional image in which the side branch is not detected (S20).
  • the correspondence is illustrated in FIG. 11, for example.
  • The control unit 51 displays the side branch on the contrast-enhanced image and the side branch on the cross-sectional image in association with each other so that they can be identified (S21), and ends the process.
  • FIG. 18 is a flowchart showing an example of a processing procedure by the information processing device 50 of the second example.
  • the control unit 51 acquires a contrast image of the hollow organ, a fluoroscopic image of the hollow organ immediately before the pullback, and a cross-sectional image of the hollow organ during the pullback (S31).
  • the control unit 51 identifies a desired contrast-enhanced image from the acquired contrast-enhanced images (S32).
  • The desired contrast-enhanced image may be specified, for example, as an image captured at the same imaging angle as the fluoroscopic image.
  • the control unit 51 inputs the specified contrast image, the acquired fluoroscopic image, and the cross-sectional image to the learning model 565 (S33), and acquires the correspondence information output by the learning model 565 (S34). Correspondence information is illustrated in FIG. 14, for example.
  • The control unit 51 displays the side branch on the contrast-enhanced image and the side branch on the cross-sectional image in an identifiable manner in association with each other (S35), and ends the process.
  • FIG. 19 is a diagram showing an example of processing for generating the learning model 565.
  • The control unit 51 acquires contrast-enhanced images of the hollow organ (S41), and acquires training data including the fluoroscopic image of the hollow organ immediately before the pullback, the cross-sectional images of the hollow organ during the pullback, the contrast-enhanced image specified from among the acquired contrast-enhanced images, and correspondence information that associates the side branches on the contrast-enhanced image with the side branches on the cross-sectional images (S42).
  • the control unit 51 sets the initial values of the parameters of the learning model 565 (S43), and inputs the contrast image, the fluoroscopic image, and the cross-sectional image into the learning model 565 based on the training data (S44).
  • the control unit 51 adjusts the parameters so that the loss function based on the correspondence information output by the learning model 565 and the correspondence information included in the training data is minimized (S45).
  • the control unit 51 determines whether or not the value of the loss function is within the allowable range (S46), and if it is not within the allowable range (NO in S46), continues the processing from step S44 onwards. If the value of the loss function is within the allowable range (YES in S46), the control unit 51 stores the generated learning model 565 in the storage unit 56 (S47), and terminates the process.
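  • A compact sketch of this training loop (S43 to S47), assuming PyTorch, a binary per-frame loss, and a model with the interface sketched earlier; the optimizer, loss function, and tolerance value are illustrative assumptions.

```python
# Sketch of the S43-S47 loop: initialise, train to minimise the loss against
# the teacher correspondence information, stop once within tolerance.
import torch

def train(model, loader, tolerance=1e-3, max_epochs=100):
    opt = torch.optim.Adam(model.parameters())           # S43: initial parameters
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(max_epochs):
        total = 0.0
        for contrast, fluoro, frames, target in loader:  # S44: feed training data
            opt.zero_grad()
            loss = loss_fn(model(contrast, fluoro, frames), target)
            loss.backward()
            opt.step()                                   # S45: adjust parameters
            total += loss.item()
        if total / len(loader) <= tolerance:             # S46: within tolerance?
            break
    return model                                         # S47: store the model
```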
  • In the embodiments described above, the information processing device 50 performs the side-branch association, but the configuration is not limited to this.
  • For example, the information processing device 50 may be used as a client device, with an external server performing the side-branch association and the processing result being acquired from the server.
  • Each learning model may be generated by a device other than the information processing device 50, and the learning model may be acquired from the other device.
  • the processing in the information processing device 50 may be distributed among a plurality of information processing devices.
  • Since the X-ray diagnostic apparatus 80 can capture images from two directions at the same time, side-branch detection omissions can be reduced and accuracy improved by using the side-branch detection information obtained from the two images.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention pertains to a computer program, an information processing device, an information processing method, and a learning model generation method with which the position corresponding to each cross-sectional image of a hollow organ can easily be identified on a contrast-enhanced image. The computer program causes a computer to execute a process of acquiring a contrast-enhanced image of a hollow organ, acquiring a plurality of cross-sectional images along the axial direction of the hollow organ, detecting a side branch of the hollow organ based on the contrast-enhanced image, identifying the cross-sectional image in which the side branch is present based on the plurality of cross-sectional images, and associating the side branch on the contrast-enhanced image with the side branch on the cross-sectional image.
PCT/JP2022/043872 2021-11-30 2022-11-29 Computer program, information processing device, information processing method, and learning model generation method WO2023100838A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021194311 2021-11-30
JP2021-194311 2021-11-30

Publications (1)

Publication Number Publication Date
WO2023100838A1 true WO2023100838A1 (fr) 2023-06-08

Family

ID=86612300

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/043872 WO2023100838A1 (fr) Computer program, information processing device, information processing method, and learning model generation method

Country Status (1)

Country Link
WO (1) WO2023100838A1 (fr)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009528147A (ja) * 2006-03-01 2009-08-06 ザ ブリガム アンド ウイメンズ ホスピタル, インク. Arterial imaging system
JP2012061086A (ja) * 2010-09-15 2012-03-29 Toshiba Corp Medical image display system
JP2012213659A (ja) * 2006-03-01 2012-11-08 Toshiba Corp Image processing apparatus
WO2015045368A1 (fr) * 2013-09-26 2015-04-02 テルモ株式会社 Image processing device, image display system, imaging system, image processing method, and program
JP2015109968A (ja) * 2013-11-13 2015-06-18 Pie Medical Imaging B.V. Method and system for registering intravascular images
WO2017130927A1 (fr) * 2016-01-26 2017-08-03 テルモ株式会社 Image display device and control method therefor
US20180310830A1 (en) * 2017-04-26 2018-11-01 International Business Machines Corporation Intravascular catheter including markers
WO2020237024A1 (fr) * 2019-05-21 2020-11-26 Gentuity, Llc Systems and methods for OCT-based patient treatment
JP2021062200A (ja) * 2019-09-17 2021-04-22 Canon U.S.A., Inc. Construction or reconstruction of 3D structures

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22901270

Country of ref document: EP

Kind code of ref document: A1