WO2020194942A1 - Blood vessel recognition device, blood vessel recognition method, and blood vessel recognition system - Google Patents

Blood vessel recognition device, blood vessel recognition method, and blood vessel recognition system

Info

Publication number
WO2020194942A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
vessel
captured
vascular
surgical field
Prior art date
Application number
PCT/JP2019/050141
Other languages
French (fr)
Japanese (ja)
Inventor
悦朗 波多野
寛 鳥口
加藤 淳
小林 健二
健史 島田
北岡 義隆
朋之 齊藤
Original Assignee
Hyogo College of Medicine (educational corporation)
Panasonic i-PRO Sensing Solutions Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hyogo College of Medicine and Panasonic i-PRO Sensing Solutions Co., Ltd.
Publication of WO2020194942A1 publication Critical patent/WO2020194942A1/en

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes, combined with photographic or television appliances
    • A61B 1/045: Control thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00: Simulators for teaching or training purposes

Definitions

  • The present disclosure relates to a vessel recognition device, a vessel recognition method, and a vessel recognition system that recognize a vessel image appearing in a captured image of a surgical field.
  • In DSA (digital subtraction angiography), a subtraction image in which only blood vessels appear and a mask image or live image in which at least bones appear are used; only the subtraction image is emphasized, and the subtraction image and the mask image or live image are then added together. A DSA post-processing blood vessel highlighting device is disclosed that thereby obtains an image in which the positional relationship between a blood vessel and a bone can be clearly identified.
  • The liver is known as an organ in which the above-mentioned vessels are intricately intertwined within its tissue.
  • The hepatic veins and the Glisson capsules in the liver run in a mesh pattern while crossing one another intricately, and this running pattern varies from person to person.
  • Blood vessels are one example of the vessels referred to in the present disclosure.
  • Even with preoperative imaging such as CT (Computed Tomography) scans and MRI (Magnetic Resonance Imaging), it is difficult to fully grasp these individual running patterns in advance.
  • An object of the present disclosure is to provide a vessel recognition device, a vessel recognition method, and a vessel recognition system that notify the location of a vessel appearing on a dissected surface and support the provision of safe and secure surgery.
  • The present disclosure provides a vessel recognition device connected to an imaging device that images a surgical field, including: an image input unit that inputs a captured image of the surgical field from the imaging device; an image processing unit that recognizes a vessel appearing in the captured image based on the input captured image and generates a composite image in which information teaching the recognized vessel is superimposed on the captured image; and an image output unit that outputs the generated composite image to a monitor.
  • The present disclosure also provides a vessel recognition method in a vessel recognition device connected to an imaging device that images a surgical field, the method having: a step of inputting a captured image of the surgical field from the imaging device; a step of recognizing a vessel appearing in the captured image based on the captured image; a step of generating a composite image in which information teaching the recognized vessel is superimposed on the captured image; and a step of outputting the generated composite image to a monitor.
  • The present disclosure further provides a vessel recognition system in which an imaging device that images a surgical field and a vessel recognition device are connected to each other, wherein the vessel recognition device inputs a captured image of the surgical field from the imaging device, recognizes a vessel appearing in the captured image based on the input captured image, generates a composite image in which information teaching the recognized vessel is superimposed on the captured image, and outputs the generated composite image to a monitor.
  • According to the present disclosure, a vessel recognition function equivalent to the tacit knowledge of a skilled doctor, who quickly recognizes a vessel appearing in the surgical field, is reproduced in real time and with high accuracy, and the location of a vessel appearing on the incision surface is notified, which supports the provision of safe and secure surgery.
  • A diagram showing the specific configuration of the medical image projection system
  • A flowchart showing an example of the operation procedure of the medical image projection system according to the first embodiment
  • A diagram showing an image of an organ onto which the vessel specific image is projected over a vessel detected by the vessel image recognition function
  • A diagram showing an overview of the endoscope system
  • A flowchart showing an example of the operation procedure of the endoscope system according to the second embodiment
  • A diagram showing the registered contents of the scene determination table representing scenes when performing a laparotomy according to Modification 2 of Embodiment 1
  • A diagram showing a screen of the monitor on which the ICG fluorescence image according to Modification 4 of Embodiment 1 and the vessel specific image partially overlap and are superimposed on a captured image including an organ
  • FIG. 1 is a diagram showing a schematic configuration example of the medical image projection system 5 according to the first embodiment.
  • The medical image projection system 5 is a system that projection-maps a fluorescence image of a tumor portion, in which a fluorescent agent such as ICG (Indocyanine Green) has accumulated, directly onto the organ, allowing a surgeon such as a doctor to visualize the boundaries between blood-flow and ischemic regions in real time without taking their eyes off the surgical field.
  • the vessels include blood vessels through which blood flows in the body and lymph vessels through which lymph fluid flows.
  • The Glisson capsule is a tissue sheath that encloses the blood vessels and bile ducts running through the liver. In the first embodiment, the Glisson capsule is treated in the same manner as a vessel.
  • the medical image projection system 5 includes an image pickup irradiation device 20, a camera control unit 10 (CCU: Camera Control Unit) as an example of a vessel recognition device, and a monitor 30.
  • The imaging irradiation device 20 irradiates the surgical field with white light (that is, visible light), receives the reflected light reflected by the affected part (for example, an organ) of the subject (for example, a person), and captures a visible light image.
  • The imaging irradiation device 20 also irradiates infrared light to excite the fluorescent agent accumulated in the affected area or the like, receives infrared light including the fluorescence generated by the fluorescence emission of the fluorescent agent, and captures a fluorescence image. Further, the imaging irradiation device 20 projects a vessel specific image, which teaches the position of a vessel, onto the affected part (for example, an organ).
  • the image pickup irradiation device 20 includes a camera head 21, a projector 22, and a light source 23.
  • The camera head 21, as an example of the imaging device, includes a visible light sensor unit 24A, which includes a visible light image sensor 25A and an IR cut filter 26A, and an infrared light sensor unit 24B, which includes an infrared light image sensor 25B and a visible light cut filter 26B.
  • the IR cut filter 26A blocks (cuts) the excitation light (IR light: Infrared Light) having an IR wavelength band that is irradiated from the light source 23 to the affected area (for example, an organ) and reflected by the affected area.
  • the visible light image sensor 25A receives visible light that has passed through the IR cut filter 26A (that is, visible light reflected by the affected area) and captures a visible light image.
  • The visible light cut filter 26B blocks (cuts) the visible light that is irradiated from the light source 23 onto the affected area (for example, an organ) and reflected by the affected area. Further, the visible light cut filter 26B blocks (cuts) not only visible light but also IR light in the wavelength band of the excitation light (for example, 690 nm to 820 nm).
  • the infrared light image sensor 25B receives fluorescence that has passed through the visible light cut filter 26B (that is, fluorescence emitted based on the fluorescence emission of the fluorescent agent accumulated in the affected area), and captures a fluorescence image.
  • The camera head 21 captures, for example, the visible light image and the fluorescence image in a time-division manner, as sketched below.
  • When capturing a visible light image, the camera head 21 switches the optical filter to the IR cut filter 26A, and the visible light image sensor 25A receives the visible light that has passed through the IR cut filter 26A and captures a visible light image.
  • When capturing a fluorescence image, the camera head 21 switches the optical filter to the visible light cut filter 26B, and the infrared light image sensor 25B receives the fluorescence (see above) that has passed through the visible light cut filter 26B and captures a fluorescence image.
  • When the camera head 21 has a configuration in which the IR cut filter 26A and the visible light cut filter 26B can be used together, the visible light image and the fluorescence image can be captured at the same time.
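  • The time-division capture described above can be pictured with the following minimal sketch. The camera object and its filter-switching methods are hypothetical stand-ins for the camera head 21 hardware, not an API defined by this disclosure; the sketch only illustrates the alternation between the two filter/sensor pairs.

```python
# Illustrative only: `camera` and its methods are hypothetical.
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str     # "visible" or "fluorescence"
    data: object  # raw sensor readout

def capture_time_division(camera):
    """Alternately capture a visible light image (IR cut filter 26A +
    visible light image sensor 25A) and a fluorescence image (visible
    light cut filter 26B + infrared light image sensor 25B)."""
    while True:
        camera.select_filter("IR_CUT_26A")       # pass visible light, block IR excitation
        yield Frame("visible", camera.read("visible_25A"))
        camera.select_filter("VISIBLE_CUT_26B")  # pass fluorescence, block visible + excitation band
        yield Frame("fluorescence", camera.read("infrared_25B"))
```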
  • As the image sensors constituting the visible light image sensor 25A and the infrared light image sensor 25B, for example, CMOS (Complementary Metal Oxide Semiconductor) sensors are used, but CCD (Charge Coupled Device) sensors may be used instead.
  • the light source 23 irradiates excitation light in the IR band (for example, light having a wavelength of 760 nm) for exciting a fluorescent agent (for example, indocyanine green). Further, the light source 23 irradiates white light (for example, light having a wavelength of 700 nm or less).
  • the light source 23 can emit IR excitation light and white light in a time-division manner or at the same time.
  • When a vessel is detected and recognized in the captured image of the affected area captured by the camera head 21, the projector 22 generates a vessel specific image for teaching (in other words, characterizing) the shape of the vessel, and projects the projected light corresponding to the vessel specific image onto the affected area in the surgical field.
  • The projector 22 may acquire the vessel specific image from the camera control unit 10.
  • The projector 22 projects the projected light corresponding to the vessel specific image so that the position of the vessel specific image coincides with the position of the vessel appearing on the surface of the affected area, or so that the vessel specific image covers the outer shape of the vessel.
  • Hereinafter, the projection of the projected light corresponding to the vessel specific image is simply referred to as the projection of the vessel specific image.
  • The projector 22 may be a projector that employs any method, such as a DLP (Digital Light Processing) method, a 3LCD (Liquid Crystal Display) method, or an LCOS (Liquid Crystal On Silicon) method.
  • the camera control unit 10 is electrically connected to the imaging irradiation device 20, the monitor 30, and the mouse 45 (see FIG. 2), and controls the operation of these devices in an integrated manner.
  • the camera control unit 10 includes an image input unit 11, an image processing unit 12, and an image output unit 13.
  • the image input unit 11 inputs the data of the captured image at the time of surgery captured by the camera head 21.
  • The image input unit 11 may use an interface capable of transferring video data at high speed, such as HDMI (registered trademark) (High-Definition Multimedia Interface) or USB (Universal Serial Bus) Type-C.
  • the camera control unit 10 has a processor and a built-in memory, and the processor executes a program stored in the built-in memory to specifically realize the functions of the image processing unit 12 and the image output unit 13.
  • the processor may be a GPU (Graphical Processing Unit) suitable for image processing.
  • Instead of the GPU, the image processing unit 12 may be composed of a dedicated electronic circuit designed with an MPU (Micro Processing Unit), a CPU (Central Processing Unit), an ASIC (Application Specific Integrated Circuit), or the like, or an electronic circuit designed to be reconfigurable, such as an FPGA (Field Programmable Gate Array).
  • The image processing unit 12 is equipped with artificial intelligence (AI: Artificial Intelligence) and has an image recognition function for detecting and recognizing a vessel appearing in the captured image input by the image input unit 11.
  • the image processing unit 12 can realize the above-mentioned image recognition function by using a trained model equipped with artificial intelligence.
  • the trained model is pre-generated before the start of actual operation of the medical image projection system 5.
  • the procedure for generating the trained model is as follows.
  • The user (for example, an administrator or an operator) of the medical image projection system 5 prepares captured-image data recorded during surgery.
  • This captured-image data also includes audio data in which a doctor explains, by voice, findings including the names of parts such as organs appearing in each captured image.
  • For example, captured-image data for a total of about 100,000 images from 100 cases, each with findings including the names of the organs appearing in the image explained by voice, is prepared.
  • The user of the medical image projection system 5 refers to the voice of the findings explained by the doctor in the acquired images and performs annotation work, such as adding information teaching the vessel (for example, a frame image) to each vessel to be detected in each captured image; the data of each captured image on which this annotation work has been performed is prepared as teacher data.
  • In this way, a large amount of teacher data for detecting and recognizing the vessels appearing in captured images is prepared.
  • The image processing unit 12 takes as input the large amount of teacher data prepared in advance and the captured-image data recorded during surgery as described above, and performs machine learning such as deep learning.
  • Through this learning, the weighting coefficients between the neurons of each layer (input layer, intermediate layers, output layer) of the neural network constituting the trained model are optimized so that the vessels appearing in captured images during surgery can be detected and recognized accurately. A minimal training sketch follows.
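  • As a concrete illustration, the following is a minimal training sketch in Python/PyTorch. It assumes the annotated captured images have already been converted into (image, binary vessel mask) pairs; the network factory build_unet, the dataset, and the hyperparameters are assumptions made for illustration and are not specified by this disclosure.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_vessel_model(dataset, build_unet, epochs=10, lr=1e-4, device="cuda"):
    # build_unet is a hypothetical factory for a pixelwise segmentation
    # network (a U-Net is a common choice); any such model fits this sketch.
    model = build_unet(in_channels=3, out_channels=1).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()            # per-pixel vessel / non-vessel loss
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    model.train()
    for _ in range(epochs):
        for images, masks in loader:              # masks come from the annotation work
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()                       # adjusts the inter-neuron weighting coefficients
            optimizer.step()
    return model
```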
  • As a result of the machine learning described above, the image processing unit 12 generates a trained model suitable for detecting and recognizing a vessel. Using this trained model, the image processing unit 12 performs image recognition processing on any vessel that may appear in the captured image input from the image input unit 11 during surgery, and colors the area surrounding the detected vessel to generate a vessel specific image. The vessel specific image is superimposed on the captured image during surgery by the image output unit 13. Further, the image processing unit 12 may evaluate (score) the result of the image recognition processing of the detected vessel (for example, a matching probability) and update the trained model by machine learning in real time using the evaluation result. A sketch of the inference step follows.
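  • The inference step can be sketched as below: the trained model is run on a captured frame, its output is thresholded into a vessel region, and that region is filled with a single color to form the vessel specific image. The threshold value and the use of the mean probability as the score are illustrative assumptions.

```python
import numpy as np
import torch

def make_vessel_specific_image(model, frame_bgr, threshold=0.5, device="cuda"):
    # frame_bgr: HxWx3 uint8 captured image.
    tensor = torch.from_numpy(frame_bgr).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        prob = torch.sigmoid(model(tensor.to(device)))[0, 0].cpu().numpy()
    vessel_mask = prob > threshold                 # detected vessel region
    specific = np.zeros_like(frame_bgr)
    specific[vessel_mask] = (0, 255, 0)            # fill with a single green color (BGR)
    score = float(prob[vessel_mask].mean()) if vessel_mask.any() else 0.0
    return specific, vessel_mask, score            # score: a rough matching probability
```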
  • the image output unit 13 generates a composite image in which the vessel-specific image generated by the image processing unit 12 is superimposed on the captured image at the time of surgery input from the image input unit 11.
  • the image output unit 13 transmits the composite image data to the monitor 30.
  • This composite image data further includes, in addition to the captured image during surgery and the vessel specific image, the position information of the vessel within the composite image and color information for teaching the region of the vessel.
  • the image output unit 13 may transmit text data related to the vessel in addition to the above-mentioned composite image data.
  • the text data may include, for example, an evaluation value (score value) of an image recognition result of a vessel, an organ name, or the like.
  • the image output unit 13 transmits the above-mentioned composite image data and related text data to the projector 22.
  • the image output unit 13 may output the vessel-specific image generated by the image processing unit 12 to the projector 22 instead of the composite image data and the related text data.
  • Based on the position information of the vessel in the captured image included in the composite image data transmitted from the image output unit 13, the projector 22 projects the vessel specific image so that it matches the position of the vessel appearing in the affected area in the surgical field during surgery (that is, so that it overlaps the vessel). As the color information of the vessel specific image, for example, a single green color is used.
  • the monitor 30 displays the data of the composite image from the image output unit 13 (that is, the composite image in which the vessel specific image is superimposed on the captured image).
  • the monitor 30 is composed of, for example, a liquid crystal display or an organic EL (Electroluminescence) display, and has a display surface for displaying an image.
  • FIG. 2 is a diagram showing a specific configuration of the medical image projection system 5.
  • the medical image projection system 5 is installed, for example, in an operating room of a hospital. In the operating room, open surgery is performed on a patient hm who is lying on his back on the operating table 110.
  • the imaging irradiation device 20 irradiates the surgical field 130 (for example, the affected part of the patient) including the opened portion of the patient hm with white light, excitation light, and projected light, and also images the surgical field 130.
  • the medical image projection system 5 includes an imaging irradiation device 20, a camera control unit 10 and a monitor 30 shown in FIG. 1, as well as a mouse 45 that can be operated by a user such as a doctor.
  • the image pickup irradiation device 20 includes an optical unit 28 in addition to the camera head 21, the projector 22, and the light source 23 shown in FIG.
  • the optical unit 28 is arranged so as to face the camera head 21 and the projector 22.
  • the optical unit 28 transmits visible light and fluorescence from the surgical field toward the camera head 21, while reflecting the projected light emitted from the projector 22 and projecting it onto the surgical field.
  • the optical unit 28 is adjusted so that the optical axes of visible light and fluorescence toward the camera head 21 and the optical axes of the projected light emitted from the projector 22 toward the surgical field are aligned with each other.
  • The camera control unit 10 represents the position of the vessel included in the image captured by the camera head 21 as coordinates centered on, for example, the optical axis of the camera head 21, and transmits this coordinate information to the projector 22 together with the color information of the vessel specific image.
  • The projector 22 projects the vessel specific image at the position indicated by the received coordinate information. As a result, the projector 22 can map (project) the vessel specific image onto the vessel in the surgical field, as sketched below.
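  • Because the optical unit 28 aligns the two optical axes, the coordinate hand-off from camera to projector can be as simple as the following sketch. The resolutions and the pure per-axis scaling model are assumptions made for illustration; a real system would calibrate this mapping.

```python
def camera_to_projector(x_cam, y_cam,
                        cam_res=(1920, 1080), proj_res=(1920, 1080)):
    """Map a vessel position in camera pixels to projector pixels,
    assuming the camera and projector optical axes coincide."""
    # Express the vessel position relative to the camera optical axis (image center).
    dx = x_cam - cam_res[0] / 2
    dy = y_cam - cam_res[1] / 2
    # With aligned optical axes the projector shares the same center;
    # only the pixel pitch differs between the two devices.
    sx = proj_res[0] / cam_res[0]
    sy = proj_res[1] / cam_res[1]
    return proj_res[0] / 2 + dx * sx, proj_res[1] / 2 + dy * sy
```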
  • the mouse 45 is an input device that accepts operations by an operator or the like on the camera control unit 10.
  • the input device may be a keyboard, a touch pad, a touch panel, a button, a switch, or the like.
  • The surgeon or the like can also visually check the state of the surgery from the image captured by the camera head 21 and displayed on the monitor 30.
  • FIG. 3 is a flowchart showing an example of the operation procedure of the medical image projection system 5 according to the first embodiment. This operation is executed when an operator such as a doctor activates the vessel image recognition function on the camera control unit 10.
  • the image input unit 11 acquires a visible light image captured by the camera head 21 with the surgical field as a subject (S1).
  • the image processing unit 12 analyzes a visible light image using the image recognition function of the vessel (that is, a learned model capable of realizing the image recognition function using AI described above) (S2).
  • the image processing unit 12 determines whether or not a vessel has been detected as a result of image analysis (S3).
  • When a vessel is detected (S3, YES), the image output unit 13 transmits a vessel image projection instruction to the projector 22 (S4).
  • The vessel image projection instruction includes the position information of the vessel in the captured image and the vessel specific image to be superimposed on the vessel.
  • The vessel specific image is a color image in which the region representing the outer shape of the vessel included in the captured image is filled with a specific color.
  • The image output unit 13 generates a composite image in which the vessel specific image corresponding to the vessel detected and recognized by the image processing unit 12 is superimposed on the captured image input from the image input unit 11 (S5). The image output unit 13 outputs this composite image to the monitor 30 (S6).
  • When the image processing unit 12 does not detect a vessel (S3, NO), the image output unit 13 outputs the visible light image captured by the camera head 21 (that is, the captured image input from the image input unit 11) to the monitor 30 as it is (S7).
  • The camera control unit 10 then ends the operation shown in FIG. 3. The operation shown in FIG. 3 is repeated until an operator such as a doctor stops the vessel image recognition function on the camera control unit 10; the loop is sketched below.
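  • The S1 to S7 procedure can be summarized in code as follows. The camera_head, projector, and monitor objects are hypothetical wrappers around the hardware, and make_vessel_specific_image is the inference sketch shown earlier.

```python
import numpy as np

def overlay(frame_bgr, specific_bgr):
    # Draw the vessel specific image over the captured image (opaque where colored).
    out = frame_bgr.copy()
    colored = specific_bgr.any(axis=2)
    out[colored] = specific_bgr[colored]
    return out

def vessel_position(mask):
    # Centroid of the detected vessel region, as (x, y) pixel coordinates.
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())

def run_recognition_loop(camera_head, model, projector, monitor, stop_event):
    while not stop_event.is_set():
        frame = camera_head.capture_visible()                             # S1
        specific, mask, _ = make_vessel_specific_image(model, frame)      # S2: image analysis
        if mask.any():                                                    # S3: vessel detected?
            projector.project(specific, position=vessel_position(mask))   # S4: projection instruction
            monitor.show(overlay(frame, specific))                        # S5-S6: composite to monitor
        else:
            monitor.show(frame)                                           # S7: pass the image through
```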
  • FIG. 4 is a diagram showing an image of an organ 200 onto which a vessel specific image mg is projected over a vessel 100 detected by the vessel image recognition function.
  • the image of the organ 200 is displayed on the monitor 30.
  • the image of the organ 200 may be an image of an organ visually observed by a doctor in the surgical field.
  • When no vessel is detected, no vessel specific image is generated, so the image of the organ 200 remains the captured image input from the image input unit 11 and does not change in particular.
  • the vessel specific image mg representing the region (outer shape) of the vessel is superimposed on the image of the organ 200.
  • In the vessel 100, the Glisson capsule, which is treated as a vessel, appears.
  • the vessel specific image mg is a colored (for example, green) image having the same contour as the outer shape of the vessel 100.
  • the vessel specific image may be any color except yellowish and reddish colors.
  • the medical image projection system 5 projects an ICG fluorescence image onto an organ by projection mapping.
  • As a result, a surgeon such as a doctor can visualize the boundary between the blood-flow region and the ischemic region in real time without taking their eyes off the surgical field.
  • Since the camera control unit 10 is equipped with artificial intelligence (in other words, the image processing unit 12) holding the vessel image recognition function, vessels in the surgical field can be detected and recognized accurately, at a judgment level equivalent to that of a skilled doctor, and displayed on the surgical field so that the surrounding surgeons can grasp them. Therefore, safe and secure surgical navigation is possible.
  • the camera control unit 10 is connected to the camera head 21 that images the surgical field.
  • the camera control unit 10 inputs a captured image of the surgical field from the camera head 21.
  • the camera control unit 10 recognizes the vessels reflected in the captured image based on the input captured image.
  • The camera control unit 10 generates a composite image in which a vessel specific image (an example of information for teaching the vessel) representing the recognized vessel is superimposed on the captured image, and outputs the generated composite image to the monitor 30.
  • As a result, the medical image projection system 5 can reproduce, in real time and with high accuracy, a recognition function equivalent to the tacit knowledge of a skilled doctor who quickly recognizes the vessels appearing in the image captured by the camera head 21 imaging the surgical field during surgery. Therefore, the medical image projection system 5 can accurately notify the surgeon such as a doctor of the location of a vessel appearing on a dissected surface, such as around a tumor portion in the surgical field, and can support the provision of safe and secure surgery.
  • the camera control unit 10 is connected to a projector 22 arranged so as to be projectable in the surgical field.
  • The image output unit 13 generates a vessel image projection instruction including the position information of the vessel in the captured image GZ and the vessel specific image, and sends it to the projector 22.
  • As a result, the surgeon or the like can visually recognize the vessel in the surgical field without looking at the monitor.
  • The image processing unit 12 recognizes vessels by using a trained model generated based on machine learning using teacher data of a plurality of vessels appearing in captured images of surgical fields. As a result, the camera control unit 10 can improve the detection accuracy of vessels through learning.
  • The image input unit 11 inputs a visible light image as the captured image.
  • the image processing unit 12 recognizes the vessels based on the visible light image.
  • the trained model can detect and recognize the vessels reflected in the visible light image by the annotation work of the doctor or the like described above.
  • Therefore, even if fluorescence based on the fluorescence emission of a fluorescent agent such as ICG (in other words, fluorescence from the affected part such as the tumor portion) is not imaged, the camera control unit 10 can accurately detect and recognize vessels from a visible light image obtained by imaging the brightly illuminated surgical field. Therefore, the configuration of the medical image projection system 5 can be simplified and its cost can be reduced.
  • FIG. 5 is a diagram showing a schematic configuration example of the endoscope system 40 according to the second embodiment.
  • the endoscope system 40 includes an endoscope 50, a camera control unit 60 as an example of a vessel recognition device, a light source 53, and a monitor 80.
  • the endoscope 50 is, for example, a medical rigid endoscope.
  • The camera control unit 60 performs predetermined image processing on a captured image (for example, a still image or a moving image) captured by the endoscope 50 inserted toward the affected area (for example, the skin of a human body, an organ wall inside the human body, etc.) of a subject (for example, a person) to be observed, and detects the vessels appearing in the captured image after the image processing.
  • The camera control unit 60 generates a composite image in which a vessel specific image of the detected vessel is superimposed on the captured image after the image processing, and outputs the composite image to the monitor 80.
  • the monitor 80 displays a composite image output from the camera control unit 60.
  • The endoscope 50 irradiates the surgical field with white light (that is, visible light), receives the reflected light reflected by the affected part (for example, an organ) of the subject (for example, a person), and captures a visible light image.
  • the endoscope 50 irradiates infrared light to excite the fluorescent agent accumulated in the affected area or the like, and receives infrared light including fluorescence generated by the fluorescence emission of the fluorescent agent to capture a fluorescent image.
  • the endoscope 50 includes an endoscope head 51.
  • The endoscope head 51 includes a visible light sensor unit 51A, which includes a visible light image sensor 54A and an IR cut filter 56A, and an infrared light sensor unit 51B, which includes an infrared light image sensor 54B and a visible light cut filter 56B.
  • Hereinafter, the visible light image sensor may be referred to as the visible light sensor, and the infrared light image sensor may be referred to as the infrared light sensor.
  • the IR cut filter 56A blocks (cuts) the excitation light having the IR wavelength band that is irradiated from the light source 53 to the affected area (for example, an organ) and reflected by the affected area.
  • the visible light image sensor 54A receives visible light that has passed through the IR cut filter 56A (that is, visible light reflected by the affected area) and captures a visible light image.
  • The visible light cut filter 56B blocks (cuts) the visible light that is irradiated from the light source 53 onto the affected area (for example, an organ) and reflected by the affected area. Further, the visible light cut filter 56B blocks (cuts) not only visible light but also IR light in the wavelength band of the excitation light (for example, 690 nm to 820 nm).
  • the infrared light image sensor 54B receives fluorescence that has passed through the visible light cut filter 56B (that is, fluorescence emitted based on the fluorescence emission of the fluorescent agent accumulated in the affected area), and captures a fluorescence image.
  • The endoscope head 51 captures, for example, the visible light image and the fluorescence image in a time-division manner.
  • When capturing a visible light image, the endoscope head 51 switches the optical filter to the IR cut filter 56A, and the visible light image sensor 54A receives the visible light that has passed through the IR cut filter 56A and captures a visible light image.
  • When capturing a fluorescence image, the endoscope head 51 switches the optical filter to the visible light cut filter 56B, and the infrared light image sensor 54B receives the fluorescence (see above) that has passed through the visible light cut filter 56B and captures a fluorescence image.
  • When the endoscope head 51 has a configuration in which the IR cut filter 56A and the visible light cut filter 56B can be used together, the visible light image and the fluorescence image can be captured at the same time.
  • As the image sensors constituting the visible light image sensor 54A and the infrared light image sensor 54B, for example, CMOS sensors are used, but CCD sensors may be used instead.
  • The light source 53 irradiates excitation light in the IR band (for example, light having a wavelength of 760 nm) for exciting a fluorescent agent (for example, indocyanine green). Further, the light source 53 irradiates white light (for example, light having a wavelength of 700 nm or less).
  • The light source 53 can emit the IR excitation light and the white light in a time-division manner or at the same time.
  • the camera control unit 60 is electrically connected to the endoscope head 51, the light source 53, and the monitor 80, and controls the operation of these devices in an integrated manner.
  • the camera control unit 60 includes an image input unit 61, an image processing unit 62, and an image output unit 63.
  • the image input unit 61 inputs the data of the captured image at the time of surgery captured by the endoscope head 51.
  • the image input unit 61 may use an HDMI (registered trademark), USB Type-C, or the like capable of transferring video data at high speed.
  • the camera control unit 60 has a processor and a built-in memory, and the processor executes a program stored in the built-in memory to specifically realize the functions of the image processing unit 62 and the image output unit 63.
  • the processor may be a GPU suitable for image processing.
  • the image processing unit 62 may be configured by a dedicated electronic circuit designed by an MPU, a CPU, an ASIC, or the like, or an electronic circuit designed so that it can be reconfigured by an FPGA or the like, instead of the GPU.
  • the image processing unit 62 is equipped with artificial intelligence (AI, see the first embodiment) and has an image recognition function for detecting and recognizing a vessel reflected in the captured image input by the image input unit 61.
  • the image processing unit 62 can realize the above-mentioned image recognition function as in the first embodiment by using the trained model equipped with artificial intelligence.
  • the trained model is pre-generated before the start of actual operation of the endoscope system 40. Since the procedure for generating the trained model is as described with reference to the first embodiment, the description thereof will be omitted here.
  • Using the trained model described above, the image processing unit 62 performs image recognition processing on any vessel that may appear in the captured image input from the image input unit 61 during surgery, and colors the area surrounding the detected vessel to generate a vessel specific image.
  • the vessel-specific image is superimposed on the captured image at the time of surgery by the image output unit 63.
  • Further, the image processing unit 62 may evaluate (score) the result of the image recognition processing of the detected vessel (for example, a matching probability) and update the trained model by machine learning in real time using the evaluation result.
  • the image output unit 63 generates a composite image in which the vessel-specific image generated by the image processing unit 62 is superimposed on the captured image at the time of surgery input from the image input unit 61.
  • The image output unit 63 transmits the composite image data to the monitor 80.
  • This composite image data further includes, in addition to the captured image during surgery and the vessel specific image, the position information of the vessel within the composite image and color information for teaching the region of the vessel.
  • the image output unit 63 may transmit text data related to the vessel in addition to the above-mentioned composite image data.
  • the text data may include, for example, an evaluation value (score value) of an image recognition result of a vessel, an organ name, or the like.
  • the monitor 80 displays the data of the composite image from the image output unit 63 (that is, the composite image in which the vessel specific image is superimposed on the captured image).
  • the monitor 80 is composed of, for example, a liquid crystal display or an organic EL display, and has a display surface for displaying an image.
  • FIG. 6 is a diagram showing an outline of the endoscope system 40.
  • the endoscope system 40 includes an endoscope 50, a camera control unit 60, a monitor 80, and a light source 53.
  • the camera control unit 60 performs image processing on the captured image input from the endoscope 50 via the transmission cable, and generates the captured image after the image processing.
  • the camera control unit 60 is connected to the first excitation light source 325 and the second excitation light source 327, respectively, via the light source drive cable 314.
  • The camera control unit 60 generates, in the light source drive circuit 365, a control signal for driving the first excitation light source 325, and outputs the control signal to the first excitation light source 325 via the light source drive cable.
  • Similarly, the camera control unit 60 generates, in the light source drive circuit 365, a control signal for driving the second excitation light source 327, and outputs the control signal to the second excitation light source 327 via the light source drive cable.
  • the monitor 80 displays the captured image output from the camera control unit 60.
  • the monitor 80 has a display device such as an LCD (Liquid Crystal Display) or a CRT (Cathode Ray Tube), for example.
  • On the monitor 80, a visible light image captured based on the irradiated visible light and a fluorescence image captured based on the fluorescence generated by the irradiated excitation light are displayed.
  • the light source 53 includes a first excitation light source 325, a second excitation light source 327, a combiner prism 335, an optical fiber 329, an insertion portion 331, and a phosphor 333 as main constituent members.
  • The first excitation light source 325 is configured using, for example, a semiconductor laser such as a laser diode, capable of emitting a laser beam whose half-value width is about one tenth that of light from an LED (Light Emitting Diode).
  • the first excitation light source 325 irradiates and outputs light in a narrow band blue region (for example, light having a wavelength in the wavelength band of 380 to 450 nm) for exciting the phosphor 333 to generate pseudo white light.
  • The second excitation light source 327 is likewise configured using, for example, a semiconductor laser such as a laser diode, capable of emitting a laser beam whose half-value width is about one tenth that of light from an LED.
  • The second excitation light source 327 irradiates and outputs light in a narrow wavelength band different from the wavelength band of the light emitted from the first excitation light source 325. That is, the second excitation light source 327 irradiates and outputs light in the infrared region (for example, light having a wavelength in the 690 to 820 nm band) for exciting a fluorescent agent administered in advance to the affected area of the subject before endoscopic surgery or endoscopy.
  • The combiner prism 335 guides the light emitted from the first excitation light source 325 and the light emitted from the second excitation light source 327 to the same optical fiber 329.
  • In other words, the combiner prism 335 multiplexes the light in the blue region and the light in the infrared region.
  • The light source 53 causes the light in the blue region and the light in the infrared region multiplexed by the combiner prism 335 to enter from the light incident end of the optical fiber 329.
  • A condensing lens 341 is provided between the combiner prism 335 and the optical fiber 329.
  • the optical fiber 329 can be, for example, one optical fiber wire. Further, as the optical fiber 329, a bundle fiber in which a plurality of optical fiber strands are bundled may be used.
  • The optical fiber 329 is inserted through the insertion portion 331.
  • A transmission cable 323 is also inserted through the insertion portion 331.
  • the insertion portion 331 is, for example, an insertion portion of the endoscope 50 to be inserted into the body cavity of the subject.
  • the insertion portion 331 is a tubular rigid member.
  • An imaging window is arranged on the tip surface of the insertion portion 331.
  • The imaging window is formed of an optical material such as optical glass or optical plastic, and lets in light from the subject (for example, the affected part inside the subject).
  • Further, an illumination window is arranged on the tip surface of the insertion portion 331.
  • The illumination window is formed of an optical material such as optical glass or optical plastic, and emits the illumination light from the light emitting end of the optical fiber 329.
  • FIG. 7 is a flowchart showing an example of the operation procedure of the endoscope system 40 according to the second embodiment. This operation is executed when an operator such as a doctor activates the vessel image recognition function on the camera control unit 60.
  • the image input unit 61 acquires a visible light image captured by the endoscope head 51 with the surgical field as a subject (S11).
  • the image processing unit 62 analyzes the visible light image using the image recognition function of the vessel (that is, a learned model capable of realizing the image recognition function using the AI described above) (S12).
  • the image processing unit 62 determines whether or not a vessel has been detected as a result of image analysis (S13).
  • When a vessel is detected (S13, YES), the image output unit 63 generates a composite image in which the vessel specific image corresponding to the vessel detected and recognized by the image processing unit 62 is superimposed on the captured image input from the image input unit 61 (S14). The image output unit 63 outputs the composite image to the monitor 80 (S15).
  • When the image processing unit 62 does not detect a vessel (S13, NO), the image output unit 63 outputs the visible light image captured by the endoscope head 51 (that is, the captured image input from the image input unit 61) to the monitor 80 as it is (S16).
  • The camera control unit 60 then ends the operation shown in FIG. 7. The operation shown in FIG. 7 is repeated until an operator such as a doctor stops the vessel image recognition function on the camera control unit 60.
  • As described above, in the endoscope system 40, by connecting the endoscope 50, which captures images inside the subject, to the camera control unit 60, the location of a vessel can be accurately displayed on the monitor 80. Further, since the camera control unit 60 can be connected to the endoscope systems already present in many hospitals, the endoscope system 40 can be used in many hospital facilities, increasing its versatility.
  • (Modification 1 of Embodiment 1) When the medical image projection system 5 according to the first embodiment is used, if, for example, the head of an operator such as a doctor appears in the image captured by the camera head 21, the part to be treated may not be displayed on the monitor 30. Therefore, in Modification 1 of Embodiment 1, when it is detected that the head of an operator such as a doctor has been imaged, the medical image projection system 5 may notify the operator such as a doctor with an alarm.
  • the camera control unit 10 has a built-in speaker and outputs an alarm sound from the speaker.
  • Since the alarm sound is a sound for notifying the surgeon such as a doctor of the situation in which the part to be treated, such as the tumor portion, is not shown, it may be an attention-calling sound such as a beep, or a pleasant sound such as a melody. Further, the alarm sound may be a voice explaining the situation.
  • Instead of, or in addition to, outputting the alarm sound, the camera control unit 10 may display a default alarm image on the monitor 30 to inform the surgeon such as a doctor that the site to be treated, such as the tumor portion, is not shown.
  • The alarm image may be, for example, a mark indicating an abnormality, or text explaining the situation in which the tumor portion or the like is not shown.
  • In this case, the surgeon such as a doctor simply shifts his or her head, and the image shown on the monitor 30 returns to a state in which the part to be treated is accurately displayed.
  • a nurse or the like who assists the surgery may change the posture of the camera head 21.
  • The case where a medical instrument used for the operation covers the part to be operated on is handled in the same way as the case where the operator's head covers it.
  • The surgeon simply shifts the medical instrument, and the image returns to a state in which the part to be treated is displayed.
  • a nurse or the like who assists the surgery may change the posture of the camera head 21.
  • The camera control unit 10 changes the magnification of the camera head 21 according to an operation instruction from an operator such as a doctor, or automatically, captures an image at the changed magnification, and acquires the captured image.
  • The camera control unit 10 may execute the vessel image recognition function on the captured image at the changed magnification and detect the vessel.
  • The camera control unit 10 may also perform image processing on the image captured by the camera head 21. For example, the camera control unit 10 adjusts the white balance of the captured image and changes the color tone of the captured image so as to suppress whiteness. Further, the camera control unit 10 performs processing to reduce the white component in the frequency spectrum of the captured image. The camera control unit 10 may execute the vessel image recognition function on the image that has become clearer with the whiteness suppressed, and detect the vessel; one way to realize such preprocessing is sketched below.
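  • One simple spatial-domain way to realize such white suppression is sketched below (the disclosure also mentions reducing the white component in the frequency spectrum; this sketch approximates the same effect by attenuating whitish pixels directly). The gain and threshold values are illustrative assumptions.

```python
import cv2
import numpy as np

def suppress_white(frame_bgr, white_gain=0.6):
    # Simple gray-world white balance: scale each channel toward the global mean.
    means = frame_bgr.reshape(-1, 3).mean(axis=0)
    balanced = np.clip(frame_bgr * (means.mean() / means), 0, 255).astype(np.uint8)
    # Attenuate low-saturation, high-value (i.e., whitish) pixels.
    hsv = cv2.cvtColor(balanced, cv2.COLOR_BGR2HSV).astype(np.float32)
    whitish = (hsv[..., 1] < 40) & (hsv[..., 2] > 200)
    hsv[..., 2][whitish] *= white_gain            # darken the whitish pixels
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```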
  • (Modification 2 of Embodiment 1) The camera control unit 10 determines the current treatment scene (situation) by analyzing the image captured by the camera head 21, and activates or stops the vessel image recognition function based on the determined scene.
  • FIG. 8 is a diagram showing the registered contents of the scene determination table Tb1, which represents the scenes that occur when performing a laparotomy according to Modification 2 of Embodiment 1.
  • Scene No. 1 is an "incision" scene.
  • Scene No. 2 is a "securing blood vessels" scene.
  • Scene No. 3 is a "peeling the liver (hepatic coronary mesentery, etc.)" scene.
  • Scene No. 4 is a "cholecystectomy" scene.
  • Scene No. 5 is a "preparation before excision" scene.
  • Scene No. 6 is a "marking the excision site" scene.
  • Scene No. 7 is a "during excision" scene.
  • Scene No. 8 is a "during excision (rest)" scene.
  • Scene No. 9 is an "after excision" scene.
  • The information for determining a scene includes the procedure, the treatment, and the instruments used.
  • In Scene No. 1, the procedure is "incision", the treatment is "cutting the skin (laparotomy)", and the instrument used is a scalpel.
  • In this case, since no organ in which a vessel exists is exposed, the camera control unit 10 stops the vessel image recognition function.
  • In Scene No. 7, the procedure is "during excision", the treatment is "resecting the liver to remove the tumor", and the instrument used is a CUSA (an ultrasonic surgical aspirator). In this case, since an organ in which vessels exist is exposed, the camera control unit 10 activates the vessel image recognition function.
  • Alternatively, the scene may be determined by a nurse or the like who assists the treatment instructing and setting the scene number on the camera control unit 10.
  • The camera control units 10 and 60 may also automatically determine the scene from the instruments used that appear in the image captured by the camera head 21 or the endoscope head 51.
  • In that case, the camera control units 10 and 60 accumulate image data capturing many instruments in use, perform machine learning by deep learning on these image data, and generate in advance a trained model that determines the scene from the instruments appearing in a captured image.
  • The camera control units 10 and 60 may then input image data captured by the camera head 21 or the endoscope head 51 into the trained model and determine the scene from the instruments used.
  • In this way, the camera control units 10 and 60 can determine the scene and activate or stop the vessel image recognition function according to the scene. Therefore, the camera control units 10 and 60 can execute the vessel image recognition function only when it is needed during treatment.
  • Note that the camera control unit 10 may activate the vessel image recognition function even while the image captured by the camera head 21 temporarily does not include a medical instrument, and may stop the vessel image recognition function when it is not necessary. As a result, activation and stopping of the vessel image recognition function can be switched at a finer granularity.
  • The camera control units 10 and 60 may also activate or stop the vessel image recognition function based on conditions other than scene determination. For example, the camera control unit 10 may detect movement of the camera head 21 and stop the vessel image recognition function while the camera head 21 is moving.
  • In this way, when the image processing unit 12 detects a medical instrument for removing the tumor portion in the input captured image, it activates the vessel image recognition function using the captured image (that is, starts the recognition processing). As a result, the camera control unit 10 can display the vessel specific image at an appropriate timing matched to the start of the operation.
  • Conversely, the image processing unit 12 stops the vessel image recognition function when the medical instrument is no longer detected in the input captured image.
  • As a result, the camera control unit 10 can erase the display of the vessel specific image at the end of the operation, at the timing when it is no longer needed. Therefore, the load on the camera control unit 10 can be reduced. A sketch of this instrument-driven gating follows.
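  • A minimal sketch of the instrument-driven gating, assuming a hypothetical instrument classifier detect_instruments (for example, the instrument-trained model described above); the instrument names follow the scene determination table.

```python
def update_recognition_state(frame, recognition_active, detect_instruments):
    """Return whether the vessel image recognition function should be active."""
    instruments = detect_instruments(frame)   # hypothetical, e.g. {"scalpel"} or {"CUSA"}
    if "CUSA" in instruments:     # excision in progress: vessel-bearing organ exposed
        return True               # activate vessel image recognition
    if "scalpel" in instruments:  # skin incision: no vessel-bearing organ visible
        return False              # stop vessel image recognition
    return recognition_active     # otherwise keep the current state
```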
  • (Modification 3 of Embodiment 1) The camera control unit 10 changes the display mode of the vessel specific image according to the surgical situation. Specifically, the camera control unit 10 changes the display of the vessel specific image according to the state of discovery and treatment of the vessel. For example, when the camera control unit 10 first detects a vessel included in the image captured by the camera head 21 using the vessel image recognition function, it displays on the monitor 30 a composite image in which a vessel specific image suited to "first discovery (untreated)" is superimposed on the captured image.
  • FIG. 9 is a diagram showing a screen of a monitor 30 in which a vessel specific image according to a modification 3 of the first embodiment is superimposed and displayed on an organ.
  • Here, the vessel specific image is a circular marker mk1 whose diameter surrounds the vessel, centered on the position of the vessel.
  • The circular marker mk1 is displayed lit or blinking.
  • For example, the camera control unit 10 draws the circular marker mk1 with a thin purple line.
  • The camera control unit 10 may draw the circular marker mk1 with a thicker, green line as the number of detections increases.
  • Further, the camera control unit 10 may deform the outer shape of the marker mk1 from a circle to a quadrangle and draw it in the same color or another color.
  • The camera control unit 10 determines "treated", for example, by confirming, after the first discovery of the vessel, that the image captured by the camera head 21 includes forceps, an electric knife, a CUSA, or the like, and then confirming that these medical instruments are no longer included, at which point it may judge that the treatment has been completed.
  • The camera control unit 10 may also determine "treated" using artificial intelligence.
  • In that case, the camera control unit 10 accumulates images of organs, including many treated organs, uses these images as teacher data, performs machine learning by deep learning on treated organs, and generates in advance a trained model for detecting a treated organ from a captured image.
  • The camera control unit 10 may then input the data of the image captured by the camera head 21 into the trained model to determine "treated".
  • When "treated" is determined, the camera control unit 10 may draw the marker mk1 with its outer shape deformed from a circle to a triangle, in the same color or another color, or may blink the marker mk1.
  • The determination of "bleeding" may likewise use artificial intelligence, in the same manner as the determination of "treated".
  • The camera control unit 10 accumulates images of many bleeding organs, uses these images as teacher data, performs machine learning by deep learning on bleeding organs, and generates in advance a trained model for detecting a bleeding organ from a captured image.
  • The camera control unit 10 may input the data of an image captured by the camera head into the trained model to determine "bleeding".
  • Alternatively, the camera control unit 10 may determine "bleeding" by, for example, confirming that the captured image contains an electric knife and then confirming that the red component in the captured image has increased sharply.
  • The camera control unit 10 may also change the display form of the marker mk1 with the passage of time, as sketched below. For example, immediately after discovering the vessel, the camera control unit 10 displays the marker mk1 conspicuously, for example by increasing its brightness. When a first time has elapsed after the discovery, the camera control unit 10 lowers the brightness of the marker mk1 so that it gradually becomes inconspicuous. At this point, the brightness may be lowered until the marker mk1 almost disappears. Further, when a second time has elapsed after the first time, the camera control unit 10 raises the brightness of the marker mk1 so that it stands out again, so that the position of the vessel is not forgotten. In this way, the camera control unit 10 can change the display mode of the marker mk1 in consideration of the operator's state. When making the marker mk1 stand out, the camera control unit 10 may also blink it.
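  • The time-based display change can be expressed as a simple brightness schedule; the interval lengths and brightness levels below are illustrative assumptions, not values specified by this disclosure.

```python
def marker_brightness(seconds_since_discovery, t1=10.0, t2=30.0):
    """Brightness factor (0.0 to 1.0) for the marker mk1 over time."""
    if seconds_since_discovery < t1:
        return 1.0   # immediately after discovery: full brightness, conspicuous
    if seconds_since_discovery < t2:
        return 0.1   # after the first time: nearly invisible
    return 1.0       # after the second time: re-emphasize the vessel position
```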
  • The vessel may temporarily go out of the field of view, and the marker mk1 indicating its presence may temporarily disappear.
  • In preparation for such a case, the camera control unit 10 may always display a rectangular frame image wg (for example, a yellow frame image) indicating that a vessel has been detected along the frame of the monitor 30 screen (for example, the top, bottom, left, and right edges of the screen).
  • The rectangular frame image wg is displayed lit or blinking.
  • Alternatively, the camera control unit 10 may display the rectangular frame image only while the marker mk1 indicating the presence of a vessel has temporarily disappeared.
  • the camera control unit 10 may change the line width of the rectangular frame image depending on the surgical situation or the passage of time.
  • the camera control unit 10 may display a wide rectangular frame image when a surgeon such as a doctor excises a tumor portion using an electric knife.
  • the camera control unit 10 may change the color of the rectangular frame image from yellow to another color such as green.
  • the camera control unit 10 may gradually narrow the width of the rectangular frame image with the passage of time.
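For illustration only, the frame image wg might be drawn as follows with OpenCV; the width, color, and timing policies combine the variations above and are assumptions rather than prescribed behavior.

```python
# Minimal sketch: overlay a frame image wg along the screen edges to signal
# a detected vessel. Width narrows over time; color switches during excision.
import cv2
import numpy as np

def draw_frame_wg(frame: np.ndarray, elapsed: float, excising: bool) -> np.ndarray:
    """Return a copy of the frame with the rectangular frame image wg drawn."""
    h, w = frame.shape[:2]
    thickness = 24 if excising else max(4, int(16 - elapsed))  # narrows over time
    color = (0, 255, 0) if excising else (0, 255, 255)         # green / yellow (BGR)
    out = frame.copy()
    cv2.rectangle(out, (0, 0), (w - 1, h - 1), color, thickness)
    return out
```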
  • When the image processing unit 12 detects the vessel, the image output unit 13 displays on the monitor 30 a rectangular frame image wg (a colored frame image) indicating that the vessel has been detected, so that the frame blinks.
  • The image output unit 13 also generates a circular marker mk1 (a colored circular image of a predetermined diameter) centered on the position in the captured image GZ where the vessel was detected, and displays the marker mk1 on the monitor 30 so that it blinks. This makes it easier for an operator such as a doctor to grasp the position of the vessel.
  • FIG. 10 is a diagram showing a screen of the monitor 30 on which the ICG fluorescence image fg and the vessel specific image mg according to modification 4 of the first embodiment partially overlap and are superimposed on the captured image GZ including an organ.
  • The vessel specific image mg is a circular image painted in green.
  • Although the ICG fluorescence image fg is displayed in blue, it mixes with the color of the organ behind it (red) and appears magenta.
  • The camera control unit 10 displays the captured image so that an image with a high degree of urgency or importance can be preferentially distinguished.
  • When the camera control unit 10 displays the captured image GZ (visible light image) from the camera head 21, the ICG fluorescence image fg, and the vessel specific image mg on the monitor 30, it draws each of them on its own layer.
  • The camera control unit 10 displays the vessel specific image mg on the uppermost layer, the ICG fluorescence image fg on the intermediate layer, and the captured image GZ (visible light image) on the lowest layer.
  • The camera control unit 10 sets the color of the vessel specific image so that it has a complementary color relationship with the color of the visible light image behind it (the image of the organ). This makes it easier for the operator or the like to visually recognize the vessel specific image.
  • Alternatively, the color of the vessel specific image may be set so as to have a complementary color relationship with the background color around the captured image (black in this case). Red, which is difficult to distinguish from the color of blood, and inconspicuous yellow are not used as the color of the vessel specific image. A color-selection sketch follows this list.
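One simple way to realize this color selection is an RGB complement of a reference color (the organ color behind the marker, or the background color around the captured image); this is a sketch under that assumption, as the disclosure does not prescribe a specific formula.

```python
# Minimal sketch: pick the display color of the vessel specific image as the
# RGB complement of a reference color sampled from the image.
import numpy as np

def complementary_color(rgb: tuple[int, int, int]) -> tuple[int, int, int]:
    """Return the RGB complement of the given color."""
    return tuple(255 - c for c in rgb)

def mean_color(region: np.ndarray) -> tuple[int, int, int]:
    """Average RGB color of the image region behind the marker."""
    return tuple(int(v) for v in region.reshape(-1, 3).mean(axis=0))

# e.g. a reddish organ region -> a cyan-ish complement that is easy to see
print(complementary_color((180, 40, 50)))  # (75, 215, 205)
```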
  • As described above, the camera control unit 10 displays the vessel specific image mg on the uppermost layer and the ICG fluorescence image fg on the intermediate layer.
  • Since the vessel specific image mg is drawn on a layer above the ICG fluorescence image fg, even in the region where the vessel specific image mg and the ICG fluorescence image fg overlap, a surgeon such as a doctor can easily see the vessel specific image mg, that is, the position of the vessel.
  • Instead of dividing the images into layers, the camera control unit 10 may perform image processing in the region where the vessel specific image and the ICG fluorescence image overlap, such as changing the color of the vessel specific image, blinking the vessel specific image, or not displaying the ICG fluorescence image around the vessel.
  • For example, the camera control unit 10 may change the color of the vessel specific image to a color complementary to the color of the ICG fluorescence image fg. This makes it easier to distinguish the vessel specific image mg.
  • Alternatively, the color of the vessel specific image may be changed to a color complementary to the background color of the captured image GZ.
  • As described above, the camera head 21 can image fluorescence based on the fluorescence emission of the fluorescent agent accumulated in the tumor portion in the surgical field.
  • When both the ICG fluorescence image fg (the fluorescence image of the tumor portion based on fluorescence imaging) and the vessel are detected in the captured image of the surgical field, and a part of the ICG fluorescence image fg overlaps a part of the vessel specific image mg (a part of the circular region of a predetermined diameter centered on the position of the vessel), the image output unit 13 displays the vessel specific image mg on the monitor 30, centered on the position of the vessel 100, in the complementary color of the background color of the captured image GZ, and blinks it.
  • By displaying the vessel specific image in a complementary color in this way, the camera control unit 10 can make the vessel in the captured image stand out.
  • FIG. 11 is a diagram schematically showing the ICG fluorescence image fg and the vessel specific image mg1 displayed on the monitor 30 when the ICG fluorescence image fg and the vessel specific image mg1 according to modification 5 of the first embodiment overlap the organ included in the captured image GZ.
  • The vessel specific image mg1 is a green image that substantially matches the outer shape of the vessel.
  • The ICG fluorescence image fg is a bluish-purple image.
  • The CUSA detection image cg, which identifies the position of the CUSA (a medical instrument), is a blue circular image.
  • In this modification, the camera control unit 10 can also detect the CUSA.
  • The CUSA detection image cg is arranged so as to cover the vessel specific image mg1.
  • The camera control unit 10 performs the image processing shown in formula (1) on the ICG fluorescence image fg, the CUSA detection image cg, and the vessel specific image mg1, superimposes the processed image on the captured image GZ including the organ, and displays the result on the monitor 30. In this image processing, the camera control unit 10 also performs contour extraction on the CUSA detection image cg to obtain a contour extraction image csg of the CUSA detection image cg.
  • (composite image) = (ICG fluorescence image fg) − (CUSA detection image cg) + (vessel specific image mg1) + (contour extraction image csg of the CUSA detection image cg)   … (1)
  • As a result of the image processing of formula (1), the ICG fluorescence image fg is hollowed out in the region where the ICG fluorescence image fg and the CUSA detection image cg overlap.
  • The vessel specific image mg1 appears in the hollowed-out region. Therefore, an operator such as a doctor can accurately grasp the position of the vessel without the ICG fluorescence image fg overlapping the vessel specific image mg1. Further, by adding the contour extraction image csg to the ICG fluorescence image fg, the region of the ICG fluorescence image fg also remains distinguishable. A mask-based sketch of this composition follows.
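A minimal sketch of formula (1) follows, treating fg, cg, and mg1 as single-channel 8-bit masks and using OpenCV; the mask representation and the contour thickness are assumptions, since the disclosure does not fix the exact operations.

```python
# Minimal sketch of formula (1): fg - cg + mg1 + (contour of cg), where all
# inputs are single-channel 8-bit masks registered to the captured image GZ.
import cv2
import numpy as np

def compose(fg: np.ndarray, cg: np.ndarray, mg1: np.ndarray) -> np.ndarray:
    """Return the composite overlay described by formula (1)."""
    fg_cut = cv2.subtract(fg, cg)                    # hollow out fg where the CUSA is
    contours, _ = cv2.findContours(cg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    csg = np.zeros_like(cg)
    cv2.drawContours(csg, contours, -1, 255, thickness=2)  # contour extraction image csg
    return cv2.add(cv2.add(fg_cut, mg1), csg)        # mg1 shows through the hole in fg
```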
  • As described above, the camera head 21 can image fluorescence based on the fluorescence emission of the fluorescent agent accumulated in the tumor portion in the surgical field.
  • When a part of the ICG fluorescence image fg overlaps the position of the vessel, the image output unit 13 displays the vessel specific image mg1 (an example of an image of a predetermined color) in a region that excludes that part of the ICG fluorescence image fg around the position of the vessel. This makes it easier for the surgeon or the like to find the position of the vessel even when the ICG fluorescence image and the vessel specific image overlap.
  • In the embodiments above, the medical image projection system 5 superimposes the vessel specific image of a vessel detected in an organ, such as an affected organ, on the captured image of that organ, displays the result on the monitor 30, and also projects the image toward the surgical field.
  • However, the medical image projection system 5 may be a system that simply projects the vessel specific image toward the surgical field, without superimposing it on the captured image of the organ and displaying it on the monitor 30.
  • The present disclosure is useful as a vascular recognition device, a vascular recognition method, and a vascular recognition system that reproduce, in real time and with high accuracy, a vessel recognition capability equivalent to the tacit knowledge of a skilled doctor who quickly recognizes vessels appearing in the surgical field, that notify the location of a vessel appearing on the dissection surface, and that support the provision of safe and secure surgery.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Endoscopes (AREA)
  • Image Analysis (AREA)

Abstract

This blood vessel recognition device is provided with: an image input unit that is connected to an imaging device for capturing an image of an operation field and that inputs a captured image of the operation field from the imaging device; an image processing unit that recognizes a blood vessel reflected in the captured image on the basis of the inputted captured image; and an image output unit that generates a composite image obtained by superimposing, onto the captured image, information for teaching about the recognized blood vessel and that outputs the generated composite image to a monitor.

Description

Vascular recognition device, vascular recognition method, and vascular recognition system

The present disclosure relates to a vascular recognition device, a vascular recognition method, and a vascular recognition system that recognize a vessel appearing in a captured image of a surgical field.

Conventionally, when angiography is performed by digital subtraction angiography (DSA), it is difficult to observe blood vessels from the display of the live image obtained after injecting a contrast medium into the vessels alone, because the contrast medium is diluted. Patent Literature 1 therefore discloses a DSA post-processing blood vessel highlighting device that uses a subtraction image in which only blood vessels appear and a mask image or live image in which at least bones appear, emphasizes only the subtraction image, and adds the subtraction image to the mask image or live image, thereby obtaining an image in which the positional relationship between blood vessels and bones can be clearly identified.

Patent Literature 1: Japanese Patent Application Laid-Open No. S63-139532
Here, the liver, for example, is known as an organ in which the blood vessels described above are intricately intertwined within the tissue. The hepatic veins and Glisson's sheaths in the liver run in a mesh pattern while crossing one another intricately, and the running pattern differs from person to person. In addition, there are blood vessels (an example of a vessel) so thin that they are difficult to visualize by preoperative CT scan (computed tomography) or MRI (magnetic resonance imaging) processing, and during surgery the position of a vessel may change subtly due to deformation of the organ or the like. During surgery, both the anticipation that a vessel will appear on the dissection surface of the excision target and the ability to recognize it depend largely on the observational eye (so-called tacit knowledge) cultivated by skilled doctors through considerable experience, so there has been a problem that surgery on organs with intricately intertwined vessels, such as the liver, is highly difficult.

The present disclosure has been devised in view of the conventional circumstances described above, and an object of the present disclosure is to provide a vascular recognition device, a vascular recognition method, and a vascular recognition system that reproduce, in real time and with high accuracy, a vessel recognition capability equivalent to the tacit knowledge of a skilled doctor who quickly recognizes vessels appearing in the surgical field, that notify the location of a vessel appearing on the dissection surface, and that support the provision of safe and secure surgery.
The present disclosure provides a vascular recognition device connected to an imaging device that images a surgical field, the vascular recognition device including: an image input unit that inputs a captured image of the surgical field from the imaging device; an image processing unit that recognizes, based on the input captured image, a vessel appearing in the captured image; and an image output unit that generates a composite image in which information teaching the recognized vessel is superimposed on the captured image, and outputs the generated composite image to a monitor.

The present disclosure also provides a vascular recognition method in a vascular recognition device connected to an imaging device that images a surgical field, the method including the steps of: inputting a captured image of the surgical field from the imaging device; recognizing, based on the input captured image, a vessel appearing in the captured image; generating a composite image in which information teaching the recognized vessel is superimposed on the captured image; and outputting the generated composite image to a monitor.

The present disclosure further provides a vascular recognition system in which an imaging device that images a surgical field and a vascular recognition device are connected to each other, wherein the vascular recognition device inputs a captured image of the surgical field from the imaging device, recognizes, based on the input captured image, a vessel appearing in the captured image, generates a composite image in which information teaching the recognized vessel is superimposed on the captured image, and outputs the generated composite image to a monitor.

According to the present disclosure, a vessel recognition capability equivalent to the tacit knowledge of a skilled doctor who quickly recognizes vessels appearing in the surgical field can be reproduced in real time and with high accuracy, the location of a vessel appearing on the dissection surface can be notified, and the provision of safe and secure surgery can be supported.
FIG. 1 is a diagram showing a schematic configuration example of the medical image projection system according to the first embodiment.
FIG. 2 is a diagram showing a specific configuration of the medical image projection system.
FIG. 3 is a flowchart showing an example of an operation procedure of the medical image projection system according to the first embodiment.
FIG. 4 is a diagram showing an image of an organ onto which a vessel specific image is projected for a vessel detected by the vessel image recognition function.
FIG. 5 is a diagram showing a schematic configuration example of the endoscope system according to the second embodiment.
FIG. 6 is a diagram showing an overview of the endoscope system.
FIG. 7 is a flowchart showing an example of an operation procedure of the endoscope system according to the second embodiment.
FIG. 8 is a diagram showing the registered contents of a scene determination table representing scenes during open surgery according to modification 2 of the first embodiment.
FIG. 9 is a diagram showing a monitor screen on which the vessel specific image according to modification 3 of the first embodiment is superimposed on an organ.
FIG. 10 is a diagram showing a monitor screen on which the ICG fluorescence image and the vessel specific image according to modification 4 of the first embodiment partially overlap and are superimposed on a captured image including an organ.
FIG. 11 is a diagram schematically showing the ICG fluorescence image and the vessel specific image displayed on the monitor when the ICG fluorescence image and the vessel specific image according to modification 5 of the first embodiment overlap an organ included in the captured image.
Hereinafter, embodiments specifically disclosing the configuration and operation of the vascular recognition device, the vascular recognition method, and the vascular recognition system according to the present disclosure will be described in detail with reference to the drawings as appropriate. However, more detailed description than necessary may be omitted. For example, detailed descriptions of already well-known matters and redundant descriptions of substantially the same configuration may be omitted. This is to avoid unnecessary redundancy in the following description and to facilitate understanding by those skilled in the art. The accompanying drawings and the following description are provided for those skilled in the art to fully understand the present disclosure, and are not intended to limit the subject matter described in the claims.
(Embodiment 1)

First, a medical imaging projection system (MIPS) will be described as an example of the vascular recognition system according to the first embodiment. FIG. 1 is a diagram showing a schematic configuration example of the medical image projection system 5 according to the first embodiment. The medical image projection system 5 is a system that projection-maps the fluorescence image of a tumor portion in which a fluorescent agent such as ICG (indocyanine green) has accumulated directly onto the organ, and visualizes the boundary between the blood-flow region and the ischemic region in real time without a surgeon such as a doctor taking their eyes off the surgical field. Here, the vessels include blood vessels through which blood flows in the body and lymphatic vessels through which lymph flows. The Glisson's sheath is a tissue that encloses the blood vessels and bile ducts running through the liver. In the first embodiment, the Glisson's sheath is treated in the same manner as a vessel.
The medical image projection system 5 includes an imaging irradiation device 20, a camera control unit 10 (CCU: Camera Control Unit) as an example of the vascular recognition device, and a monitor 30.

The imaging irradiation device 20 irradiates the surgical field with white light (that is, visible light) to illuminate it, receives the reflected light from the affected part (for example, an organ) of the subject (for example, a person), and captures a visible light image. The imaging irradiation device 20 also irradiates infrared light to excite a fluorescent agent accumulated in the affected part or the like, and receives infrared light including the fluorescence generated by the fluorescence emission of the fluorescent agent to capture a fluorescence image. Further, the imaging irradiation device 20 projects, onto the affected part (for example, an organ), a vessel specific image that teaches the position of a vessel. The imaging irradiation device 20 includes a camera head 21, a projector 22, and a light source 23.

The camera head 21, as an example of the imaging device, includes a visible light sensor unit 24A including a visible light image sensor 25A and an IR cut filter 26A, and an infrared light sensor unit 24B including an infrared light image sensor 25B and a visible light cut filter 26B.
The IR cut filter 26A blocks (cuts) the excitation light (IR light: infrared light) that has the IR wavelength band, is emitted from the light source 23 toward the affected part (for example, an organ), and is reflected by the affected part. The visible light image sensor 25A receives the visible light that has passed through the IR cut filter 26A (that is, the visible light reflected by the affected part) and captures a visible light image.

The visible light cut filter 26B blocks (cuts) the visible light that is emitted from the light source 23 toward the affected part (for example, an organ) and reflected by the affected part. The visible light cut filter 26B also blocks (cuts) not only visible light but also IR light in the wavelength band of the excitation light (for example, 690 nm to 820 nm). The infrared light image sensor 25B receives the fluorescence that has passed through the visible light cut filter 26B (that is, the fluorescence emitted based on the fluorescence emission of the fluorescent agent accumulated in the affected part) and captures a fluorescence image.

The camera head 21 captures, for example, a visible light image and a fluorescence image in a time-division manner. When the light source 23 emits white light, the camera head 21 switches its optical filter to the IR cut filter 26A. At this time, the visible light image sensor 25A receives the visible light that has passed through the IR cut filter 26A and captures a visible light image.

When the light source 23 emits infrared light, the camera head 21 switches its optical filter to the visible light cut filter 26B. At this time, the infrared light image sensor 25B receives the fluorescence (see above) that has passed through the visible light cut filter 26B and captures a fluorescence image.

When the camera head 21 is configured so that the IR cut filter 26A and the visible light cut filter 26B can be used together, it can capture the visible light image and the fluorescence image simultaneously. As the image sensors constituting the visible light image sensor 25A and the infrared light image sensor 25B, for example, CMOS (complementary metal oxide semiconductor) sensors are used, but CCD (charge coupled device) sensors may also be used.
The light source 23 emits excitation light in the IR band (for example, light with a wavelength of 760 nm) for exciting the fluorescent agent (for example, indocyanine green). The light source 23 also emits white light (for example, light with a wavelength of 700 nm or less). The light source 23 can emit the IR excitation light and the white light in a time-division manner or simultaneously.

When a vessel is detected and recognized in the captured image of the affected part captured by the camera head 21, the projector 22 generates a vessel specific image for teaching (in other words, characterizing) the shape of the vessel, and projects projection light corresponding to the vessel specific image onto the affected part in the surgical field. The projector 22 may instead acquire the vessel specific image from the camera control unit 10. At this time, the projector 22 projects the projection light corresponding to the vessel specific image so that the position of the vessel specific image coincides with the position of the vessel appearing on the surface of the affected part, or so that the vessel specific image covers the outer shape of the vessel. Hereinafter, projecting the projection light corresponding to the vessel specific image is referred to simply as projecting the vessel specific image. Since the internal configuration of the projector 22 may be the same as that of a well-known projector, a detailed description is omitted; the projector 22 includes a projection light source and projection optics for projecting the projection light, and an image forming unit for generating the vessel specific image. The projector 22 may employ any projection method, such as the DLP (Digital Light Processing), 3LCD (liquid crystal display), or LCOS (Liquid Crystal On Silicon) method.
The camera control unit 10 is electrically connected to the imaging irradiation device 20, the monitor 30, and the mouse 45 (see FIG. 2), and comprehensively controls the operation of these devices. The camera control unit 10 includes an image input unit 11, an image processing unit 12, and an image output unit 13.

The image input unit 11 inputs the data of the captured images taken during surgery by the camera head 21. Besides a dedicated image input interface, the image input unit 11 may use an interface capable of transferring video data at high speed, such as HDMI (registered trademark) (High-Definition Multimedia Interface) or USB (Universal Serial Bus) Type-C.

The camera control unit 10 has a processor and an internal memory, and the processor executes a program stored in the internal memory to concretely realize the functions of the image processing unit 12 and the image output unit 13. The processor may be a GPU (Graphics Processing Unit) suitable for image processing. Instead of a GPU, the image processing unit 12 may be configured by a dedicated electronic circuit designed with an MPU (Micro Processing Unit), a CPU (Central Processing Unit), an ASIC (Application Specific Integrated Circuit), or the like, or by an electronic circuit designed to be reconfigurable, such as an FPGA (Field Programmable Gate Array).
The image processing unit 12 is equipped with artificial intelligence (AI) and has an image recognition function for detecting and recognizing vessels appearing in the captured image input by the image input unit 11. The image processing unit 12 can realize the image recognition function described above by using a trained model. The trained model is generated in advance, before the start of actual operation of the medical image projection system 5, by the following procedure.

First, a user (for example, an administrator or an operator) of the medical image projection system 5 prepares data of captured video taken during surgery. This captured video data also includes audio data in which a doctor dictates findings, including the names of the locations, such as organs, appearing in each captured video. For example, about 1,000 captured images are used per case, the findings including the names of the organs appearing in each captured image are explained by voice, and findings for a total of about 100,000 captured images across 100 cases are prepared.

Referring to the audio of the findings explained by the doctor in the acquired captured video, the user of the medical image projection system 5 (see above) performs annotation work, such as attaching information teaching the vessel (for example, a frame image) to each vessel to be detected in the individual captured images, and prepares the data of the annotated captured images as teacher data. As a result, a large amount of teaching data for detecting and recognizing the vessels appearing in captured images is prepared.

The image processing unit 12 then takes as input the large amount of teacher data prepared in advance as described above, together with the captured video data from surgery, and performs machine learning such as deep learning. In this machine learning, the weighting coefficients between the neurons in each layer (input layer, intermediate layers, output layer) of the neural network constituting the trained model for deep learning are optimized so that a vessel appearing in a captured image taken during surgery can be detected and recognized accurately.

As a result of the machine learning described above, the image processing unit 12 generates a trained model suitable for detecting and recognizing vessels. Using this trained model, the image processing unit 12 performs image recognition processing on the vessels that may appear in the captured image input from the image input unit 11 during surgery, and colors the region surrounding a detected vessel to generate a vessel specific image. This vessel specific image is superimposed on the captured image by the image output unit 13. The image processing unit 12 may also evaluate (score) the result of the image recognition processing of the detected vessel (for example, the matching probability) and use this evaluation result to update the trained model by machine learning in real time. A training sketch under stated assumptions follows.
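Purely as an illustration of this offline training step, the following Python sketch assumes annotated (frame, vessel mask) pairs as teacher data and a generic U-Net from the third-party segmentation_models_pytorch package; VesselDataset, the file names, and all hyperparameters are hypothetical, and the disclosure does not specify a network architecture.

```python
# Minimal sketch, assuming (image, binary vessel mask) teacher data produced
# by the annotation work described above. Architecture and hyperparameters
# are illustrative assumptions.
import torch
from torch.utils.data import DataLoader, Dataset
import segmentation_models_pytorch as smp  # assumed third-party dependency

class VesselDataset(Dataset):
    """Hypothetical container for annotated (frame, mask) tensor pairs."""
    def __init__(self, pairs):
        self.pairs = pairs                  # list of (3xHxW image, 1xHxW mask)
    def __len__(self):
        return len(self.pairs)
    def __getitem__(self, i):
        return self.pairs[i]

def train(pairs, epochs=20):
    model = smp.Unet(encoder_name="resnet34", in_channels=3, classes=1)
    loss_fn = smp.losses.DiceLoss(mode="binary")
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loader = DataLoader(VesselDataset(pairs), batch_size=8, shuffle=True)
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()                      # inter-neuron weights are optimized
    torch.save(model.state_dict(), "vessel_model.pt")
```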
The image output unit 13 generates a composite image in which the vessel specific image generated by the image processing unit 12 is superimposed on the captured image input from the image input unit 11 during surgery. The image output unit 13 transmits the composite image data to the monitor 30. In addition to the captured image and the vessel specific image, this composite image data further includes the position information of the vessel in the composite image and color information for teaching the region of the vessel. The image output unit 13 may also transmit, together with the composite image data described above, text data related to the vessel. The text data may include, for example, an evaluation value (score value) of the image recognition result of the vessel, the organ name, and the like.

The image output unit 13 also transmits the composite image data and the related text data described above to the projector 22. Instead of the composite image data and the related text data, the image output unit 13 may output the vessel specific image generated by the image processing unit 12 to the projector 22. Based on the position information of the vessel in the captured image included in the composite image data transmitted from the image output unit 13, the projector 22 projects the vessel specific image so that it coincides with (that is, overlaps) the position of the vessel appearing in the affected part within the surgical field during surgery. As the color information of the vessel specific image, for example, a single green color is used. For a single color, yellow is difficult to distinguish within the surgical field and red is difficult to distinguish from the colors of organs and blood, so it is preferable not to use yellow or red; here, green is used. As the color information of the vessel specific image, a color having a complementary relationship with the background color of the captured image may also be used so that a doctor or the like can instantly distinguish it by sight.

The monitor 30 displays the data of the composite image from the image output unit 13 (that is, the composite image in which the vessel specific image is superimposed on the captured image). The monitor 30 is composed of, for example, a liquid crystal display or an organic EL (electroluminescence) display, and has a display surface for displaying images.
FIG. 2 is a diagram showing a specific configuration of the medical image projection system 5. The medical image projection system 5 is installed, for example, in an operating room of a hospital. In the operating room, open surgery is performed on a patient hm lying on their back on an operating table 110. The imaging irradiation device 20 irradiates white light, excitation light, and projection light toward the surgical field 130 (for example, the affected part of the patient) including the opened site of the patient hm, and also images the surgical field 130. In addition to the imaging irradiation device 20, the camera control unit 10, and the monitor 30 shown in FIG. 1, the medical image projection system 5 includes a mouse 45 that can be operated by a user such as a doctor.

In addition to the camera head 21, the projector 22, and the light source 23 shown in FIG. 1, the imaging irradiation device 20 has an optical unit 28. The optical unit 28 is arranged to face the camera head 21 and the projector 22. The optical unit 28 transmits the visible light and fluorescence traveling from the surgical field toward the camera head 21, while reflecting the projection light emitted from the projector 22 so that it is projected onto the surgical field. The optical unit 28 is adjusted so that the optical axis of the visible light and fluorescence traveling toward the camera head 21 coincides with the optical axis of the projection light emitted from the projector 22 toward the surgical field. With the optical axis of the camera head 21 and the optical axis of the projector 22 aligned, the camera control unit 10 expresses the position of the vessel included in the image captured by the camera head 21 in coordinates centered on, for example, the optical axis of the camera head 21, and transmits this coordinate information to the projector 22 together with the color information of the vessel specific image. The projector 22 projects the vessel specific image at the position indicated by the received coordinate information. The projector 22 can thus map (project) the vessel specific image onto the vessel in the surgical field; a coordinate-mapping sketch follows.
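For illustration only: with the camera and projector optical axes aligned as described, camera pixel coordinates can be reused directly; if they were not aligned, a homography H (estimated once by calibration) could map camera pixels to projector pixels. The identity H below is an assumption reflecting the aligned-axis case.

```python
# Minimal sketch: map a vessel position from camera pixel coordinates to
# projector pixel coordinates. H = identity under the aligned-axis setup.
import cv2
import numpy as np

H = np.eye(3)  # identity when the optical axes coincide; else a calibrated homography

def to_projector(pt_cam: tuple[float, float]) -> tuple[int, int]:
    """Map one camera pixel coordinate to a projector pixel coordinate."""
    src = np.array([[pt_cam]], dtype=np.float32)   # shape (1, 1, 2)
    dst = cv2.perspectiveTransform(src, H)[0, 0]
    return int(dst[0]), int(dst[1])
```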
The mouse 45 is an input device that accepts operations by the operator or the like for the camera control unit 10. The input device may instead be a keyboard, a touchpad, a touch panel, buttons, switches, or the like. During surgery, for example, the operator or the like displays the image captured by the camera head 21 on the monitor 30 and visually checks the state of the surgery.

Next, the operation of the medical image projection system 5 according to the first embodiment will be described.

FIG. 3 is a flowchart showing an example of the operation procedure of the medical image projection system 5 according to the first embodiment. This operation is executed when an operator such as a doctor causes the camera control unit 10 to activate the vessel image recognition function.
In FIG. 3, the image input unit 11 acquires a visible light image captured by the camera head 21 with the surgical field as the subject (S1). The image processing unit 12 analyzes the visible light image using the vessel image recognition function (that is, the trained model capable of realizing the AI-based image recognition function described above) (S2). The image processing unit 12 determines whether a vessel has been detected as a result of the image analysis (S3).

When a vessel is detected (S3, YES), the image output unit 13 transmits a vessel image projection instruction to the projector 22 (S4). The vessel image projection instruction includes the position information of the vessel in the captured image and the vessel specific image to be superimposed on the vessel. The vessel specific image is a color image in which the region representing the outer shape of the vessel included in the captured image is filled with a specific color. Further, the image output unit 13 generates a composite image in which the vessel specific image corresponding to the vessel detected and recognized by the image processing unit 12 is superimposed on the captured image input from the image input unit 11 (S5). The image output unit 13 outputs this composite image to the monitor 30 (S6).

On the other hand, when the image processing unit 12 has not detected a vessel (S3, NO), the image output unit 13 outputs the visible light image captured by the camera head 21 (that is, the captured image input from the image input unit 11) to the monitor 30 as it is (S7). After the processing of steps S6 and S7, the camera control unit 10 ends the operation shown in FIG. 3. The operation shown in FIG. 3 is repeated until an operator such as a doctor causes the camera control unit 10 to stop the vessel image recognition function. A loop-level sketch of steps S1 to S7 follows.
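The following sketch restates the S1 to S7 flow as a processing loop; every callable is a hypothetical stand-in injected by the caller, not an interface defined in the disclosure.

```python
# Minimal sketch of the S1-S7 flow in FIG. 3. Hypothetical stand-ins:
# grab_visible (camera, S1), detect (trained model, S2-S3), project
# (projector, S4), overlay (image output unit, S5), show (monitor, S6/S7),
# running (the on/off switch operated by the surgeon).
def run_recognition_loop(grab_visible, detect, project, show, overlay, running):
    while running():                     # repeat until the function is stopped
        frame = grab_visible()           # S1: acquire visible light image
        mask = detect(frame)             # S2: analyze with the trained model
        if mask is not None:             # S3 YES: a vessel was detected
            project(mask)                # S4: vessel image projection instruction
            show(overlay(frame, mask))   # S5-S6: composite image to the monitor
        else:                            # S3 NO
            show(frame)                  # S7: output the captured image as-is
```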
FIG. 4 is a diagram showing an image of an organ 200 onto which a vessel specific image mg is projected for a vessel 100 detected by the vessel image recognition function. The image of the organ 200 is displayed on the monitor 30. The image of the organ 200 may also be the organ as viewed directly by the doctor in the surgical field. When the vessel image recognition function is stopped, no vessel specific image is generated, so the image of the organ 200 remains the captured image input from the image input unit 11 and does not change. When the vessel image recognition function is activated and the vessel 100 is detected, the vessel specific image mg representing the region (outer shape) of the vessel is superimposed on the image of the organ 200. Here, a Glisson's sheath, which is treated as a vessel, appears. The vessel specific image mg is a colored (for example, green) image having the same contour as the outer shape of the vessel 100. The vessel specific image may be any color other than yellowish or reddish colors.

As described above, the medical image projection system 5 according to the first embodiment projects the ICG fluorescence image onto the organ by projection mapping. A surgeon such as a doctor can visualize the boundary between the vessel and the ischemic region in real time without taking their eyes off the surgical field. Because the camera control unit 10 is equipped with artificial intelligence holding the vessel image recognition function (in other words, the image processing unit 12), the medical image projection system 5 can accurately detect and recognize a vessel in the surgical field at a level of judgment equivalent to that of a skilled doctor, and can display it on the surgical field so that the surrounding operators and others can grasp it. Safe and secure surgical navigation is therefore possible.
As described above, in the first embodiment, the camera control unit 10 is connected to the camera head 21 that images the surgical field. The camera control unit 10 inputs the captured image of the surgical field from the camera head 21. Based on the input captured image, the camera control unit 10 recognizes a vessel appearing in the captured image. The camera control unit 10 generates a composite image in which a vessel specific image representing the recognized vessel (an example of information teaching the vessel) is superimposed on the captured image, and outputs the generated composite image to the monitor 30.

As a result, the medical image projection system 5 can reproduce, in real time and with high accuracy, a recognition capability equivalent to the tacit knowledge of a skilled doctor who quickly recognizes vessels appearing in the image captured by the camera head 21 during surgery. Therefore, the medical image projection system 5 can accurately notify the surgeon, such as a doctor, of the location of a vessel appearing on a dissection surface such as a tumor portion in the surgical field, and can support the provision of safe and secure surgery.

The camera control unit 10 is also connected to the projector 22, which is arranged so that it can project onto the surgical field. The image output unit 13 generates a vessel image projection instruction including the position information of the vessel in the captured image GZ and the vessel specific image, and sends it to the projector 22. This allows the operator or the like to visually recognize the vessel in the surgical field without looking at the monitor.

The image processing unit 12 recognizes the vessel using a trained model generated by machine learning based on teacher data of many vessels appearing in captured images of surgical fields. This allows the camera control unit 10 to improve the vessel detection accuracy through learning.

The image input unit 11 inputs a visible light image as the captured image, and the image processing unit 12 recognizes the vessel based on the visible light image. Through the annotation work by doctors and others described above, the trained model can detect and recognize vessels appearing in visible light images. As a result, even in a situation where fluorescence based on the fluorescence emission of a fluorescent agent such as ICG (in other words, an affected part such as a tumor portion) is not imaged, the camera control unit 10 can accurately detect and recognize a vessel from the visible light image obtained by imaging under the white light that brightly illuminates the surgical field. Therefore, the configuration of the medical image projection system 5 is simplified and costs can be reduced.
(Embodiment 2)

Next, an endoscope system will be described as an example of the vascular recognition system according to the second embodiment. FIG. 5 is a diagram showing a schematic configuration example of the endoscope system 40 according to the second embodiment.
The endoscope system 40 includes an endoscope 50, a camera control unit 60 as an example of the vascular recognition device, a light source 53, and a monitor 80.

The endoscope 50 is, for example, a medical rigid endoscope. The camera control unit 60 performs predetermined image processing on the captured image (for example, a still image or a moving image) taken by the endoscope 50 inserted toward the observation target of the subject (for example, a person), such as the skin of the human body or an organ wall inside the body, and detects a vessel appearing in the captured image after the image processing. The camera control unit 60 generates a composite image in which the detected vessel image is superimposed on the captured image after the image processing, and outputs it to the monitor 80. The monitor 80 displays the composite image output from the camera control unit 60.

The endoscope 50 irradiates the surgical field with white light (that is, visible light) to illuminate it, receives the reflected light from the affected part (for example, an organ) of the subject (for example, a person), and captures a visible light image. The endoscope 50 also irradiates infrared light to excite a fluorescent agent accumulated in the affected part or the like, and receives infrared light including the fluorescence generated by the fluorescence emission of the fluorescent agent to capture a fluorescence image. The endoscope 50 includes an endoscope head 51.

The endoscope head 51 includes a visible light sensor unit 51A including a visible light image sensor 54A and an IR cut filter 56A, and an infrared light sensor unit 51B including an infrared light image sensor 54B and a visible light cut filter 56B. The visible light image sensor may also be referred to as a visible light sensor. Similarly, the infrared light image sensor may be referred to as an infrared light sensor.
The IR cut filter 56A blocks (cuts) the excitation light having the IR wavelength band that is emitted from the light source 53 toward the affected part (for example, an organ) and reflected by the affected part. The visible light image sensor 54A receives the visible light that has passed through the IR cut filter 56A (that is, the visible light reflected by the affected part) and captures a visible light image.

The visible light cut filter 56B blocks (cuts) the visible light that is emitted from the light source 53 toward the affected part (for example, an organ) and reflected by the affected part. The visible light cut filter 56B also blocks (cuts) not only visible light but also IR light in the wavelength band of the excitation light (for example, 690 nm to 820 nm). The infrared light image sensor 54B receives the fluorescence that has passed through the visible light cut filter 56B (that is, the fluorescence emitted based on the fluorescence emission of the fluorescent agent accumulated in the affected part) and captures a fluorescence image.

The endoscope head 51 captures, for example, a visible light image and a fluorescence image in a time-division manner. When the light source 53 emits white light, the endoscope head 51 switches its optical filter to the IR cut filter 56A. At this time, the visible light image sensor 54A receives the visible light that has passed through the IR cut filter 56A and captures a visible light image.

When the light source 53 emits infrared light, the endoscope head 51 switches its optical filter to the visible light cut filter 56B. At this time, the infrared light image sensor 54B receives the fluorescence (see above) that has passed through the visible light cut filter 56B and captures a fluorescence image.

When the endoscope head 51 is configured so that the IR cut filter 56A and the visible light cut filter 56B can be used together, it can capture the visible light image and the fluorescence image simultaneously. As the image sensors constituting the visible light image sensor 54A and the infrared light image sensor 54B, for example, CMOS sensors are used, but CCD sensors may also be used.

The light source 53 emits excitation light in the IR band (for example, light with a wavelength of 760 nm) for exciting the fluorescent agent (for example, indocyanine green). The light source 53 also emits white light (for example, light with a wavelength of 700 nm or less). The light source 53 can emit the IR excitation light and the white light in a time-division manner or simultaneously.
The camera control unit 60 is electrically connected to the endoscope head 51, the light source 53, and the monitor 80, and controls the operation of these devices in an integrated manner. The camera control unit 60 includes an image input unit 61, an image processing unit 62, and an image output unit 63.
The image input unit 61 receives the data of the captured images taken during surgery by the endoscope head 51. Besides a dedicated image input interface, the image input unit 61 may use an interface capable of transferring video data at high speed, such as HDMI (registered trademark) or USB Type-C.
The camera control unit 60 has a processor and a built-in memory, and the processor executes a program stored in the built-in memory to realize the functions of the image processing unit 62 and the image output unit 63. The processor may be a GPU suited to image processing. Instead of a GPU, the image processing unit 62 may be configured as a dedicated electronic circuit designed with an MPU, a CPU, an ASIC, or the like, or as a reconfigurable electronic circuit such as an FPGA.
The image processing unit 62 incorporates artificial intelligence (AI; see Embodiment 1) and has an image recognition function for detecting and recognizing vessels appearing in the captured image received by the image input unit 61. Using a trained model, the image processing unit 62 can realize this image recognition function in the same way as in Embodiment 1. The trained model is generated in advance, before actual operation of the endoscope system 40 begins. The procedure for generating the trained model is as described in Embodiment 1, so its description is omitted here.
Using the trained model described above, the image processing unit 62 performs image recognition on the vessels that may appear in the captured image received from the image input unit 61 during surgery, and generates a vessel identification image by coloring the region surrounding each detected vessel. This vessel identification image is superimposed on the surgical captured image by the image output unit 63. The image processing unit 62 may also evaluate (score) the result of the vessel recognition (for example, a matching probability) and use this evaluation to update the trained model by machine learning in real time.
The image output unit 63 generates a composite image in which the vessel identification image generated by the image processing unit 62 is superimposed on the surgical captured image received from the image input unit 61, and transmits the composite image data to the monitor 80. In addition to the surgical captured image and the vessel identification image, this composite image data further includes the position of the vessel within the composite image and color information for indicating the vessel region. The image output unit 63 may also transmit text data related to the vessel together with the composite image data; the text data may include, for example, the evaluation value (score) of the vessel recognition result, the organ name, and so on.
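The compositing performed by the image output unit 63 can be illustrated with a short OpenCV/NumPy sketch. It assumes the recognition step yields a binary mask of the detected vessel region; the overlay color and blending weight are illustrative choices, not values fixed by the disclosure:

```python
import cv2
import numpy as np

def composite(captured_bgr: np.ndarray, vessel_mask: np.ndarray,
              color=(0, 255, 0), alpha=0.5) -> np.ndarray:
    """Superimpose a colored vessel identification image (here: green,
    half transparent) onto the captured surgical-field image."""
    overlay = captured_bgr.copy()
    overlay[vessel_mask > 0] = color   # color the region surrounding the vessel
    return cv2.addWeighted(overlay, alpha, captured_bgr, 1 - alpha, 0)
```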
The monitor 80 displays the composite image data from the image output unit 63 (that is, the composite image in which the vessel identification image is superimposed on the captured image). The monitor 80 is composed of, for example, a liquid crystal display or an organic EL display and has a display surface on which images are shown.
FIG. 6 is a diagram showing an outline of the endoscope system 40. The endoscope system 40 includes an endoscope 50, the camera control unit 60, the monitor 80, and the light source 53.
The camera control unit 60 performs image processing on the captured image received from the endoscope 50 via a transmission cable and generates the processed captured image. The camera control unit 60 is connected to a first excitation light source 325 and a second excitation light source 327 via light source drive cables 314. The camera control unit 60 generates, in a light source drive circuit 365, a control signal for driving the first excitation light source 325 and outputs it to the first excitation light source 325 via the light source drive cable; likewise, it generates, in the light source drive circuit 365, a control signal for driving the second excitation light source 327 and outputs it to the second excitation light source 327.
The monitor 80 displays the captured image output from the camera control unit 60. The monitor 80 has a display device such as an LCD (Liquid Crystal Display) or a CRT (Cathode Ray Tube). In the endoscope system 40 according to Embodiment 2, the monitor 80 displays both the visible light image captured under the irradiated visible light and the fluorescence image captured from the fluorescence generated by the irradiated excitation light.
The light source 53 has, as its main components, the first excitation light source 325, the second excitation light source 327, a combining prism 335, an optical fiber 329, an insertion portion 331, and a phosphor 333.
The first excitation light source 325 is configured using a semiconductor laser such as a laser diode, which can emit laser light with a spectral half-width roughly one-tenth that of light from an LED (Light Emitting Diode). The first excitation light source 325 emits light in a narrow blue band (for example, light with wavelengths in the 380 nm to 450 nm band) for exciting the phosphor 333 to generate pseudo white light.
The second excitation light source 327 is likewise configured using a semiconductor laser such as a laser diode capable of emitting laser light with a spectral half-width roughly one-tenth that of LED light. The second excitation light source 327 emits light in a narrow wavelength band different from that of the first excitation light source 325; that is, it emits light in the infrared region (for example, light with wavelengths in the 690 nm to 820 nm band) for exciting the fluorescent agent administered to the affected area of the subject before endoscopic surgery or endoscopy.
The combining prism 335 guides the light emitted from the first excitation light source 325 and the light emitted from the second excitation light source 327 into the same optical fiber 329, multiplexing the blue-region light and the infrared-region light.
The light source 53 causes the blue-region light and the infrared-region light multiplexed by the combining prism 335 to enter the optical fiber 329 through its light entrance end. A condenser lens 341 is provided between the combining prism 335 and the optical fiber 329.
The optical fiber 329 can be, for example, a single optical fiber strand. Alternatively, a bundle fiber in which a plurality of optical fiber strands are bundled may be used as the optical fiber 329.
The optical fiber 329 is threaded through the insertion portion 331, through which the transmission cable 323 is also threaded. The insertion portion 331 is, for example, the part of the endoscope 50 that is inserted into the body cavity of the subject, and is a tubular rigid member.
An imaging window is arranged on the distal end face of the insertion portion 331. The imaging window is formed of an optical material such as optical glass or optical plastic and admits light from the subject (for example, the subject's body or an affected area within it). An illumination window is also arranged on the distal end face of the insertion portion 331; it is likewise formed of an optical material such as optical glass or optical plastic and emits the illumination light from the light exit end of the optical fiber 329.
Next, the operation of the endoscope system 40 according to Embodiment 2 is described.
FIG. 7 is a flowchart showing an example of the operating procedure of the endoscope system 40 according to Embodiment 2. This procedure is executed when an operator such as a doctor instructs the camera control unit 60 to activate the vessel image recognition function.
In FIG. 7, the image input unit 61 acquires a visible light image captured by the endoscope head 51 with the surgical field as the subject (S11). The image processing unit 62 analyzes the visible light image using the vessel image recognition function (that is, the trained model that realizes the AI-based image recognition function described above) (S12). The image processing unit 62 then determines whether a vessel has been detected as a result of the image analysis (S13).
When a vessel is detected (S13: YES), the image output unit 63 generates a composite image in which the vessel identification image corresponding to the vessel detected and recognized by the image processing unit 62 is superimposed on the captured image received from the image input unit 61 (S14). The image output unit 63 outputs the composite image to the monitor 80 (S15).
On the other hand, when the image processing unit 62 does not detect a vessel (S13: NO), the image output unit 63 outputs the visible light image captured by the endoscope head 51 (that is, the captured image received from the image input unit 61) to the monitor 80 as it is (S16). The camera control unit 60 then ends the procedure shown in FIG. 7. The procedure shown in FIG. 7 is repeated until an operator such as a doctor instructs the camera control unit 60 to stop the vessel image recognition function.
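The S11 to S16 loop can be sketched as follows, assuming a hypothetical `model.detect()` that returns a vessel mask or `None` and reusing the `composite()` helper from the earlier sketch; the capture and display interfaces are likewise illustrative:

```python
import cv2

def recognition_loop(capture, model, stop_requested):
    """S11-S16: acquire a frame, analyze it, and output either the
    composite image or the raw image (capture/model are hypothetical)."""
    while not stop_requested():
        ok, frame = capture.read()           # S11: visible light image
        if not ok:
            break
        mask = model.detect(frame)           # S12: trained-model analysis
        if mask is not None:                 # S13: vessel detected?
            frame = composite(frame, mask)   # S14: superimpose identification image
        cv2.imshow("monitor", frame)         # S15/S16: output to the monitor
        cv2.waitKey(1)
```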
As described above, the endoscope system 40 according to Embodiment 2 can accurately display the location of a vessel on the monitor 80 by feeding the image of the inside of the subject captured by the endoscope 50 into the camera control unit 60. Furthermore, since the camera control unit 60 can be connected to endoscope systems already installed in many hospitals, it can be used in many hospital facilities, increasing its versatility.
(Modification 1 of Embodiment 1)
When the medical image projection system 5 according to Embodiment 1 is in use, if, for example, the head of an operator such as a doctor appears in the image captured by the camera head 21, the site being treated may not be shown on the monitor 30. In Modification 1 of Embodiment 1, therefore, when it is detected that the operator's head has entered the image, the medical image projection system 5 may issue an alarm to notify the operator. For example, the camera control unit 10 has a built-in speaker and outputs an alarm sound from it. Since the alarm sound is meant to inform the operator that the site being treated, such as a tumor, is not visible, it may be an attention-getting sound such as a beep, but it may also be an agreeable sound such as a melody, or a voice message explaining the situation.
Instead of, or in addition to, outputting the alarm sound, the camera control unit 10 may display a predetermined alarm image on the monitor 30 to inform the operator that the site being treated, such as a tumor, is not visible. The alarm image may be, for example, a mark indicating an abnormality, or text explaining that the site, such as a tumor, is not shown. In this case, the operator merely shifts his or her head, and the image shown on the monitor 30 returns to a state in which the treated site is properly displayed. Instead of the operator shifting the head, a nurse or other assistant may change the orientation of the camera head 21. The same applies when a medical instrument used in the operation covers the treated site in the image: the operator merely shifts the instrument, and the image returns to a state in which the treated site is displayed; alternatively, an assisting nurse or the like may change the orientation of the camera head 21.
Besides the situations where the operator's head or a medical instrument blocks the view of the treated site, the magnification of the image captured by the camera head 21 may be too low to show the organ in detail. In this case, the camera control unit 10 changes the magnification of the camera head 21, either in response to an operation by the operator or automatically, captures an image at the changed magnification, and acquires the captured image. The camera control unit 10 may then run the vessel image recognition function on the captured image at the changed magnification and detect vessels.
The field of view imaged by the camera head 21 may also be fogged, so that a clear image cannot be obtained. In this case, the camera control unit 10 may apply image processing to the image captured by the camera head 21. For example, the camera control unit 10 adjusts the white balance of the captured image and changes its color tone so as to suppress white. The camera control unit 10 also performs processing that reduces the white component in the frequency spectrum of the captured image. The camera control unit 10 may then run the vessel image recognition function on the image thus made clearer by suppressing white and detect vessels.
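One possible form of the white-suppression processing is sketched below. A saturation boost in HSV space stands in for the white-balance adjustment and white-component reduction; the disclosure does not fix the exact processing, so the gains here are illustrative:

```python
import cv2
import numpy as np

def suppress_white(captured_bgr: np.ndarray, sat_gain=1.4, val_gain=0.9):
    """Reduce a fog-like white cast by boosting saturation and slightly
    lowering brightness (an illustrative stand-in for the processing)."""
    hsv = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 255)  # push pixels away from white
    hsv[..., 2] = np.clip(hsv[..., 2] * val_gain, 0, 255)  # temper overall brightness
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```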
(Modification 2 of Embodiment 1)
When the camera control unit 10 runs the vessel image recognition function, its processing load becomes heavy. In the medical image projection system 5 according to Embodiment 1, the vessel projection image is also projected onto the site corresponding to the surgical field of the opened subject, which places a considerable burden on the patient as well. In addition, because the treated site is effectively in a mapped state, an operator such as a doctor who keeps watching the mapped image may develop a false impression of the actual site. It is therefore desirable to keep the vessel image recognition function of the camera control unit 10 stopped except when it is needed for the procedure.
In Modification 2 of Embodiment 1, therefore, the camera control unit 10 determines the current scene (situation) of the procedure by analyzing the image captured by the camera head 21, and activates or stops the vessel image recognition function based on the determined scene. FIG. 8 is a diagram showing the registered contents of a scene determination table Tb1 representing the scenes that occur during open abdominal surgery, according to Modification 2 of Embodiment 1.
Scenes No. 1 through No. 9 are registered in the scene determination table Tb1. Specifically, scene No. 1 is "incising"; scene No. 2 is "securing blood vessels"; scene No. 3 is "detaching the liver (coronary ligament of the liver, etc.)"; scene No. 4 is "cholecystectomy"; scene No. 5 is "preparation before resection"; scene No. 6 is "marking the resection site"; scene No. 7 is "resecting"; scene No. 8 is "resecting (break)"; and scene No. 9 is "after resection".
The information used to determine a scene includes the step, the treatment, and the instrument in use. For example, in scene No. 1, the step is "incising", the treatment is "incising the skin (laparotomy)", and the instrument in use is a "scalpel". In this case, no organ containing vessels (for example, the liver) is exposed, so the camera control unit 10 stops the vessel image recognition function. In scene No. 7, on the other hand, the step is "resecting", the treatment is "resecting the liver to remove the tumor", and the instrument in use is a "CUSA" (ultrasonic surgical aspirator). In this case, an organ containing vessels is exposed, so the camera control unit 10 activates the vessel image recognition function.
The camera control unit 10 may determine the scene when a nurse assisting the procedure instructs the camera control unit 10 with the scene number. Alternatively, the camera control units 10 and 60 may determine the scene automatically from the instruments visible in the images captured by the camera head 21 or the endoscope head 51. In this case, the camera control units 10 and 60 accumulate image data of many instruments in use, perform machine learning by deep learning on this image data, and generate in advance a trained model that determines the scene from the instruments appearing in a captured image. The camera control units 10 and 60 may then feed the image data captured by the camera head 21 or the endoscope head 51 into this trained model and determine the scene from the instruments in use.
In this way, the camera control units 10 and 60 can determine the scene and activate or stop the vessel image recognition function according to the scene. The camera control units 10 and 60 can thus run the vessel image recognition function only when it is needed during the procedure.
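The scene-based gating can be sketched as follows. The mapping from scene numbers to an enable flag mirrors FIG. 8 only for scenes No. 1 and No. 7, which the text specifies; the remaining flags and the instrument classifier interface are illustrative assumptions:

```python
# Scene numbers from the scene determination table Tb1 (FIG. 8) mapped to
# whether vessel recognition should run. Only No. 1 (stop) and No. 7
# (activate) are specified in the text; the other flags are illustrative.
SCENE_ENABLES_RECOGNITION = {
    1: False,  # incising (no vessel-bearing organ exposed)
    2: True,   # securing blood vessels
    3: True,   # detaching the liver
    4: True,   # cholecystectomy
    5: True,   # preparation before resection
    6: True,   # marking the resection site
    7: True,   # resecting (organ with vessels exposed)
    8: False,  # resecting (break)
    9: False,  # after resection
}

def should_run_recognition(frame, instrument_classifier) -> bool:
    """Gate the vessel recognition function by the scene inferred from
    the instrument visible in the frame (hypothetical classifier)."""
    scene_no = instrument_classifier.predict_scene(frame)
    return SCENE_ENABLES_RECOGNITION.get(scene_no, False)
```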
When the image captured by the camera head 21 temporarily contains no medical instrument, the camera control unit 10 may stop the vessel image recognition function during that situation, judging it unnecessary, even in a scene in which the function would otherwise be activated. This allows the activation and stopping of the vessel image recognition function to be switched at a finer granularity.
The camera control units 10 and 60 may also activate or stop the vessel image recognition function on grounds other than scene determination. For example, the camera control unit 10 may detect the movement of the camera head 21 and stop the vessel image recognition function while the camera head 21 is moving.
Thus, in Modification 2 of Embodiment 1, when the image processing unit 12 detects, in the received captured image, a medical instrument for removing a tumor, it activates the vessel image recognition function using the captured image (that is, it starts the recognition processing). This allows the camera control unit 10 to display the vessel identification image at an appropriate time, aligned with the start of surgery.
The image processing unit 12 also stops the vessel image recognition function when it no longer detects the medical instrument in the received captured image. This allows the camera control unit 10 to remove the vessel identification image display at the end of surgery, when it is no longer needed, and so reduces the load on the camera control unit 10.
(Modification 3 of Embodiment 1)
In Modification 3 of Embodiment 1, the camera control unit 10 changes the display mode of the vessel identification image according to the surgical situation. Specifically, the camera control unit 10 varies the display of the vessel identification image according to the state of vessel discovery and treatment. For example, when the vessel image recognition function first detects a vessel in the image captured by the camera head 21, the camera control unit 10 displays on the monitor 30 a composite image in which a vessel identification image appropriate for "first discovery (unconfirmed)" is superimposed on the captured image.
FIG. 9 is a diagram showing the screen of the monitor 30 on which the vessel identification image according to Modification 3 of Embodiment 1 is displayed superimposed on an organ. The vessel identification image is a circular marker mk1 centered on the position of the vessel, with a diameter large enough to enclose it. The circular marker mk1 is displayed lit or blinking. At the first-discovery stage, the camera control unit 10 draws the circular marker mk1 with a thin purple line. Thereafter, when the vessel image recognition function repeatedly detects the vessel in the captured images, the camera control unit 10 may draw the circular marker mk1 with a thicker green line according to the number of detections.
When the operator has performed a treatment ("treated"), for example when the tumor has been removed, the camera control unit 10 may change the outline of the marker mk1 from a circle to a quadrangle and draw it in the same or a different color. Here, the camera control unit 10 may judge "treated" by confirming, for example, that forceps, an electrosurgical knife, a CUSA, or the like appeared in the images captured by the camera head 21 after the vessel was first discovered, and then confirming that these medical instruments are no longer present.
The camera control unit 10 may also judge "treated" using artificial intelligence. The camera control unit 10 accumulates many images of organs that include treated sites, performs machine learning by deep learning on treated organs using these images as training data, and generates in advance a trained model that detects treated organs in captured images. The camera control unit 10 may then feed the data of the images captured by the camera head 21 into this trained model to judge "treated".
When bleeding occurs, the camera control unit 10 may change the outline of the marker mk1 from a circle to a triangle, or make the marker mk1 blink, drawing it in the same or a different color. As with "treated", artificial intelligence may be used to judge "bleeding": the camera control unit 10 accumulates many images of bleeding organs, performs machine learning by deep learning on bleeding organs using these images as training data, and generates in advance a trained model that detects bleeding organs in captured images. The camera control unit 10 may then feed the data of the images captured by the camera head into this trained model to judge "bleeding".
The camera control unit 10 may also judge "bleeding" by confirming, for example, that an electrosurgical knife appears in the captured image and then confirming that the red component in the captured image has increased sharply.
The camera control unit 10 may also change the display form of the marker mk1 as time passes. For example, immediately after a vessel is discovered, the camera control unit 10 displays the marker mk1 conspicuously, for example at increased brightness. When a first period has elapsed after discovery, the camera control unit 10 lowers the brightness so that the marker mk1 gradually becomes inconspicuous; at this point, the brightness may be lowered to the point where the marker mk1 almost disappears. When a second period has further elapsed after the first, the camera control unit 10 raises the brightness again so that the marker mk1 is conspicuous once more and the location of the vessel is not forgotten. In this way, the camera control unit 10 can vary the display mode of the marker mk1 in consideration of the operator's state. To make the marker mk1 stand out, the camera control unit 10 may also make it blink.
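The time-varying brightness of the marker mk1 can be sketched as a simple opacity schedule. The first and second period lengths are not fixed by the disclosure, so `t1` and `t2` below are illustrative:

```python
def marker_alpha(elapsed_s: float, t1: float = 10.0, t2: float = 20.0,
                 floor: float = 0.05) -> float:
    """Opacity of marker mk1 versus time since the vessel was discovered:
    conspicuous at first, fading toward `floor` during the second period,
    then conspicuous again so the vessel's location is not forgotten."""
    if elapsed_s < t1:
        return 1.0                        # bright right after discovery
    if elapsed_s < t1 + t2:
        frac = (elapsed_s - t1) / t2      # gradual fade after the first period
        return max(floor, 1.0 - frac)
    return 1.0                            # bright again after the second period
```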
During surgery, when the operator turns over or shifts an organ such as the affected area, the vessel may temporarily leave the field of view and the marker mk1 indicating its presence may temporarily disappear. When a vessel has already been detected, the camera control unit 10 may constantly display, along the screen frame of the monitor 30 (for example, at the left, right, top, and bottom edges of the screen), a rectangular frame image wg (for example, a yellow frame image) indicating that a vessel has been detected. The rectangular frame image wg is displayed lit or blinking. Alternatively, the camera control unit 10 may display the rectangular frame image only while the marker mk1 indicating the presence of the vessel has temporarily disappeared. In this way, even while the marker mk1 has temporarily disappeared, the rectangular image shown along the screen frame of the monitor 30 lets an operator such as a doctor confirm that a vessel has been detected.
The camera control unit 10 may also change the line width of the rectangular frame image according to the surgical situation or the passage of time. For example, when an operator such as a doctor resects the tumor using an electrosurgical knife, the camera control unit 10 may display the rectangular frame image with a wider line. The camera control unit 10 may also change the color of the rectangular frame image from yellow to another color such as green, or gradually narrow the width of the rectangular frame image as time passes.
Thus, in Modification 3 of Embodiment 1, when the image processing unit 12 detects a vessel, the image output unit 13 causes the monitor 30 to display the rectangular frame image wg (a colored frame image) indicating that a vessel has been detected, blinking. This lets an operator such as a doctor notice the presence of the vessel.
Also, when the image processing unit 12 detects a vessel, the image output unit 13 causes the monitor 30 to display the circular marker mk1 (a colored circle image of a predetermined diameter) centered on the position in the captured image GZ where the vessel was detected, blinking. This makes it easier for an operator such as a doctor to grasp the position of the vessel.
(Modification 4 of Embodiment 1)
When the vessel identification image and the ICG fluorescence image (that is, the fluorescence image captured by the camera head 21 from the fluorescence emission of ICG) overlap on the monitor 30, it is difficult for an operator such as a doctor to determine the position of the vessel in the overlapping region. FIG. 10 is a diagram showing the screen of the monitor 30 on which the ICG fluorescence image fg and the vessel identification image mg according to Modification 4 of Embodiment 1 partially overlap and are displayed superimposed on the captured image GZ containing an organ. Here, the vessel identification image mg is a circle filled in green. The ICG fluorescence image fg is rendered in blue, but it mixes with the color of the organ behind it (red) and appears magenta.
The camera control unit 10 displays the images so that those of higher urgency or importance can be distinguished preferentially in the captured image. As one example, when displaying the captured image GZ (visible light image) from the camera head 21, the ICG fluorescence image fg, and the vessel identification image mg on the monitor 30, the camera control unit 10 renders each on its own layer. For example, when the vessel identification image mg is to take priority, the camera control unit 10 renders the vessel identification image mg on the top layer, the ICG fluorescence image fg on the intermediate layer, and the captured image GZ (visible light image) on the bottom layer. In the region where the vessel identification image and the ICG fluorescence image do not overlap, that is, where the vessel identification image is superimposed directly on the visible light image, the camera control unit 10 sets the color of the vessel identification image to be complementary to the color of the visible light image behind it (the image of the organ). This makes the vessel identification image easier for the operator to see. The color of the vessel identification image may instead be set to be complementary to the background color around the captured image (here, black). Red, which is hard to distinguish from the color of blood, and inconspicuous yellow are not used for the vessel identification image.
In Modification 4 of Embodiment 1, the camera control unit 10 renders the vessel identification image mg on the top layer and the ICG fluorescence image fg on the intermediate layer. Because the vessel identification image mg is drawn on a layer above the ICG fluorescence image fg, an operator such as a doctor can easily see the vessel identification image mg, that is, the position of the vessel, even in the region where the two images overlap.
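A short sketch of the layered rendering and the complementary-color choice follows. The blending weight and the mean-color heuristic used to pick the complement are illustrative; the disclosure specifies only the layer order and the complementary relationship:

```python
import numpy as np

def complementary(color_bgr):
    """Complement of an 8-bit BGR color (e.g., a red organ -> a cyan marker)."""
    return tuple(255 - c for c in color_bgr)

def render_layers(visible, icg_mask, vessel_mask, icg_alpha=0.5):
    """Bottom layer: visible image; middle: ICG fluorescence; top: vessel."""
    out = visible.astype(np.float32)
    icg_color = np.array([255, 0, 0], np.float32)   # blue ICG layer (BGR order)
    sel = icg_mask > 0
    out[sel] = (1 - icg_alpha) * out[sel] + icg_alpha * icg_color
    # Top layer: paint the vessel region in the complement of the mean
    # organ color beneath it (an illustrative way to pick the complement).
    region = vessel_mask > 0
    if region.any():
        mean_bgr = tuple(int(c) for c in visible[region].mean(axis=0))
        out[region] = complementary(mean_bgr)
    return out.astype(np.uint8)
```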
The camera control unit 10 need not split the vessel identification image and the ICG fluorescence image across layers as above; instead, in the region where they overlap, it may apply image processing such as changing the color of the vessel identification image, blinking the vessel identification image, or not displaying the ICG fluorescence image around the vessel. When changing the color of the vessel identification image, the camera control unit 10 may change it to a color complementary to that of the ICG fluorescence image fg, which makes the vessel identification image mg easier to distinguish. The color of the vessel identification image may also be changed to a color complementary to the background color of the captured image GZ.
Thus, in Modification 4 of Embodiment 1, the camera head 21 can image the fluorescence based on the fluorescence emission of the fluorescent agent accumulated in the tumor within the surgical field. When both the ICG fluorescence image fg (the fluorescence image of the tumor based on that imaging) and a vessel are detected in the captured image of the surgical field, and part of the ICG fluorescence image fg overlaps part of the vessel identification image mg (part of the circular region of predetermined diameter centered on the vessel position), the image output unit 13 causes the monitor 30 to display the vessel identification image mg blinking, centered on the position of the vessel 100, using the complement of the background color of the captured image GZ. By displaying the vessel identification image in a complementary color, the camera control unit 10 can make the vessel in the captured image stand out.
(Modification 5 of Embodiment 1)
FIG. 11 is a diagram schematically showing the ICG fluorescence image fg and the vessel identification image mg1 displayed on the monitor 30 when, according to Modification 5 of Embodiment 1, they overlap the organ contained in the captured image GZ. Here, the vessel identification image mg1 is a green image that roughly matches the outline of the vessel. The ICG fluorescence image fg is a blue-purple image. A CUSA detection image cg, which identifies the position of the CUSA (a medical instrument), is a blue circle image; the camera control unit 10 can detect the CUSA. Here, it is assumed that the CUSA detection image cg is placed so as to cover the vessel identification image mg1.
The camera control unit 10 applies the image processing shown in formula (1) to the ICG fluorescence image fg, the CUSA detection image cg, and the vessel identification image mg1, and displays the processed image on the monitor 30 superimposed on the captured image GZ containing the organ. In this image processing, the camera control unit 10 also performs contour extraction on the CUSA detection image cg to obtain its contour extraction image csg.
ICG fluorescence image fg - CUSA detection image cg + vessel identification image mg1 + contour extraction image csg of the CUSA detection image cg ... (1)
On the monitor 30, the ICG fluorescence image fg is hollowed out in the region where it overlaps the CUSA detection image cg, and the vessel identification image mg1 appears in the hollowed-out region. The vessel identification image mg1 is therefore never hidden behind the ICG fluorescence image fg, and an operator such as a doctor can accurately grasp the position of the vessel. Moreover, because the contour extraction image csg is added to the ICG fluorescence image fg, the region of the ICG fluorescence image fg remains distinguishable.
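Formula (1) can be sketched on 8-bit binary masks with OpenCV as follows; the mask representation and contour thickness are illustrative:

```python
import cv2
import numpy as np

def formula_1(fg_mask, cg_mask, mg1_mask):
    """result = fg - cg + mg1 + contour(cg), on 8-bit binary masks (0/255)."""
    hollowed = cv2.subtract(fg_mask, cg_mask)   # hollow the CUSA region out of fg
    contours, _ = cv2.findContours(cg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    csg = np.zeros_like(cg_mask)
    cv2.drawContours(csg, contours, -1, 255, thickness=2)  # contour image csg
    return cv2.bitwise_or(cv2.bitwise_or(hollowed, mg1_mask), csg)
```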
In Modification 5 of Embodiment 1, the camera head 21 can image the fluorescence based on the fluorescence emission of the fluorescent agent accumulated in the tumor within the surgical field. When both the ICG fluorescence image fg and a vessel are detected in the captured image of the surgical field, and part of the ICG fluorescence image fg overlaps part of the vessel identification image mg1, the image output unit 13 displays the vessel identification image mg1 (an example of an image of a predetermined color) in the region that excludes the part of the ICG fluorescence image fg around the vessel position. This makes it easier for the operator to find the position of the vessel even when the ICG fluorescence image and the vessel identification image overlap.
Various embodiments have been described above with reference to the drawings, but it goes without saying that the present disclosure is not limited to these examples. It is clear that a person skilled in the art can conceive of various changes, modifications, substitutions, additions, deletions, and equivalents within the scope of the claims, and these are naturally understood to belong to the technical scope of the present disclosure. The components of the various embodiments described above may also be combined arbitrarily without departing from the gist of the invention.
For example, Embodiment 1 describes the case where the medical image projection system 5 displays the vessel identification image of a vessel detected in an organ such as the affected area on the monitor 30, superimposed on the captured image of that organ, and also projects it toward the surgical field. The medical image projection system 5 may instead be a system that only projects the vessel identification image toward the surgical field, without superimposing it on the captured image of the organ and displaying it on the monitor 30.
This application is based on Japanese Patent Application No. 2019-060805 filed on March 27, 2019, the contents of which are incorporated herein by reference.
The present disclosure reproduces, in real time and with high accuracy, a vessel recognition capability equivalent to the tacit knowledge of a skilled doctor who quickly recognizes vessels appearing in the surgical field, and notifies the operator of the locations of vessels appearing on the dissection plane. It is therefore useful as a vessel recognition device, a vessel recognition method, and a vessel recognition system that support the provision of safe and secure surgery.
5 medical image projection system
10, 60 camera control unit
11, 61 image input unit
12, 62 image processing unit
13, 63 image output unit
21 camera head
22 projector
23, 53 light source
24A visible light sensor unit
24B infrared light sensor unit
25A, 54A visible-light image sensor
25B, 54B infrared-light image sensor
26A, 56A IR cut filter
26B, 56B visible light cut filter
30, 80 monitor
40 endoscope system
51 endoscope head
100 vessel

Claims (12)

  1.  術野を撮像する撮像装置と接続される脈管認識装置であって、
     前記撮像装置からの前記術野の撮像画像を入力する画像入力部と、
     入力された前記撮像画像に基づいて、前記撮像画像内に映る脈管を認識する画像処理部と、
     認識された前記脈管を教示する情報を前記撮像画像に重畳した合成画像を生成し、生成された前記合成画像をモニタに出力する画像出力部と、を備える、
     脈管認識装置。
    It is a vessel recognition device connected to an imaging device that images the surgical field.
    An image input unit that inputs an image captured from the surgical field from the imaging device,
    An image processing unit that recognizes the vessels reflected in the captured image based on the input captured image, and
    It includes an image output unit that generates a composite image in which the recognized information for teaching the vessel is superimposed on the captured image and outputs the generated composite image to a monitor.
    Vascular recognition device.
  2.  前記術野に投影可能に配置されたプロジェクタと接続され、
     前記画像出力部は、前記撮像画像内の前記脈管の位置情報と前記脈管を教示する情報とを含む脈管画像投影指示を生成して前記プロジェクタに送る、
     請求項1に記載の脈管認識装置。
    It is connected to a projector arranged so that it can be projected in the surgical field.
    The image output unit generates a vascular image projection instruction including the position information of the vascular in the captured image and the information for teaching the vascular, and sends it to the projector.
    The vessel recognition device according to claim 1.
  3.  前記画像処理部は、術野の撮像画像内に映る複数枚の脈管の教師データを用いた機械学習に基づいて生成された学習済みモデルを用いて、前記脈管を認識する、
     請求項1に記載の脈管認識装置。
    The image processing unit recognizes the vessels by using a trained model generated based on machine learning using teacher data of a plurality of vessels displayed in the captured image of the surgical field.
    The vessel recognition device according to claim 1.
  4.  前記画像入力部は、前記撮像画像として可視光画像を入力し、
     前記画像処理部は、前記可視光画像に基づいて、前記脈管を認識する、
     請求項3に記載の脈管認識装置。
    The image input unit inputs a visible light image as the captured image, and receives the image.
    The image processing unit recognizes the vessel based on the visible light image.
    The vessel recognition device according to claim 3.
  5.  前記画像処理部は、入力された前記撮像画像内に腫瘍部を摘出するための医療器具を検出すると、前記撮像画像を用いた前記脈管の認識処理を開始する、
     請求項1に記載の脈管認識装置。
    When the image processing unit detects a medical device for removing the tumor portion in the input captured image, the image processing unit starts the recognition process of the vessel using the captured image.
    The vessel recognition device according to claim 1.
  6.  前記画像処理部は、入力された前記撮像画像から前記医療器具を検出しなくなった場合に、前記認識処理を停止する、
     請求項5に記載の脈管認識装置。
    The image processing unit stops the recognition process when the medical device is no longer detected from the input captured image.
    The vessel recognition device according to claim 5.
  7.  前記画像出力部は、前記画像処理部により前記脈管が認識されると、前記脈管が認識された旨を示す色付き枠画像が点滅するように前記色付き枠画像を前記モニタに表示させる、
     請求項1に記載の脈管認識装置。
    When the image processing unit recognizes the vessel, the image output unit causes the monitor to display the colored frame image so that the colored frame image indicating that the vessel is recognized blinks.
    The vessel recognition device according to claim 1.
  8.  前記画像出力部は、前記画像処理部により前記脈管が認識されると、前記脈管が認識された前記撮像画像内の位置を中心とした所定径の色付き円画像が点滅するように前記色付き画像を前記モニタに表示させる、
     請求項1に記載の脈管認識装置。
    When the vessel is recognized by the image processing unit, the image output unit is colored so that a colored circle image having a predetermined diameter centered on the position in the captured image in which the vessel is recognized blinks. Display the image on the monitor,
    The vessel recognition device according to claim 1.
  9.  前記撮像装置は、前記術野内の腫瘍部に集積した蛍光薬剤の蛍光発光に基づく蛍光を撮像可能であり、
     前記画像出力部は、前記術野の撮像画像内において前記蛍光の撮像に基づく前記腫瘍部の蛍光画像と前記脈管との両方が認識され、かつ前記蛍光画像の一部と前記脈管の位置を中心とする所定径の円領域の一部とが重複する場合、前記撮像画像の背景色の補色を用いて、前記脈管の位置を中心とする円領域画像が点滅するように前記円領域画像を前記モニタに表示させる、
     請求項1に記載の脈管認識装置。
    The imaging device can image fluorescence based on the fluorescence emission of a fluorescent agent accumulated in the tumor portion in the surgical field.
    In the image output unit, both the fluorescence image of the tumor portion based on the imaging of the fluorescence and the vessel are recognized in the captured image of the surgical field, and a part of the fluorescence image and the position of the vessel are recognized. When a part of a circular region having a predetermined diameter centered on the image overlaps, the circular region image centered on the position of the vessel blinks by using the complementary color of the background color of the captured image. Display the image on the monitor,
    The vessel recognition device according to claim 1.
  10.  前記撮像装置は、前記術野内の腫瘍部に集積した蛍光薬剤の蛍光発光に基づく蛍光を撮像可能であり、
     前記画像出力部は、前記術野の撮像画像内において前記蛍光の撮像に基づく前記腫瘍部の蛍光画像と前記脈管との両方が認識され、かつ前記蛍光画像の一部と前記脈管の位置を中心とする所定径の円領域の一部とが重複する場合、前記脈管の位置の周辺の前記蛍光画像の一部の領域を除いた領域に所定色の画像を表示させる、
     請求項1に記載の脈管認識装置。
    The imaging device can image fluorescence based on the fluorescence emission of a fluorescent agent accumulated in the tumor portion in the surgical field.
    In the image output unit, both the fluorescence image of the tumor portion based on the imaging of the fluorescence and the vessel are recognized in the captured image of the surgical field, and a part of the fluorescence image and the position of the vessel are recognized. When a part of a circular region having a predetermined diameter centered on is overlapped, an image of a predetermined color is displayed in a region excluding a part of the fluorescent image around the position of the vessel.
    The vessel recognition device according to claim 1.
  11.  術野を撮像する撮像装置と接続される脈管認識装置により実行される脈管認識方法であって、
     前記撮像装置からの前記術野の撮像画像を入力し、
     入力された前記撮像画像に基づいて、前記撮像画像内に映る脈管を認識し、
     認識された前記脈管を教示する情報を前記撮像画像に重畳した合成画像を生成し、
     生成された前記合成画像をモニタに出力する、
     脈管認識方法。
    It is a vascular recognition method executed by a vascular recognition device connected to an imaging device that images the surgical field.
    The captured image of the surgical field from the imaging device is input, and
    Based on the input image, the vessel reflected in the image is recognized.
    A composite image in which the recognized information for teaching the vessel is superimposed on the captured image is generated.
    Output the generated composite image to the monitor.
    Vascular recognition method.
  12.  A vessel recognition system in which an imaging device that images a surgical field and a vessel recognition device are connected to each other, wherein
     the vessel recognition device:
     inputs a captured image of the surgical field from the imaging device;
     recognizes, based on the input captured image, a vessel appearing in the captured image; and
     generates a composite image in which information teaching the recognized vessel is superimposed on the captured image, and outputs the generated composite image to a monitor.
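    At the system level, the imaging device and the recognition device of this claim could be wired together as in the following sketch, reusing run_method() from the method sketch above; cv2.VideoCapture(0) stands in for the connected imaging device:

        import cv2  # run_method() as defined in the method sketch above

        cap = cv2.VideoCapture(0)  # hypothetical connected imaging device
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            run_method(frame)          # recognize, superimpose, display
            if cv2.waitKey(1) == 27:   # press Esc to stop
                break
        cap.release()
        cv2.destroyAllWindows()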
PCT/JP2019/050141 2019-03-27 2019-12-20 Blood vessel recognition device, blood vessel recognition method, and blood vessel recognition system WO2020194942A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019060805A JP7312394B2 (en) 2019-03-27 2019-03-27 Vessel Recognition Device, Vessel Recognition Method and Vessel Recognition System
JP2019-060805 2019-03-27

Publications (1)

Publication Number Publication Date
WO2020194942A1 (en) 2020-10-01

Family

ID=72608794

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/050141 WO2020194942A1 (en) 2019-03-27 2019-12-20 Blood vessel recognition device, blood vessel recognition method, and blood vessel recognition system

Country Status (2)

Country Link
JP (1) JP7312394B2 (en)
WO (1) WO2020194942A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022145424A1 (en) * 2020-12-29 2022-07-07 Anaut Inc. Computer program, method for generating learning model, and operation assisting apparatus
JP7081862B1 (en) * 2021-11-26 2022-06-07 Jmees Inc. Surgery support system, surgery support method, and surgery support program
JP7148193B1 (en) 2021-11-26 2022-10-05 Jmees Inc. Surgery support system, surgery support method, and surgery support program
WO2023112499A1 (en) * 2021-12-13 2023-06-22 FUJIFILM Corporation Endoscopic image observation assistance device and endoscope system
JP7223194B1 (en) 2022-05-27 2023-02-15 ExaWizards Inc. Information processing method, computer program, information processing device and information processing system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8996086B2 (en) * 2010-09-17 2015-03-31 OptimumTechnologies, Inc. Digital mapping system and method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160278678A1 (en) * 2012-01-04 2016-09-29 The Trustees Of Dartmouth College Method and apparatus for quantitative and depth resolved hyperspectral fluorescence and reflectance imaging for surgical guidance
WO2015001806A1 (en) * 2013-07-05 2015-01-08 Panasonic Corporation Projection system
JP2016013233A (en) * 2014-07-01 2016-01-28 Fujitsu Limited Output control method, image processing system, output control program, and information processing apparatus
WO2017056775A1 (en) * 2015-09-28 2017-04-06 FUJIFILM Corporation Projection mapping apparatus
WO2018012080A1 (en) * 2016-07-12 2018-01-18 Sony Corporation Image processing device, image processing method, program, and surgery navigation system
WO2018167816A1 (en) * 2017-03-13 2018-09-20 Shimadzu Corporation Imaging apparatus
JP2018171177A (en) * 2017-03-31 2018-11-08 Dai Nippon Printing Co., Ltd. Fundus image processing device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2022250031A1 (en) * 2021-05-24 2022-12-01
JP7368922B2 2021-05-24 2023-10-25 Anaut Inc. Information processing device, information processing method, and computer program
WO2023161193A3 (en) * 2022-02-22 2023-11-30 Karl Storz Se & Co. Kg Medical imaging device, medical system, method for operating a medical imaging device, and method of medical imaging

Also Published As

Publication number Publication date
JP7312394B2 (en) 2023-07-21
JP2020156860A (en) 2020-10-01

Similar Documents

Publication Publication Date Title
WO2020194942A1 (en) Blood vessel recognition device, blood vessel recognition method, and blood vessel recognition system
EP3471591B1 (en) Information processing apparatus, information processing method, program, and medical observation system
US9662042B2 (en) Endoscope system for presenting three-dimensional model image with insertion form image and image pickup image
WO2018163644A1 (en) Information processing device, assist system, and information processing method
CN113395928A (en) Enhanced medical vision system and method
CN114945314A (en) Medical image processing device, endoscope system, diagnosis support method, and program
JP7457415B2 (en) Computer program, learning model generation method, and support device
JP7194889B2 (en) Computer program, learning model generation method, surgery support device, and information processing method
US20230101192A1 (en) Methods and Systems for Controlling Cooperative Surgical Instruments with Variable Surgical Site Access Trajectories
JP7146318B1 (en) Computer program, learning model generation method, and surgery support device
US20230100989A1 (en) Surgical devices, systems, and methods using fiducial identification and tracking
WO2023052956A1 (en) Instrument control imaging systems for visualization of upcoming surgical procedure steps
WO2021044910A1 (en) Medical image processing device, endoscope system, medical image processing method, and program
WO2020009127A1 (en) Medical observation system, medical observation device, and medical observation device driving method
JP2022180177A (en) Endoscope system, medical image processing device, and operation method thereof
US20180140375A1 (en) Videoscopic surgery and endoscopic surgery procedures employing oscillating images
JP7480779B2 (en) Medical image processing device, driving method for medical image processing device, medical imaging system, and medical signal acquisition system
WO2020184228A1 (en) Medical image processing device, method for driving medical image processing device, and medical observation system
US20230096406A1 (en) Surgical devices, systems, and methods using multi-source imaging
US11992200B2 (en) Instrument control surgical imaging systems
WO2018225316A1 (en) Medical control device
WO2018220930A1 (en) Image processing device
JP2022180108A (en) Medical image processing apparatus, endoscope system, and method of operating medical image processing apparatus
WO2023052940A1 (en) Surgical devices, systems, and methods using multi-source imaging
WO2023052955A1 (en) Instrument control surgical imaging systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19921971; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19921971; Country of ref document: EP; Kind code of ref document: A1