WO2022145424A1 - Computer program, learning model generation method, and surgical support device - Google Patents
Computer program, learning model generation method, and surgical support device
- Publication number
- WO2022145424A1 (PCT application PCT/JP2021/048592)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- blood vessel
- surgical field
- learning model
- image
- field image
- Prior art date
Classifications
- A61B1/000094: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope, extracting biological structures
- A61B1/0005: Display arrangement combining images, e.g. side-by-side, superimposed or tiled
- A61B1/000096: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope, using artificial intelligence
- A61B1/06: Endoscopes with illuminating arrangements
- A61B5/489: Locating particular structures in or on the body; blood vessels
- A61B5/7425: Displaying combinations of multiple images regardless of image source, e.g. displaying a reference anatomical image with a live image
- G06T1/00: General purpose image data processing
- G06T7/00: Image analysis
- G06T7/0012: Biomedical image inspection
- A61B1/044: Endoscopes combined with photographic or television appliances, for absorption imaging
- A61B2505/05: Surgical care
- A61B90/361: Image-producing devices, e.g. surgical cameras
Definitions
- The present invention relates to a computer program, a learning model generation method, and a surgical support device.
- An object of the present invention is to provide a computer program capable of outputting a blood vessel recognition result from a surgical field image, a learning model generation method, and a surgical support device.
- The computer program in one aspect of the present invention causes a computer to acquire a surgical field image obtained by imaging the surgical field of endoscopic surgery and, using a learning model trained to output information on blood vessels when the surgical field image is input, to distinguish and recognize the blood vessels included in the acquired surgical field image and, among those blood vessels, the blood vessels that require attention.
- The learning model generation method in one aspect of the present invention causes a computer to acquire training data including a surgical field image obtained by imaging the surgical field of endoscopic surgery, first correct answer data indicating the blood vessel portions included in the surgical field image, and second correct answer data indicating the blood vessel portions that require attention, and to generate, based on the acquired training data, a learning model that outputs information on blood vessels when a surgical field image is input.
- The surgical support device in one aspect of the present invention includes an acquisition unit that acquires a surgical field image obtained by imaging the surgical field of endoscopic surgery; a recognition unit that, using a learning model trained to output information on blood vessels when the surgical field image is input, distinguishes and recognizes the blood vessels included in the acquired surgical field image and, among those blood vessels, the blood vessels that require attention; and an output unit that outputs support information related to the endoscopic surgery based on the recognition result of the recognition unit.
- According to the present invention, a blood vessel recognition result can be output from the surgical field image.
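As an editorial illustration of how such a recognition result might be rendered as support information (the patent specifies no implementation; the function name, colors, and threshold below are assumptions), the following sketch thresholds two per-pixel probability maps, one for all microvessels and one for attention vessels, and blends them over the surgical field image in distinct colors:

```python
import numpy as np

def overlay_recognition(frame, p_vessel, p_attention, thr=0.5):
    """Blend microvessel (green) and attention-vessel (red) masks into a frame.

    frame: H x W x 3 uint8 surgical field image.
    p_vessel, p_attention: H x W probability maps in [0, 1].
    The colors and the 0.5 threshold are illustrative choices.
    """
    out = frame.copy()
    vessel = p_vessel >= thr        # all recognized microvessels
    attention = p_attention >= thr  # the subset that requires attention
    out[vessel] = (0.5 * out[vessel] + 0.5 * np.array([0, 255, 0])).astype(np.uint8)
    out[attention] = (0.5 * out[attention] + 0.5 * np.array([255, 0, 0])).astype(np.uint8)
    return out

# Tiny worked example: pixel (0,0) is a plain microvessel,
# pixel (1,0) is a microvessel that also requires attention.
frame = np.zeros((2, 2, 3), dtype=np.uint8)
p_vessel = np.array([[0.9, 0.0], [0.9, 0.0]])
p_attention = np.array([[0.0, 0.0], [0.9, 0.0]])
result = overlay_recognition(frame, p_vessel, p_attention)
```

In an actual system the two probability maps would come from the first and second learning models described later; here they are supplied by hand.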
- FIG. 1 is a schematic diagram illustrating the schematic configuration of the laparoscopic surgery support system according to Embodiment 1.
- FIG. 2 is a block diagram illustrating the internal configuration of the surgery support device.
- FIG. 3 is a schematic diagram showing an example of a surgical field image.
- FIG. 4 is a schematic diagram showing a configuration example of the first learning model.
- FIG. 5 is a schematic diagram showing a recognition result produced by the first learning model.
- FIG. 6 is a schematic diagram showing a configuration example of the second learning model.
- FIG. 7 is a schematic diagram showing a recognition result produced by the second learning model.
- FIG. 8 is a flowchart explaining the procedure for generating the first learning model.
- FIG. 9 is a flowchart explaining the procedure for executing surgical support.
- FIG. 10 is a schematic diagram showing a display example of microvessels.
- FIG. 11 is a schematic diagram showing a display example of attention vessels.
- FIG. 12 is an explanatory diagram of the method for generating training data for the second learning model.
- FIG. 13 is an explanatory diagram of the structure of the softmax layer of the learning model in Embodiment 3.
- FIG. 14 is a schematic diagram showing a display example in Embodiment 3.
- FIG. 15 is a schematic diagram showing a display example in Embodiment 4.
- FIG. 16 is an explanatory diagram of the display method in Embodiment 5.
- FIG. 17 is a flowchart explaining the procedure of the processing executed by the surgery support device according to Embodiment 6.
- FIG. 18 is a schematic diagram showing a display example in Embodiment 6.
- FIG. 19 is an explanatory diagram of the structure of the softmax layer of the learning model in Embodiment 7.
- FIG. 20 is a schematic diagram showing a display example in Embodiment 7.
- FIG. 21 is a schematic diagram showing a configuration example of the learning model for special optical images.
- FIG. 22 is a flowchart explaining the procedure of the processing executed by the surgery support device according to Embodiment 8.
- FIG. 23 is an explanatory diagram outlining the processing performed by the surgery support device according to Embodiment 9.
- FIG. 24 is a flowchart explaining the procedure for executing surgical support in Embodiment 10.
- FIG. 25 is a schematic diagram showing an example of an enlarged display.
- FIG. 26 is a schematic diagram showing an example of a warning display.
- FIG. 1 is a schematic diagram illustrating a schematic configuration of a laparoscopic surgery support system according to the first embodiment.
- In laparoscopic surgery, instead of performing a laparotomy, a plurality of port devices called trocars 10 are attached to the patient's abdominal wall, and devices such as the laparoscope 11, the energy treatment tool 12, and the forceps 13 are inserted into the patient's body through the holes provided in the trocars 10. The surgeon performs treatment, such as excising the affected area with the energy treatment tool 12, while viewing in real time the image of the inside of the patient (the surgical field image) captured by the laparoscope 11. Surgical tools such as the laparoscope 11, the energy treatment tool 12, and the forceps 13 are held by the surgical staff, a robot, or the like. Here, a surgeon is a medical worker involved in the laparoscopic surgery, including the operating surgeon, assistants, nurses, and a doctor who monitors the surgery.
- The laparoscope 11 includes an insertion portion 11A inserted into the patient's body, an image pickup device 11B built into the tip of the insertion portion 11A, an operation portion 11C provided at the rear end of the insertion portion 11A, and a universal cord 11D for connecting to the camera control unit (CCU) 110 and the light source device 120.
- The insertion portion 11A of the laparoscope 11 is formed of a rigid tube.
- A curved portion is provided at the tip of the rigid tube.
- The bending mechanism in the curved portion is a well-known mechanism incorporated in general laparoscopes, and is configured to bend in four directions, for example up, down, left, and right, by pulling an operation wire linked to the operation of the operation portion 11C.
- The laparoscope 11 is not limited to a flexible scope having a curved portion as described above; it may be a rigid scope having no curved portion, or an imaging device having neither a curved portion nor a rigid tube. Further, the laparoscope 11 may be an omnidirectional camera that captures a 360-degree range.
- The image pickup device 11B includes a solid-state image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) sensor, and a driver circuit including a timing generator (TG) and an analog signal processing circuit (AFE).
- The driver circuit of the image pickup device 11B takes in the RGB signals output from the solid-state image sensor in synchronization with the clock signal output from the TG, performs necessary processing such as noise removal, amplification, and A/D conversion in the AFE, and generates image data in digital format.
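The AFE steps named above (noise handling, amplification, and A/D conversion) can be caricatured in a few lines. The toy sketch below is illustrative only and is not the actual circuit behavior; the gain and bit depth are arbitrary assumptions:

```python
import numpy as np

def toy_afe(analog, gain=2.0, bits=8):
    """Toy analog front end: amplify, clip to [0, 1], quantize to n-bit codes."""
    amplified = np.clip(analog * gain, 0.0, 1.0)  # amplification + clipping
    levels = 2 ** bits - 1                        # 255 for 8-bit A/D conversion
    return np.round(amplified * levels).astype(np.uint16)

# Normalized analog samples in [0, 1] from the image sensor (made-up values).
samples = np.array([0.1, 0.25, 0.6])
digital = toy_afe(samples)
```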
- The driver circuit of the image pickup device 11B transmits the generated image data to the CCU 110 via the universal cord 11D.
- The operation unit 11C is provided with an angle lever, remote switches, and the like operated by the surgeon.
- The angle lever is an operating tool that receives operations for bending the curved portion.
- A bending operation knob, a joystick, or the like may be provided instead.
- The remote switches include, for example, a changeover switch for switching the observation image between moving image display and still image display, and a zoom switch for enlarging or reducing the observation image.
- Each remote switch may be assigned a specific predetermined function, or a function set by the operator.
- The operation unit 11C may incorporate a vibrator composed of a linear resonant actuator, a piezo actuator, or the like.
- When an event to be notified to the surgeon occurs, the CCU 110 may inform the surgeon of the occurrence of the event by operating the vibrator built into the operation unit 11C to vibrate the operation unit 11C.
- Inside the laparoscope 11, a light guide or the like is arranged that guides the illumination light emitted from the light source device 120 to the tip of the insertion portion 11A.
- The illumination light emitted from the light source device 120 is guided to the tip of the insertion portion 11A through the light guide, and illuminates the surgical field through an illumination lens provided at the tip of the insertion portion 11A.
- In the present embodiment, the light source device 120 is described as an independent device, but the light source device 120 may be built into the CCU 110.
- The CCU 110 includes a control circuit that controls the operation of the image pickup device 11B included in the laparoscope 11, an image processing circuit that processes the image data from the image pickup device 11B input through the universal cord 11D, and the like.
- The control circuit includes a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and the like; in response to the operation of the various switches provided on the CCU 110 and the operation of the operation unit 11C provided on the laparoscope 11, it outputs control signals to the image pickup device 11B and performs control such as starting image pickup, stopping image pickup, and zooming.
- The image processing circuit includes a DSP (Digital Signal Processor), image memory, and the like, and applies processing such as color separation, color interpolation, gain correction, white balance adjustment, and gamma correction to the image data input through the universal cord 11D.
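Of the corrections listed, gamma correction is the simplest to make concrete. This minimal sketch is an illustration only, not the CCU's actual processing; it applies a standard power-law gamma to an 8-bit image:

```python
import numpy as np

def gamma_correct(img, gamma=2.2):
    """Apply power-law gamma correction to an 8-bit image."""
    normalized = img.astype(np.float64) / 255.0
    corrected = normalized ** (1.0 / gamma)  # brighten mid-tones for display
    return np.round(corrected * 255.0).astype(np.uint8)

img = np.array([[0, 64, 255]], dtype=np.uint8)
out = gamma_correct(img)  # black and white endpoints are preserved
```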
- The CCU 110 generates frame images for a moving image from the processed image data, and sequentially outputs each generated frame image to the surgery support device 200 described later.
- The frame rate of the frame images is, for example, 30 FPS (frames per second).
- The CCU 110 may generate video data conforming to a predetermined standard such as NTSC (National Television System Committee), PAL (Phase Alternating Line), or DICOM (Digital Imaging and Communications in Medicine).
- The CCU 110 can display the surgical field image (video) on the display screen of the display device 130 in real time.
- The display device 130 is a monitor provided with a liquid crystal panel, an organic EL (electroluminescence) panel, or the like. The CCU 110 may also output the generated video data to the recording device 140 and have the recording device 140 record it.
- The recording device 140 includes a recording device such as an HDD (Hard Disk Drive) that records the video data output from the CCU 110 together with an identifier identifying each surgery, the date and time of the surgery, the place of the surgery, the patient's name, the operator's name, and the like.
- The surgery support device 200 generates support information regarding the laparoscopic surgery based on the image data input from the CCU 110 (that is, the image data of the surgical field image obtained by imaging the surgical field). Specifically, the surgery support device 200 distinguishes and recognizes all the microvessels included in the surgical field image and, among these microvessels, the microvessels that require attention, and performs processing for displaying information on the recognized microvessels on the display device 130.
- Here, a microvessel is a small blood vessel that has no unique name and runs irregularly in the body.
- Blood vessels that have unique names and are easily recognized by the operator may be excluded from the recognition target. That is, blood vessels given unique names such as the left gastric artery, right gastric artery, left hepatic artery, right hepatic artery, splenic artery, superior mesenteric artery, inferior mesenteric artery, hepatic vein, left renal vein, and right renal vein may be excluded from the recognition target.
- Microvessels are, for example, blood vessels with a diameter of approximately 3 mm or less. However, even a blood vessel with a diameter exceeding 3 mm may be recognized if it has no unique name; conversely, even a blood vessel with a diameter of 3 mm or less may be excluded from the recognition target if it has a unique name and is easily recognized by the operator.
- Microvessels that require attention are the above-mentioned microvessels to which the surgeon should pay attention (hereinafter also referred to as attention vessels).
- Attention vessels are blood vessels that may be damaged during surgery, or that the surgeon may fail to notice during surgery.
- For example, the surgery support device 200 may recognize microvessels present in the surgeon's central visual field as attention vessels, or may recognize microvessels outside the surgeon's central visual field as attention vessels. Further, the surgery support device 200 may recognize microvessels in a tensioned state, such as being stretched, as attention vessels regardless of whether they are in the central visual field.
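The central-visual-field criterion above can be read as a geometric test on the recognized microvessel mask. The sketch below is one hypothetical reading (the circle center, radius, and the choice to flag vessels outside the field are all assumptions, since the patent allows flagging either inside or outside the central field):

```python
import numpy as np

def attention_outside_center(vessel_mask, radius=None):
    """Flag microvessel pixels lying outside a circular central visual field.

    vessel_mask: H x W boolean mask of recognized microvessels.
    The circle is centered on the image; its radius defaults to a quarter
    of the shorter side (an arbitrary illustrative value).
    """
    h, w = vessel_mask.shape
    cy, cx = h / 2, w / 2
    r = radius if radius is not None else min(h, w) / 4
    yy, xx = np.mgrid[0:h, 0:w]
    outside = (yy - cy) ** 2 + (xx - cx) ** 2 > r ** 2
    return vessel_mask & outside

mask = np.zeros((8, 8), dtype=bool)
mask[4, 4] = True  # near the image center: not flagged
mask[0, 0] = True  # periphery: flagged as an attention candidate
flags = attention_outside_center(mask)
```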
- In the present embodiment, a configuration in which the microvessel recognition processing is executed in the surgery support device 200 will be described; however, the CCU 110 may instead be provided with the same functions as the surgery support device 200 so that the recognition processing is executed in the CCU 110.
- FIG. 2 is a block diagram illustrating the internal configuration of the surgery support device 200.
- The surgery support device 200 is a dedicated or general-purpose computer including a control unit 201, a storage unit 202, an operation unit 203, an input unit 204, an output unit 205, a communication unit 206, and the like.
- The surgery support device 200 may be a computer installed in the operating room or a computer installed outside the operating room. It may also be a server installed in the hospital where the laparoscopic surgery is performed, or a server installed outside the hospital.
- The control unit 201 includes, for example, a CPU, a ROM, a RAM, and the like.
- The ROM included in the control unit 201 stores a control program and the like for controlling the operation of each hardware unit included in the surgery support device 200.
- The CPU of the control unit 201 executes the control program stored in the ROM and the various computer programs stored in the storage unit 202 described later, and controls the operation of each hardware unit, thereby causing the entire device to function as the surgery support device of the present application.
- The RAM included in the control unit 201 temporarily stores data and the like used during execution of calculations.
- control unit 201 is configured to include a CPU, ROM, and RAM, but the configuration of the control unit 201 is arbitrary, for example, GPU (Graphics Processing Unit), DSP (Digital Signal Processor), FPGA. It may be an arithmetic circuit or a control circuit provided with one or a plurality of (FieldProgrammableGateArray), a quantum processor, a volatile or non-volatile memory, and the like. Further, the control unit 201 may have functions such as a clock for outputting date and time information, a timer for measuring the elapsed time from the instruction for starting measurement to the instruction for ending measurement, and a counter for counting numbers. good.
- The storage unit 202 includes a storage device such as a hard disk or flash memory.
- The storage unit 202 stores the computer programs executed by the control unit 201, various data acquired from the outside, various data generated inside the device, and the like.
- The computer programs stored in the storage unit 202 include a recognition processing program PG1 that causes the control unit 201 to execute processing for recognizing the microvessel portions included in the surgical field image, a display processing program PG2 that causes the control unit 201 to execute processing for displaying support information based on the recognition result on the display device 130, and a learning processing program PG3 for generating the learning models 310 and 320.
- The recognition processing program PG1 and the display processing program PG2 need not be independent computer programs, and may be implemented as a single computer program. These programs are provided, for example, on a non-transitory recording medium M on which the computer programs are recorded in a readable manner.
- The recording medium M is a portable memory such as a CD-ROM, a USB memory, or an SD (Secure Digital) card.
- The control unit 201 reads a desired computer program from the recording medium M using a reader (not shown) and stores the read computer program in the storage unit 202.
- Alternatively, the computer programs may be provided by communication using the communication unit 206.
- Further, the learning models 310 and 320 used by the above-mentioned recognition processing program PG1 are stored in the storage unit 202.
- The learning model 310 is a learning model trained to output, in response to the input of the surgical field image, the recognition result of the microvessel portions included in the surgical field image.
- The learning model 320 is a learning model trained to output, in response to the input of the surgical field image, the recognition result of the microvessel portions that require attention among the microvessels included in the surgical field image.
- In the following, when the learning models 310 and 320 are described separately, the former is also referred to as the first learning model 310 and the latter as the second learning model 320.
- The learning models 310 and 320 are each described by definition information.
- The definition information of the learning models 310 and 320 includes information on the layers included in the learning models 310 and 320, information on the nodes constituting each layer, and parameters such as the weights and biases between nodes.
- The learning model 310 stored in the storage unit 202 is a trained learning model that has been trained by a predetermined learning algorithm using, as training data, surgical field images obtained by imaging the surgical field and correct answer data indicating the microvessel portions in those surgical field images.
- Similarly, the learning model 320 is a trained learning model that has been trained by a predetermined learning algorithm using, as training data, surgical field images obtained by imaging the surgical field and correct answer data indicating the attention vessel portions in those surgical field images.
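The patent does not name the learning algorithm. One common objective for this kind of per-pixel correct answer data is a pixel-wise binary cross-entropy between the model's output probability map and the correct answer mask; the sketch below (function and variable names are illustrative) computes that loss in NumPy:

```python
import numpy as np

def pixel_bce_loss(prob_map, correct_mask, eps=1e-7):
    """Mean per-pixel binary cross-entropy between a predicted probability
    map and correct answer data (1 = microvessel pixel, 0 = background)."""
    p = np.clip(prob_map, eps, 1 - eps)  # avoid log(0)
    y = correct_mask.astype(np.float64)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# A prediction close to the correct answer data yields a small loss;
# an inverted prediction yields a large one.
y = np.array([[1, 0], [1, 0]])
good = np.array([[0.99, 0.01], [0.99, 0.01]])
bad = np.array([[0.01, 0.99], [0.01, 0.99]])
```

Minimizing such a loss over many annotated surgical field images is one plausible instance of the "predetermined learning algorithm" mentioned above.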
- the configuration of the learning models 310 and 320 and the procedure for generating the learning models 310 and 320 will be described in detail later.
- the operation unit 203 includes an operation device such as a keyboard, a mouse, a touch panel, and a stylus pen.
- the operation unit 203 receives an operation by an operator or the like, and outputs information regarding the received operation to the control unit 201.
- the control unit 201 executes appropriate processing according to the operation information input from the operation unit 203.
- in the present embodiment, the surgery support device 200 is provided with the operation unit 203, but operations may also be received through various externally connected devices such as the CCU 110.
- the input unit 204 includes a connection interface for connecting input devices.
- the input device connected to the input unit 204 is the CCU 110.
- Image data of the surgical field image imaged by the laparoscope 11 and processed by the CCU 110 is input to the input unit 204.
- the input unit 204 outputs the input image data to the control unit 201.
- the control unit 201 may store the image data acquired from the input unit 204 in the storage unit 202.
- in the present embodiment, the configuration in which the image data of the surgical field image is acquired from the CCU 110 through the input unit 204 will be described; however, the image data of the surgical field image may be acquired directly from the laparoscope 11, or from an image processing device (not shown) detachably attached to the laparoscope 11.
- the surgery support device 200 may acquire image data of the surgical field image recorded in the recording device 140.
- the output unit 205 includes a connection interface for connecting output devices.
- the output device connected to the output unit 205 is the display device 130.
- when the control unit 201 generates information to be notified to the operator or the like, such as the recognition results of the learning models 310 and 320, it outputs the generated information from the output unit 205 to the display device 130, causing the display device 130 to display the information.
- the display device 130 is connected to the output unit 205 as an output device, but an output device such as a speaker that outputs sound may be connected to the output unit 205.
- the communication unit 206 includes a communication interface for transmitting and receiving various data.
- the communication interface included in the communication unit 206 is a communication interface conforming to a wired or wireless communication standard such as Ethernet (registered trademark) or WiFi (registered trademark).
- the surgery support device 200 does not have to be a single computer, but may be a computer system including a plurality of computers and peripheral devices. Further, the surgery support device 200 may be a virtual machine virtually constructed by software.
- FIG. 3 is a schematic diagram showing an example of a surgical field image.
- the surgical field image in the present embodiment is an image obtained by imaging the inside of the abdominal cavity of the patient with a laparoscope 11.
- the surgical field image does not have to be a raw image output by the image pickup device 11B of the laparoscope 11, and may be an image (frame image) processed by CCU 110 or the like.
- the surgical field imaged by the laparoscope 11 includes tissues constituting organs, tissues including lesions such as tumors, membranes and layers covering the tissues, blood vessels existing around the tissues, and the like. While grasping the relationship between these anatomical structures, the surgeon uses an instrument such as forceps or an energy treatment tool to exfoliate or dissect the target tissue.
- the surgical field image shown as an example in FIG. 3 shows a scene in which the membrane covering an organ is pulled using the forceps 13 and the periphery of the target tissue, including the membrane, is about to be peeled off using the energy treatment tool 12. Bleeding occurs when blood vessels are damaged during such traction or detachment. Bleeding blurs tissue boundaries and makes it difficult to recognize the correct exfoliation layer. In particular, in situations where hemostasis is difficult, the visual field deteriorates significantly, and a careless hemostasis operation poses a risk of secondary injury.
- the surgery support device 200 recognizes the microvessel portion included in the surgical field image using the learning models 310 and 320, and outputs the support information regarding the laparoscopic surgery based on the recognition result.
- FIG. 4 is a schematic diagram showing a configuration example of the first learning model 310.
- the first learning model 310 is a learning model for performing image segmentation, and is constructed by a neural network including a convolution layer such as SegNet.
- the first learning model 310 is not limited to SegNet, and may be constructed using any neural network capable of image segmentation, such as FCN (Fully Convolutional Network), U-Net (U-Shaped Network), or PSPNet (Pyramid Scene Parsing Network).
- the first learning model 310 may also be constructed using a neural network for object detection, such as YOLO (You Only Look Once) or SSD (Single Shot Multi-Box Detector), instead of a neural network for image segmentation.
- the input image to the first learning model 310 is a surgical field image obtained from the laparoscope 11.
- the first learning model 310 is trained to output an image showing the recognition result of the microvessel portion included in the surgical field image in response to the input of the surgical field image.
- the first learning model 310 includes, for example, an encoder 311, a decoder 312, and a softmax layer 313.
- the encoder 311 is configured by alternately arranging convolution layers and pooling layers.
- the convolutional layers are multi-layered into two or three layers. In the example of FIG. 4, the convolution layers are shown without hatching, and the pooling layers are shown with hatching.
- in each convolution layer, a convolution operation is performed between the input data and a filter of a specified size (for example, 3×3 or 5×5). That is, the input value at the position corresponding to each element of the filter is multiplied by the weighting coefficient preset in the filter for that element, and the linear sum of these per-element products is calculated.
- the output of the convolution layer is obtained by adding a set bias to the calculated linear sum.
- the result of the convolution operation may be converted by the activation function. For example, ReLU (Rectified Linear Unit) can be used as the activation function.
- the output of the convolution layer represents a feature map that extracts the features of the input data.
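The per-element weighted sum with bias and ReLU described for the convolution layer can be sketched in miniature. This is an illustrative single-channel "valid" convolution in plain Python; the toy image, the 3×3 filter values, and the zero bias are made-up examples, not values from the embodiment.

```python
# Minimal single-channel "valid" convolution: for each window position,
# multiply input values by the filter's weighting coefficients, sum them,
# add the bias, then apply the ReLU activation -- as described for the encoder.
def conv2d_relu(image, kernel, bias):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = bias
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(max(0.0, s))  # ReLU: negative sums become 0
        out.append(row)
    return out

image = [[1, 2, 0, 1],
         [0, 1, 3, 1],
         [2, 1, 0, 0],
         [1, 0, 1, 2]]
kernel = [[0, 1, 0],
          [1, -4, 1],
          [0, 1, 0]]  # illustrative filter coefficients
feature_map = conv2d_relu(image, kernel, bias=0.0)
print(feature_map)
```

Each output cell is the linear sum of the window values weighted by the filter, plus the bias, passed through ReLU; the 4×4 input shrinks to a 2×2 feature map.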
- in each pooling layer, local statistics of the feature map output from the convolution layer, which is the upper layer connected on the input side, are calculated. Specifically, a window of a predetermined size (for example, 2×2 or 3×3) corresponding to the position in the upper layer is set, and a local statistic is calculated from the input values within the window. As the statistic, for example, the maximum value can be adopted. The size of the feature map output from the pooling layer is reduced (downsampled) according to the size of the window.
- the example of FIG. 4 shows that, by sequentially repeating the operations in the convolution layers and the pooling layers, the encoder 311 sequentially downsamples an input image of 224 pixels × 224 pixels into feature maps of 112×112, 56×56, 28×28, ..., 1×1.
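The window-maximum downsampling described for the pooling layer can be sketched as follows: a 2×2 window with stride 2 keeps the local maximum and halves the feature-map size, as in the 224 → 112 → 56 → ... sequence. The feature-map values are illustrative.

```python
# 2x2 max pooling with stride 2: keep the local maximum in each window,
# downsampling the feature map to half its size, as in the encoder 311.
def max_pool2x2(fmap):
    out = []
    for i in range(0, len(fmap) - 1, 2):
        row = []
        for j in range(0, len(fmap[0]) - 1, 2):
            window = [fmap[i][j], fmap[i][j + 1],
                      fmap[i + 1][j], fmap[i + 1][j + 1]]
            row.append(max(window))  # maximum adopted as the local statistic
        out.append(row)
    return out

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 1, 5, 2],
        [2, 2, 3, 4]]
pooled = max_pool2x2(fmap)  # 4x4 -> 2x2
print(pooled)
```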
- the output of the encoder 311 (1 ⁇ 1 feature map in the example of FIG. 4) is input to the decoder 312.
- the decoder 312 is configured by alternately arranging deconvolution layers and unpooling layers.
- the deconvolution layers are multi-layered into two or three layers. In the example of FIG. 4, the deconvolution layers are shown without hatching, and the unpooling layers are shown with hatching.
- the deconvolution operation is performed on the input feature map.
- the deconvolution operation is an operation that restores the feature map before the convolution operation under the presumption that the input feature map is the result of the convolution operation using a specific filter.
- when the specific filter is represented by a matrix, a feature map for output is generated by calculating the product of the transposed matrix of this matrix and the input feature map.
- the calculation result of the deconvolution layer may be converted by an activation function such as ReLU described above.
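The transposed-matrix view of the deconvolution operation can be illustrated on a tiny 1-D case: the convolution with a length-2 filter is written as a matrix C, and the "deconvolution" is the product with the transpose of C, which maps the shorter feature map back to the input length. All numbers here are illustrative.

```python
# Convolution expressed as a matrix product, and deconvolution as the
# product with the transposed matrix, for a tiny 1-D example.
def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def transpose(m):
    return [[m[i][j] for i in range(len(m))] for j in range(len(m[0]))]

# "valid" 1-D convolution of a length-4 signal with filter [1, 2],
# written as a 3x4 matrix
C = [[1, 2, 0, 0],
     [0, 1, 2, 0],
     [0, 0, 1, 2]]
x = [1, 0, 2, 1]
y = matvec(C, x)              # forward convolution: length 4 -> 3
up = matvec(transpose(C), y)  # transposed product: length 3 -> 4 (upsampling)
print(y, up)
```

The transposed product restores the original length, which is the sense in which the deconvolution layer "restores the feature map before the convolution operation" (the values themselves are not recovered exactly).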
- the unpooling layers included in the decoder 312 are individually associated one-to-one with the pooling layers included in the encoder 311, and the associated pairs have substantially the same size.
- the unpooling layer again enlarges (upsamples) the size of the feature map downsampled in the corresponding pooling layer of the encoder 311.
- the example of FIG. 4 shows that, by sequentially repeating the operations in the deconvolution layers and the unpooling layers, the decoder 312 sequentially upsamples to feature maps of 1×1, 7×7, 14×14, ..., 224×224.
- the output of the decoder 312 (the feature map of 224 ⁇ 224 in the example of FIG. 4) is input to the softmax layer 313.
- the softmax layer 313 outputs, for each position (pixel), the probability of the label identifying the portion at that position by applying the softmax function to the input values from the deconvolution layer connected on the input side.
- in the present embodiment, a label for identifying microvessels may be set, and whether or not each pixel belongs to a microvessel may be identified on a pixel-by-pixel basis. A pixel whose label probability output from the softmax layer 313 is equal to or greater than a threshold value (for example, 70% or more) is recognized as a microvessel portion.
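The per-pixel softmax followed by the 70% threshold can be sketched as below. The two-class scores (background vs. microvessel) for the three example pixels are invented for illustration.

```python
import math

# Per-pixel softmax over two classes ("background", "microvessel"),
# followed by thresholding at 70% as described in the text.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# raw (background, microvessel) scores for three example pixels
pixel_scores = [(0.2, 2.5), (1.0, 0.5), (0.0, 1.0)]
THRESHOLD = 0.70
mask = []
for bg, vessel in pixel_scores:
    p_vessel = softmax([bg, vessel])[1]   # probability of the vessel label
    mask.append(p_vessel >= THRESHOLD)    # True = recognized as microvessel
print(mask)
```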
- in the present embodiment, an image of 224 pixels × 224 pixels is used as the input image for the first learning model 310; however, the size of the input image is not limited to the above, and can be set appropriately according to the processing capability of the surgery support device 200, the size of the surgical field image obtained from the laparoscope 11, and the like.
- the input image to the first learning model 310 does not have to be the entire surgical field image obtained from the laparoscope 11, and may be a partial image generated by cutting out a region of interest of the surgical field image.
- since the area of interest including the treatment target is often located near the center of the surgical field image, for example, a partial image obtained by cutting out the vicinity of the center of the surgical field image in a rectangular shape, so as to be about half the original size, may be used.
- by appropriately setting the size of the input image and the region of interest, the recognition accuracy can be improved while increasing the processing speed.
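Cutting out a centered rectangular partial image of about half the original size, as described, might look like this (a minimal sketch; the 8×8 toy image stands in for the surgical field image):

```python
# Cut out a centered rectangular region of interest of about half the
# original size, as described for generating the partial input image.
def center_crop(image, frac=0.5):
    h, w = len(image), len(image[0])
    ch, cw = int(h * frac), int(w * frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    return [row[left:left + cw] for row in image[top:top + ch]]

# toy 8x8 "image" whose pixel value encodes its (row, column) position
image = [[r * 10 + c for c in range(8)] for r in range(8)]
roi = center_crop(image)   # 8x8 -> centered 4x4 region
print(roi)
```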
- FIG. 5 is a schematic diagram showing the recognition result by the first learning model 310.
- the microvessel portions recognized using the first learning model 310 are shown by thick solid lines (or regions painted black), and the other organs, membranes, and surgical instruments are shown by broken lines.
- the control unit 201 of the surgery support device 200 generates a recognition image of the microvessels in order to display the recognized microvessel portions in a discriminable manner.
- the recognition image is an image that has the same size as the surgical field image and assigns a specific color to the pixels recognized as microvessels.
- the color assigned to the microvessels is arbitrarily set.
- the surgical support device 200 can display the microvessel portion as a structure having a specific color on the surgical field image by superimposing the recognition image thus generated on the surgical field image and displaying it.
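Superimposing the recognition image on the surgical field image — an assigned opaque color for microvessel pixels, the background showing through everywhere else — can be sketched as a per-pixel composite. The color value and the tiny 2×2 "images" are illustrative.

```python
# Superimpose a recognition image on the surgical field image: pixels
# recognized as microvessels get the assigned color (opaque); all other
# pixels stay transparent so the background surgical field shows through.
VESSEL_COLOR = (0, 0, 255)  # illustrative assigned color (RGB)

def overlay(field, mask, color=VESSEL_COLOR):
    out = []
    for i, row in enumerate(field):
        out.append([color if mask[i][j] else px for j, px in enumerate(row)])
    return out

field = [[(90, 80, 70), (90, 80, 70)],
         [(90, 80, 70), (90, 80, 70)]]   # toy surgical field image
mask = [[True, False],
        [False, True]]                   # per-pixel recognition result
composited = overlay(field, mask)
print(composited)
```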
- FIG. 6 is a schematic diagram showing a configuration example of the second learning model 320.
- the second learning model 320 includes an encoder 321, a decoder 322, and a softmax layer 323, and is configured to output an image showing the recognition result of the attention blood vessel portions included in the surgical field image in response to the input of the surgical field image. Since the configurations of the encoder 321, the decoder 322, and the softmax layer 323 included in the second learning model 320 are the same as those of the first learning model 310, detailed description thereof will be omitted.
- FIG. 7 is a schematic diagram showing the recognition result by the second learning model 320.
- the attention blood vessel portions recognized using the second learning model 320 are shown by thick solid lines (or regions painted black), and the other organs, membranes, and surgical instruments are shown by broken lines.
- the control unit 201 of the surgery support device 200 generates a recognition image of the attention blood vessels in order to display the recognized attention blood vessel portions in a discriminable manner.
- the recognition image is an image that has the same size as the surgical field image and assigns a specific color to the pixels recognized as attention blood vessels. The color assigned to attention blood vessels differs from the color assigned to microvessels, and is preferably a color distinguishable from the surrounding tissues.
- the color assigned to attention blood vessels may be a cool color such as blue or light blue, or a green-based color such as green or yellowish green.
- information indicating transparency is added to each pixel constituting the recognition image; an opaque value is set for the pixels recognized as attention blood vessels, and a transparent value is set for the other pixels.
- Annotation is performed on the captured surgical field image as a preparatory step for generating the first learning model 310 and the second learning model 320.
- in the preparatory stage for generating the first learning model 310, a worker (an expert such as a doctor) displays a surgical field image recorded in the recording device 140 on the display device 130, and performs annotation by designating, on a pixel-by-pixel basis, the portions corresponding to microvessels using a mouse or stylus pen provided as the operation unit 203. A set of the large number of surgical field images used for annotation and data indicating the positions of the pixels corresponding to the microvessels designated in each surgical field image (first correct answer data) is stored in the storage unit 202 of the surgery support device 200 as training data for generating the first learning model 310.
- the training data may include sets of surgical field images generated by applying perspective transformation, mirroring, or the like, together with correct answer data for those images. Further, as learning progresses, sets of a surgical field image and the recognition result (correct answer data) of the first learning model 310 obtained by inputting that image may be included in the training data.
- in the preparatory stage for generating the second learning model 320, the worker performs annotation by designating, on a pixel-by-pixel basis, the portions corresponding to microvessels existing in the operator's central visual field (or microvessels not existing in the operator's central visual field), or microvessels in a tensioned state.
- the central visual field is, for example, a rectangular or circular region set in the center of the surgical field image, and is set to have a size of about 1/4 to 1/3 of the surgical field image.
- a set of the large number of surgical field images used for annotation and data indicating the positions of the pixels corresponding to the attention blood vessels designated in each surgical field image (second correct answer data) is stored in the storage unit 202 of the surgery support device 200 as training data for generating the second learning model 320.
- the training data may include sets of surgical field images generated by applying perspective transformation, mirroring, or the like, together with correct answer data for those images. Further, as learning progresses, sets of a surgical field image and the recognition result (correct answer data) of the second learning model 320 obtained by inputting that image may be included in the training data.
- the surgery support device 200 generates the first learning model 310 and the second learning model 320 using the above-mentioned training data.
- FIG. 8 is a flowchart illustrating the generation procedure of the first learning model 310.
- the control unit 201 of the surgery support device 200 reads the learning processing program PG3 from the storage unit 202 and executes the following procedure to generate the first learning model 310.
- the definition information describing the first learning model 310 is given an initial value.
- the control unit 201 accesses the storage unit 202 and selects a set of training data from the training data prepared in advance for generating the first learning model 310 (step S101).
- the control unit 201 inputs the surgical field image included in the selected training data into the first learning model 310 (step S102), and executes the computation by the first learning model 310 (step S103). That is, the control unit 201 executes the computation by the encoder 311, which generates a feature map from the input surgical field image and sequentially downsamples the generated feature map; the computation by the decoder 312, which sequentially upsamples the feature map input from the encoder 311; and the computation by the softmax layer 313, which identifies each pixel of the feature map finally obtained from the decoder 312.
- the control unit 201 acquires the computation result from the first learning model 310 and evaluates the acquired result (step S104). For example, the control unit 201 may evaluate the computation result by calculating the degree of similarity between the image data of the microvessels obtained as the computation result and the correct answer data included in the training data.
- the degree of similarity is calculated by, for example, the Jaccard index.
- the Jaccard index is given by A∩B / A∪B × 100 (%), where A is the microvessel portion extracted by the first learning model 310 and B is the microvessel portion included in the correct answer data.
- instead of the Jaccard index, the Dice coefficient or the Simpson coefficient may be calculated, or the similarity may be calculated using other existing methods.
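Treating the predicted and correct-answer microvessel pixels as sets, the Jaccard index A∩B/A∪B×100(%) and the Dice coefficient can be computed as below (the pixel coordinates are illustrative):

```python
# Jaccard index |A∩B|/|A∪B| and Dice coefficient 2|A∩B|/(|A|+|B|) between
# the predicted microvessel pixel set A and the correct-answer pixel set B.
def jaccard(a, b):
    return len(a & b) / len(a | b) * 100  # percent

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b)) * 100  # percent

A = {(0, 1), (1, 1), (2, 1), (3, 1)}  # pixels extracted by the model
B = {(1, 1), (2, 1), (3, 1), (3, 2)}  # pixels in the correct answer data
print(jaccard(A, B), dice(A, B))
```

Learning would be judged complete once the similarity reaches the preset threshold, as in step S105.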
- the control unit 201 determines whether or not the learning is completed based on the evaluation of the calculation result (step S105). The control unit 201 can determine that the learning is completed when the similarity equal to or higher than the preset threshold value is obtained.
- when it is determined that the learning is not completed (S105: NO), the control unit 201 sequentially updates the weighting coefficients and biases in each layer of the first learning model 310 from the output side toward the input side using the error backpropagation method (step S106). After updating the weighting coefficients and biases of each layer, the control unit 201 returns the process to step S101 and executes the processes from step S101 to step S105 again.
- when it is determined in step S105 that the learning is completed (S105: YES), the trained first learning model 310 has been obtained, so the control unit 201 ends the process according to this flowchart.
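The loop of steps S101–S106 (select training data, run the forward computation, evaluate, then either finish or update parameters and repeat) can be sketched in miniature with a one-weight model standing in for the neural network; the learning rate, threshold, and data below are illustrative, not from the embodiment.

```python
# Miniature version of the S101-S106 loop: pick training data, run the
# forward pass, evaluate the result, and either stop (score above the
# threshold) or update the parameter by gradient descent and repeat.
def train(samples, w=0.0, lr=0.1, threshold=0.99, max_steps=1000):
    for step in range(max_steps):
        total_err = 0.0
        for x, target in samples:              # S101/S102: select and input data
            pred = w * x                       # S103: forward computation
            total_err += (pred - target) ** 2
            w -= lr * 2 * (pred - target) * x  # S106: backprop-style update
        score = 1.0 / (1.0 + total_err)        # S104: evaluate (higher is better)
        if score >= threshold:                 # S105: learning finished?
            return w, step
    return w, max_steps

samples = [(1.0, 2.0), (2.0, 4.0)]             # toy target relation: y = 2x
w, steps = train(samples)
print(round(w, 2), steps)
```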
- although the flowchart of FIG. 8 describes the procedure for generating the first learning model 310, the same applies to the procedure for generating the second learning model 320. That is, the surgery support device 200 may generate the second learning model 320 by repeatedly executing the computation by the second learning model 320 and the evaluation of the computation result, using the training data prepared for generating the second learning model 320.
- the surgery support device 200 is configured to generate the learning models 310 and 320, but the learning models 310 and 320 may be generated using an external computer such as a server device.
- the surgery support device 200 may acquire the learning models 310 and 320 generated by an external computer by means such as communication, and store the acquired learning models 310 and 320 in the storage unit 202.
- the surgery support device 200 provides surgery support in the operation phase after the learning models 310 and 320 are generated.
- FIG. 9 is a flowchart illustrating a procedure for executing surgical support.
- the control unit 201 of the surgery support device 200 reads and executes the recognition processing program PG1 and the display processing program PG2 from the storage unit 202, thereby executing the following procedure.
- the surgical field image obtained by imaging the surgical field with the imaging device 11B of the laparoscope 11 is output to the CCU 110 as needed via the universal cord 11D.
- the control unit 201 of the surgery support device 200 acquires the surgical field image output from the CCU 110 at the input unit 204 (step S121).
- the control unit 201 executes the processes of steps S122 to S127 each time the surgical field image is acquired.
- the control unit 201 inputs the acquired surgical field image to the first learning model 310, executes the computation by the first learning model 310 (step S122), and recognizes the microvessel portions included in the surgical field image (step S123). That is, the control unit 201 executes the computation by the encoder 311, which generates a feature map from the input surgical field image and sequentially downsamples it; the computation by the decoder 312, which sequentially upsamples the feature map input from the encoder 311; and the computation by the softmax layer 313, which identifies each pixel of the feature map finally obtained from the decoder 312. Further, the control unit 201 recognizes a pixel whose label probability output from the softmax layer 313 is equal to or greater than a threshold value (for example, 70% or more) as a microvessel portion.
- the control unit 201 generates a recognition image of the microvessels in order to display the microvessel portions recognized using the first learning model 310 in a discriminable manner (step S124). As described above, the control unit 201 assigns a specific color to the pixels recognized as microvessels, and sets transparency for the pixels other than the microvessels so that the background shows through.
- the control unit 201 inputs the acquired surgical field image to the second learning model 320, executes the computation by the second learning model 320 (step S125), and recognizes the attention blood vessel portions included in the surgical field image (step S126).
- when the second learning model 320 has been generated from annotations that designate the microvessels in the operator's central visual field, the microvessels existing in the operator's central visual field are recognized as attention blood vessels in step S126. Conversely, when the annotation has been performed so as to recognize the microvessels not in the operator's central visual field, the microvessels not in the operator's central visual field are recognized as attention blood vessels in step S126.
- when the annotation has been performed so as to recognize microvessels in a tensioned state, in step S126 a microvessel is recognized as an attention blood vessel at the stage when it shifts from an untensioned state to a tensioned state.
- the control unit 201 generates a recognition image of the attention blood vessels in order to display the attention blood vessel portions recognized using the second learning model 320 in a discriminable manner (step S127).
- the control unit 201 assigns a color different from that of the other microvessels, such as blue or green, to the pixels recognized as attention blood vessels, and sets transparency for the pixels other than the attention blood vessels so that the background shows through.
- control unit 201 determines whether or not a microblood vessel display instruction has been given (step S128).
- the control unit 201 may determine whether or not the display instruction has been given by determining whether or not the operator's instruction has been received through the operation unit 203.
- when the microvessel display instruction is given (S128: YES), the control unit 201 outputs the recognition image of the microvessels generated at that time from the output unit 205 to the display device 130, and the recognition image of the microvessels is superimposed and displayed on the surgical field image on the display device 130 (step S129).
- when the recognition image of the attention blood vessels was superimposed and displayed in the immediately preceding frame, the recognition image of the microvessels may be superimposed and displayed instead of the recognition image of the attention blood vessels. As a result, the microvessel portions recognized using the learning model 310 are displayed on the surgical field image as structures shown in a specific color.
- FIG. 10 is a schematic diagram showing a display example of microblood vessels.
- the microvessel portion is indicated by a thick solid line or a region painted in black.
- the portion corresponding to the microvessel is painted with a predetermined color in pixel units, so that the operator can recognize the microvessel portion by checking the display screen of the display device 130.
- the control unit 201 determines whether or not the display instruction of the attention blood vessel is given (step S130).
- the control unit 201 may determine whether or not the display instruction has been given by determining whether or not the operator's instruction has been received through the operation unit 203.
- when the attention blood vessel display instruction is given (S130: YES), the control unit 201 outputs the recognition image of the attention blood vessels generated at that time from the output unit 205 to the display device 130, and the recognition image of the attention blood vessels is superimposed and displayed on the surgical field image on the display device 130 (step S131).
- when the recognition image of the microvessels was superimposed and displayed in the immediately preceding frame, the recognition image of the attention blood vessels may be superimposed and displayed instead of the recognition image of the microvessels.
- as a result, the attention blood vessel portions recognized using the learning model 320 are displayed on the surgical field image as structures shown in a specific color such as blue or green.
- FIG. 11 is a schematic diagram showing a display example of a caution blood vessel.
- the attention blood vessel portion is indicated by a thick solid line or a region painted in black.
- since the portions corresponding to the attention blood vessels are painted, on a pixel-by-pixel basis, in a color that does not exist inside the human body, such as blue or green, the operator can clearly identify the attention blood vessels by looking at the display screen of the display device 130.
- the surgeon can suppress the occurrence of bleeding by coagulating and dissecting using, for example, an energy treatment tool 12.
- the control unit 201 determines whether or not to end the display of the surgical field image (step S132).
- the control unit 201 determines that the display of the surgical field image is to be ended, for example, when an end instruction is received through the operation unit 203.
- when it is determined that the display is not to be ended (S132: NO), the control unit 201 returns the process to step S128.
- when it is determined that the display is to be ended (S132: YES), the control unit 201 ends the process according to this flowchart.
- while one of the recognition images is being displayed, the control unit 201 may switch to the other recognition image and display it when a display switching instruction is given.
- in the present embodiment, the pixels corresponding to the microvessels and the attention blood vessels are colored and displayed in a color that does not exist inside the human body, such as blue or green; however, the pixels existing around those pixels may also be colored and displayed in the same color or in a different color.
- in this way, the microvessel portion and the attention blood vessel portion can be highlighted (thickened), and the visibility can be improved.
- only one of the microvessel portion and the caution blood vessel portion may be highlighted, or both portions may be highlighted.
- the display color (blue or green) set for the microvessel portion or the attention blood vessel portion may be averaged with the display color of the background surgical field image. For example, when the display color set for the blood vessel portion is (0, 0, B1) and the display color of the background surgical field image is (R2, G2, B2), the control unit 201 may color and display the blood vessel portion with the color (R2/2, G2/2, (B1+B2)/2).
- alternatively, weighting coefficients W1 and W2 may be introduced, and the recognized blood vessel portion may be colored and displayed in the color (W2×R2, W2×G2, W1×B1+W2×B2).
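The two blending schemes — simple averaging with the background color, and the weighted variant with coefficients W1 and W2 — can be sketched as below, assuming the vessel display color is (0, 0, B1) as in the averaged color components; with W1 = W2 = 0.5 the weighted form reduces to the simple average. The concrete color values are illustrative.

```python
# Blend the display color set for a blood-vessel pixel with the background
# surgical-field color: simple averaging, or a weighted variant with
# coefficients W1 (vessel) and W2 (background).
def average_blend(vessel_b1, background):
    r2, g2, b2 = background
    return (r2 / 2, g2 / 2, (vessel_b1 + b2) / 2)

def weighted_blend(vessel_b1, background, w1, w2):
    r2, g2, b2 = background
    return (w2 * r2, w2 * g2, w1 * vessel_b1 + w2 * b2)

bg = (120, 80, 40)                        # illustrative background color
print(average_blend(255, bg))             # averaged display color
print(weighted_blend(255, bg, 0.75, 0.25))  # weighted display color
```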
- at least one of the microvessel portion and the attention blood vessel portion may be displayed blinking. That is, the control unit 201 may periodically switch between display and non-display of the recognized blood vessel portion by repeatedly and alternately executing display of the recognized blood vessel portion for a first set time (for example, 2 seconds) and non-display for a second set time (for example, 2 seconds).
- the display time and non-display time of the blood vessel portion may be appropriately set.
- the display / non-display of the blood vessel portion may be switched in synchronization with the biological information such as the patient's heartbeat and pulse.
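The periodic display/non-display switching (first set time shown, second set time hidden) can be expressed as a function of elapsed time; the 2-second on/off times follow the example in the text.

```python
# Decide whether the recognized blood-vessel overlay is visible at a given
# elapsed time: shown for on_s seconds, hidden for off_s seconds, repeating.
def overlay_visible(elapsed_s, on_s=2.0, off_s=2.0):
    return (elapsed_s % (on_s + off_s)) < on_s

# visibility sampled at a few elapsed times within the 4-second cycle
print([overlay_visible(t) for t in (0.5, 1.9, 2.5, 3.9, 4.1)])
```

Synchronizing with the patient's heartbeat or pulse, as also suggested, would amount to driving the same toggle from biological timing instead of fixed set times.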
- in the present embodiment, the display instruction or the switching instruction is given through the operation unit 203 of the surgery support device 200; however, the display instruction or the switching instruction may be given through the operation unit 11C of the laparoscope 11, or through a foot switch, a voice input device, or the like (not shown).
- when the surgery support device 200 recognizes an attention blood vessel by the second learning model 320, a predetermined area including the attention blood vessel may be enlarged and displayed.
- the enlarged display may be performed on the surgical field image or may be performed on a separate screen.
- in the present embodiment, the display device 130 is configured to superimpose and display the microvessels and the attention blood vessels on the surgical field image; however, the detection of the microvessels and the attention blood vessels may be notified to the operator by sound or voice.
- when an attention blood vessel is recognized by the second learning model 320, the control unit 201 may be configured to generate a control signal for controlling a medical device, such as the energy treatment tool 12 or a surgical robot (not shown), and to output the generated control signal to the medical device. For example, the control unit 201 may supply an electric current to the energy treatment tool 12 and output a control signal instructing coagulation and cutting so that the attention blood vessel can be cut while being coagulated.
- as described above, in the present embodiment, the structures of the microvessels and the attention blood vessels are recognized using the learning models 310 and 320, and the recognized microvessel portions and attention blood vessel portions can be displayed in a discriminable manner on a pixel-by-pixel basis, so that visual support in laparoscopic surgery can be provided.
- the images generated by the surgery support device 200 may be used not only for surgical support but also for educational support of residents and the like, and for evaluation of laparoscopic surgery. For example, by comparing the images recorded on the recording device 140 during surgery with the images generated by the surgery support device 200, it can be determined whether the traction and peeling operations in the laparoscopic surgery were appropriate, and the surgery can thus be evaluated.
- FIG. 12 is an explanatory diagram illustrating a training data generation method for the second learning model 320.
- in the preparatory stage for generating the second learning model 320, a worker annotates surgical field images by designating the portions corresponding to attention blood vessels in pixel units.
- specifically, the recognition result of the microvessels by the first learning model 310 is displayed, and the worker performs the annotation by selecting and excluding recognized microvessels that do not correspond to attention blood vessels, leaving only the attention blood vessels.
- the control unit 201 of the surgery support device 200 refers to the recognition result of the first learning model 310 and labels adjacent microvessel pixels, thereby recognizing a series of pixels corresponding to a microvessel as one region.
- the control unit 201 excludes blood vessels other than attention blood vessels by accepting a selection operation (a click or tap via the operation unit 203) on recognized microvessel regions that do not correspond to attention blood vessels.
- the control unit 201 designates the pixels of the microvessel regions left after the exclusion as the pixels corresponding to the attention blood vessels.
- the set of data indicating the positions of the pixels corresponding to the attention blood vessels designated in this way (second correct answer data) and the original surgical field image is used as training data for generating the second learning model 320, and is stored in the storage unit 202 of the surgery support device 200.
- the control unit 201 generates the second learning model 320 using the training data stored in the storage unit 202. Since the method of generating the second learning model 320 is the same as that of the first embodiment, the description thereof will be omitted.
- since the recognition result of the first learning model 310 can be diverted to generate the training data for the second learning model 320, the workload of the worker can be reduced.
- in the above, the attention blood vessels are specified by selecting the microvessels to be excluded; alternatively, the attention blood vessels may be specified by accepting a selection operation on those microvessels recognized by the first learning model 310 that do correspond to attention blood vessels.
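The region labeling and exclusion workflow described above can be illustrated by the following sketch. This is a minimal illustration only, not the disclosed implementation: the function names and the 4-connected flood fill are assumptions.

```python
import numpy as np

def label_regions(mask):
    """Group adjacent microvessel pixels (4-connectivity) into numbered regions."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue                      # already part of a region
        count += 1
        stack = [(sy, sx)]
        while stack:
            y, x = stack.pop()
            if labels[y, x] or not mask[y, x]:
                continue
            labels[y, x] = count
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]:
                    stack.append((ny, nx))
    return labels, count

def designate_attention(labels, excluded_clicks):
    """Exclude the clicked regions; the remaining labeled pixels become the
    second correct answer data (attention-vessel pixels)."""
    excluded = [labels[yx] for yx in excluded_clicks]
    return (labels > 0) & ~np.isin(labels, excluded)
```

A worker's click on a region to exclude would be passed as a pixel coordinate in `excluded_clicks`; the returned boolean mask plays the role of the second correct answer data.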
- FIG. 13 is an explanatory diagram illustrating the configuration of the softmax layer 333 of the learning model 330 in the third embodiment.
- the softmax layer 333 outputs a probability for each label set for each pixel.
- in this embodiment, a label for identifying microvessels, a label for identifying attention blood vessels, and a label for identifying everything else are set.
- the control unit 201 of the surgery support device 200 recognizes a pixel as a microvessel if the probability of the microvessel label is equal to or greater than a threshold value, and recognizes a pixel as an attention blood vessel if the probability of the attention blood vessel label is equal to or greater than the threshold value. Further, the control unit 201 recognizes that a pixel is neither a microvessel nor an attention blood vessel if the probability of the label identifying everything else is equal to or greater than the threshold value.
- the learning model 330 for obtaining such a recognition result is generated by learning using, as training data, sets each including a surgical field image and correct answer data indicating the positions (pixels) of the microvessel portions and attention blood vessel portions included in that image. Since the method of generating the learning model 330 is the same as in the first embodiment, its description is omitted.
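The per-pixel, three-label decision rule described above can be sketched as follows; the channel order and the threshold value are assumptions for illustration, not part of the disclosure.

```python
import numpy as np

# Assumed channel order of the softmax output, shape (3, H, W):
# channel 0 = microvessel, 1 = attention blood vessel, 2 = everything else.
MICRO, ATTENTION, OTHER = 0, 1, 2

def classify_pixels(probs, threshold=0.5):
    """Return a per-pixel class map from softmax probabilities."""
    cls = np.full(probs.shape[1:], OTHER, dtype=int)
    cls[probs[MICRO] >= threshold] = MICRO
    cls[probs[ATTENTION] >= threshold] = ATTENTION  # attention label takes precedence
    return cls
```

The resulting class map could then drive the per-pixel coloring described for the display examples.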
- FIG. 14 is a schematic diagram showing a display example in the third embodiment.
- the surgery support device 200 according to the third embodiment uses the learning model 330 to recognize the microvessel portions and attention blood vessel portions included in the surgical field image, and displays them on the display device 130 in a distinguishable manner.
- in the display example, the microvessel portions recognized using the learning model 330 are shown by thick solid lines or blackened regions, and the attention blood vessel portions are shown by hatching.
- in practice, the portions corresponding to attention blood vessels may be colored, pixel by pixel, with a color that does not exist inside the human body, such as a bluish or greenish color, and the portions corresponding to microvessels other than attention blood vessels may be colored with other colors.
- alternatively, the transparency may be varied between the attention blood vessels and the microvessels other than the attention blood vessels; in this case, the transparency may be set relatively low for the attention blood vessels and relatively high for the other microvessels.
- according to the third embodiment, since the microvessel portions and attention blood vessel portions recognized by the learning model 330 are displayed in a distinguishable manner, useful information can be accurately presented to the operator when performing a traction operation, a peeling operation, or the like.
- the softmax layer 333 of the learning model 330 outputs the probability for the label set corresponding to each pixel. This probability represents the certainty of the recognition result.
- the control unit 201 of the surgical support device 200 changes the display mode of the microvessel portion and the attention blood vessel portion depending on the certainty of the recognition result.
- FIG. 15 is a schematic diagram showing a display example in the fourth embodiment.
- FIG. 15 shows an enlarged area including the attention blood vessel.
- in the display example, the attention blood vessel portion is displayed differently for certainties of 70% to 80%, 80% to 90%, 90% to 95%, and 95% to 100%, with the density increased at each step.
- that is, the display mode is changed so that the higher the certainty, the higher the density.
- in the above, the display mode of the attention blood vessels differs depending on the certainty; similarly, the display mode of the microvessels may differ according to the certainty.
- in the above, the density is varied according to the certainty, but the color or the transparency may be varied according to the certainty instead.
- for example, the higher the certainty, the closer the color may be to one that does not exist inside the human body (bluish or greenish), and the lower the certainty, the closer the color may be to one that does exist inside the human body (reddish).
- the display mode may be changed so that the higher the certainty, the lower the transparency.
- in the above, the transparency is changed in four stages according to the certainty, but the transparency may be set more finely, and gradation display may be performed according to the certainty. Instead of changing the transparency, the color may be changed.
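The four-stage mapping from certainty to display density or opacity described above can be sketched as follows; the specific opacity values are illustrative assumptions, only the stage boundaries come from the text.

```python
def certainty_to_opacity(p):
    """Overlay opacity for a recognized pixel, in the four certainty stages
    named in the text (the opacity values themselves are illustrative)."""
    if p >= 0.95:
        return 1.00
    if p >= 0.90:
        return 0.75
    if p >= 0.80:
        return 0.50
    if p >= 0.70:
        return 0.25
    return 0.0  # below the recognition threshold: not overlaid
```

A finer gradation display would replace the staircase with a continuous function of `p`.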
- FIG. 16 is an explanatory diagram illustrating the display method according to the fifth embodiment.
- the surgery support device 200 according to the fifth embodiment recognizes the microvessel portions included in the surgical field image using the learning models 310 and 320 (or the learning model 330).
- however, a microvessel portion hidden behind an object such as a surgical tool cannot be recognized from the surgical field image; therefore, when the recognition image of the microvessel portions is superimposed on the surgical field image, the hidden portion cannot be displayed in a distinguishable manner.
- the surgery support device 200 therefore holds, in the storage unit 202, a recognition image of the microvessel portions recognized while they were not hidden behind the object, and when a microvessel portion becomes hidden behind the object, reads out the recognition image held in the storage unit 202 and superimposes it on the surgical field image.
- the surgical field image at time T1 shows the microvessels not hidden behind the surgical instrument, and the surgical field image at time T2 shows a part of the microvessels hidden behind the surgical instrument. It is assumed that the laparoscope 11 is not moved between time T1 and time T2, so there is no change in the imaged region.
- the recognition image of the microvessels generated from the surgical field image at time T1 is stored in the storage unit 202.
- at time T2, the surgery support device 200 reads out the recognition image of the microvessels generated from the surgical field image at time T1 from the storage unit 202, and superimposes it on the surgical field image at time T2.
- the portion shown by the broken line is a microvessel portion that is hidden by the surgical instrument and cannot be visually recognized, but the surgery support device 200 can display this portion in a distinguishable manner by diverting the recognition image recognized at time T1.
- since the presence of microvessels that are hidden behind an object such as a surgical tool and cannot be visually recognized can be notified to the operator, safety during surgery can be enhanced.
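The caching and reuse of the unoccluded recognition image described above can be sketched as below. This is an assumption-laden illustration: the class and function names are hypothetical, and the occlusion flag is assumed to be determined elsewhere (e.g. by surgical tool detection).

```python
import numpy as np

class VesselMaskCache:
    """Keep the most recent unoccluded recognition mask (time T1) and reuse
    it while the laparoscope is stationary and the vessels are occluded (T2)."""
    def __init__(self):
        self._saved = None

    def update(self, mask, occluded):
        if not occluded:
            self._saved = mask.copy()  # refresh while fully visible
        return self._saved if self._saved is not None else mask

def overlay(frame_rgb, mask, color=(0, 255, 0)):
    """Paint recognized vessel pixels onto a copy of the surgical field image."""
    out = frame_rgb.copy()
    out[mask.astype(bool)] = color
    return out
```

The hidden portion could be drawn in a distinct style (e.g. a broken line, as in the display example) rather than the same color as the visible portion.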
- FIG. 17 is a flowchart illustrating a procedure of processing executed by the surgery support device 200 according to the sixth embodiment.
- as in the first embodiment, the control unit 201 of the surgery support device 200 acquires a surgical field image (step S601), inputs the acquired surgical field image to the first learning model 310, and executes the calculation by the first learning model 310 (step S602).
- the control unit 201 predicts the running pattern of the blood vessel based on the calculation result of the first learning model 310 (step S603).
- in the first embodiment, a recognition image of the microvessel portions is generated by extracting pixels whose label probability output from the softmax layer 313 of the first learning model 310 is equal to or greater than a first threshold value (for example, 70% or more).
- in the sixth embodiment, the running pattern of the blood vessels is predicted by lowering this threshold value.
- that is, the running pattern of the blood vessels is predicted by extracting pixels whose label probability output from the softmax layer 313 of the first learning model 310 is less than the first threshold value (for example, less than 70%) and equal to or greater than a second threshold value (for example, 50% or more).
- the control unit 201 displays the blood vessel portions estimated from the predicted running pattern in a distinguishable manner (step S604).
- FIG. 18 is a schematic diagram showing a display example in the sixth embodiment.
- in the display example, the recognized microvessel portions are shown by thick solid lines (or regions painted black), and the blood vessel portions estimated from the predicted running pattern are shown by hatching.
- the two may also be distinguished by other display modes such as color or transparency.
- according to the sixth embodiment, the blood vessel portions estimated from the running pattern of the blood vessels can also be displayed, so visual support in laparoscopic surgery can be provided.
- in the above, the running pattern of the blood vessels is predicted by extracting pixels whose label probability output from the softmax layer 313 is less than the first threshold value (for example, less than 70%) and equal to or greater than the second threshold value (for example, 50% or more), but a learning model for predicting the running pattern of blood vessels may be prepared instead. That is, it suffices to prepare a learning model trained using, as training data, surgical field images obtained by imaging the surgical field and correct answer data showing the running patterns of the blood vessels in those images.
- the correct answer data may be generated by a specialist such as a doctor judging the running pattern of the blood vessels while checking the surgical field image and annotating it.
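The two-threshold rule of this embodiment (recognized vessels at or above the first threshold, predicted running pattern between the second and first thresholds) can be sketched as follows; the function name is a hypothetical illustration, the 70%/50% values come from the text.

```python
import numpy as np

def split_by_threshold(prob, first=0.70, second=0.50):
    """Split softmax label probabilities into recognized vessel pixels and
    pixels used to predict the running pattern."""
    recognized = prob >= first
    predicted = (prob >= second) & (prob < first)
    return recognized, predicted
```

The `predicted` mask corresponds to the hatched, lower-confidence band displayed alongside the solidly drawn recognized vessels.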
- FIG. 19 is an explanatory diagram illustrating the configuration of the softmax layer 343 of the learning model 340 according to the seventh embodiment.
- the softmax layer 343 outputs the probability for the label set corresponding to each pixel.
- in the seventh embodiment, a label for identifying blood vessels with blood flow, a label for identifying blood vessels without blood flow, and a label for identifying everything else are set. The control unit 201 of the surgery support device 200 recognizes a pixel as a blood vessel with blood flow if the probability of the label identifying blood vessels with blood flow is equal to or greater than a threshold value, and recognizes a pixel as a blood vessel without blood flow if the probability of the label identifying blood vessels without blood flow is equal to or greater than the threshold value. Further, the control unit 201 recognizes that a pixel is not a blood vessel if the probability of the label identifying everything else is equal to or greater than the threshold value.
- the learning model 340 for obtaining such a recognition result is generated by learning using, as training data, sets each including a surgical field image and correct answer data indicating the positions (pixels) of the blood vessel portions with blood flow and the blood vessel portions without blood flow included in that image.
- an ICG (Indocyanine Green) fluorescence image may be used as a surgical field image that includes blood vessel portions with blood flow. That is, a fluorescence image is generated by injecting a tracer such as ICG, which has an absorption wavelength in the near-infrared region, into an artery or vein and observing the fluorescence emitted under near-infrared illumination; this image may be used as correct answer data indicating the positions of the blood vessel portions with blood flow.
- since the color, shape, temperature, blood concentration, oxygen saturation, and the like of a blood vessel differ between blood vessels with and without blood flow, correct answer data may also be prepared by specifying the positions of the blood vessel portions with blood flow and the positions of those without. Since the method of generating the learning model 340 is the same as in the first embodiment, its description is omitted.
- in the seventh embodiment, the softmax layer 343 outputs the probability that blood flow is present, the probability that it is absent, and the probability of everything else, but it may instead be configured to output probabilities according to the blood flow volume or blood flow velocity.
- FIG. 20 is a schematic diagram showing a display example in the seventh embodiment.
- the surgery support device 200 according to the seventh embodiment uses the learning model 340 to recognize the blood vessel portions with blood flow and those without, and displays them on the display device 130 in a distinguishable manner.
- in the display example, the blood vessel portions with blood flow are shown by thick solid lines or blackened regions and the blood vessel portions without blood flow are shown by hatching; alternatively, the blood vessels with blood flow may be colored with a specific color and those without blood flow with another color.
- the transparency may also be varied between blood vessels with and without blood flow, or only one of the two may be displayed in a distinguishable manner.
- according to the seventh embodiment, the blood vessels with blood flow and those without are displayed in a distinguishable manner, so visual support in laparoscopic surgery can be provided.
- the laparoscope 11 in the eighth embodiment has a function of imaging the surgical field under normal light and a function of imaging the surgical field under special light. The laparoscopic surgery support system according to the eighth embodiment may therefore be provided with a separate light source device (not shown) for emitting the special light, or may be configured to switch between normal light and special light by switching between an optical filter for normal light and an optical filter for special light applied to the light emitted from the light source device 120.
- Normal light is, for example, light having a wavelength band of white light (380 nm to 650 nm).
- the illumination light described in the first embodiment or the like corresponds to normal light.
- the special light is illumination light different from the normal light, and corresponds to narrow band light, infrared light, excitation light, and the like. In this specification, the distinction between normal light and special light is for convenience only, and does not imply that special light is special as compared with normal light.
- in narrow band imaging (NBI), the observation target is irradiated with light in two narrowed wavelength bands (for example, 390 to 445 nm and 530 to 550 nm) that are easily absorbed by hemoglobin in blood, which makes it possible to highlight capillaries and the like in the surface layer of the mucosa.
- in infrared imaging (IRI), after an infrared indicator drug that easily absorbs infrared light is injected intravenously, the observation target is irradiated with two infrared lights (790 to 820 nm and 905 to 970 nm).
- in ICG infrared light observation, ICG is used as the infrared indicator agent.
- excitation light of 390 to 470 nm and light with a wavelength of 540 to 560 nm may also be used.
- the observation method using special light is not limited to the above, and may be HSI (Hyper Spectral Imaging), LSCI (Laser Speckle Contrast Imaging), FICE (Flexible Spectral Imaging Color Enhancement), or the like.
- hereinafter, a surgical field image obtained by irradiating the surgical field with normal light is also described as a normal light image, and a surgical field image obtained by irradiating the surgical field with special light is also described as a special light image.
- the surgery support device 200 according to the eighth embodiment includes a learning model 350 for special light images in addition to the first learning model 310 and the second learning model 320 described in the first embodiment.
- FIG. 21 is a schematic diagram showing a configuration example of a learning model 350 for a special optical image.
- the learning model 350 includes an encoder 351, a decoder 352, and a softmax layer 353, and is configured to output an image showing the recognition result of the blood vessel portions appearing in a special light image in response to the input of the special light image.
- such a learning model 350 is generated by using, as training data, a data set including images captured by irradiating the surgical field with special light (special light images) and data on the positions of blood vessels designated by a doctor or the like for those images (correct answer data), and performing training according to a predetermined learning algorithm.
- the surgery support device 200 provides surgery support in the operation phase after the learning model 350 for the special optical image is generated.
- FIG. 22 is a flowchart illustrating a procedure of processing executed by the surgery support device 200 according to the eighth embodiment.
- the control unit 201 of the surgery support device 200 acquires a normal light image (step S801), inputs the acquired normal light image to the first learning model 310, and executes the calculation by the first learning model 310 (step S802).
- based on the calculation result of the first learning model 310, the control unit 201 recognizes the microvessel portions included in the normal light image (step S803) and predicts the running pattern of blood vessels that are difficult to see in the normal light image (step S804).
- the method for recognizing microvessels is the same as in the first embodiment.
- the control unit 201 recognizes a pixel whose label probability output from the softmax layer 313 of the first learning model 310 is equal to or greater than a threshold value (for example, 70% or more) as a microvessel portion.
- the running pattern prediction method is the same as in the sixth embodiment.
- the control unit 201 predicts the running pattern of blood vessels that are difficult to visually recognize in the normal light image by extracting pixels whose label probability output from the softmax layer 313 of the first learning model 310 is less than the first threshold value (for example, less than 70%) and equal to or greater than the second threshold value (for example, 50% or more).
- the control unit 201 executes the following processing in parallel with the processing of steps S801 to S804.
- the control unit 201 acquires a special light image (step S805), inputs the acquired special light image to the learning model 350 for the special light image, and executes an operation by the learning model 350 (step S806).
- the control unit 201 recognizes the blood vessel portion appearing in the special optical image based on the calculation result of the learning model 350 (step S807).
- the control unit 201 can recognize a pixel whose label probability output from the softmax layer 353 of the learning model 350 is equal to or greater than a threshold value (for example, 70% or more) as a blood vessel portion.
- the control unit 201 determines whether or not the presence of a blood vessel that is difficult to see in the normal light image has been detected by the prediction in step S804 (step S808).
- when it is determined that the presence of a blood vessel that is difficult to see has not been detected (S808: NO), the control unit 201 outputs the normal light image from the output unit 205 to the display device 130 for display, and when a microvessel has been recognized in step S803, superimposes the recognition image of the microvessel portions on the normal light image (step S809).
- when it is determined that the presence of a blood vessel that is difficult to see has been detected (S808: YES), the control unit 201 outputs the normal light image from the output unit 205 to the display device 130 for display, and superimposes the recognition image of the blood vessel portions recognized from the special light image on the normal light image (step S810).
- in this case, the recognition image of the blood vessel portions recognized from the special light image is displayed, so, for example, the position of a blood vessel existing deep in an organ can be notified to the operator, and safety in laparoscopic surgery can be enhanced.
- in the above, the recognition image of the blood vessel portions recognized from the special light image is displayed automatically; alternatively, the blood vessel portions recognized from the special light image may be displayed in place of the microvessel portions recognized from the normal light image.
- in the above, the microvessel portions are recognized from the normal light image and the blood vessel portions from the special light image, but the second learning model 320 may be used so that the attention blood vessel portions are recognized from the normal light image while the blood vessel portions are recognized from the special light image.
- in the above, the recognition result from the normal light image and that from the special light image are switched and displayed on one display device 130, but the recognition result from the normal light image may be displayed on the display device 130 and the recognition result from the special light image on another display device (not shown).
- in the above, the control unit 201 performs both the recognition of the microvessel portions from the normal light image and the recognition of the blood vessel portions from the special light image, but hardware (such as a GPU) separate from the control unit 201 may be provided, and the recognition of the blood vessel portions in the special light image may be performed in the background on that hardware.
- FIG. 23 is an explanatory diagram illustrating an outline of the process executed by the surgical support device 200 according to the ninth embodiment.
- the control unit 201 of the surgery support device 200 acquires a normal light image obtained by irradiating the surgical field with normal light and a special light image obtained by irradiating the surgical field with special light.
- the normal optical image is, for example, a full HD (High-Definition) RGB image
- the special optical image is, for example, a full HD grayscale image.
- the control unit 201 generates a combined image by combining the acquired normal light image and the special light image.
- since the normal light image is an image having three channels of color information (RGB) and the special light image is an image having one channel of color information (grayscale), the control unit 201 generates the combined image as a single image having four channels of color information (3 RGB channels + 1 grayscale channel).
- the control unit 201 inputs the generated combined image into the learning model 360 for the combined image, and executes the calculation by the learning model 360.
- the learning model 360 includes an encoder, a decoder, and a softmax layer (not shown), and is configured to output an image showing the recognition result of the blood vessel portions appearing in the combined image in response to the input of the combined image.
- the learning model 360 is generated by executing learning according to a predetermined learning algorithm, using as training data a data set including combined images and data on the positions of blood vessels designated by a doctor or the like for those images (correct answer data).
- the control unit 201 superimposes the recognition image of the blood vessel portions obtained using the learning model 360 on the original surgical field image (normal light image).
- since the blood vessel portions are recognized using the combined image, the operator can be informed of the existence of blood vessels that are difficult to visually recognize in the normal light image, and safety in laparoscopic surgery can be enhanced.
- the number of special light images to be combined with the normal light image is not limited to one, and a plurality of special light images having different wavelength bands may be combined with the normal light image.
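The channel combination described for the ninth embodiment can be sketched as follows; the helper name and the reduced image size are illustrative assumptions (the text describes full HD images).

```python
import numpy as np

def combine_images(normal_rgb, special_gray):
    """Stack a 3-channel normal light image and a 1-channel special light
    image into the 4-channel combined input described for learning model 360."""
    if special_gray.ndim == 2:                       # H x W -> H x W x 1
        special_gray = special_gray[..., np.newaxis]
    return np.concatenate([normal_rgb, special_gray], axis=-1)
```

Combining several special light images of different wavelength bands would simply concatenate additional channels in the same way.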
- FIG. 24 is a flowchart illustrating the procedure for executing the surgical support in the tenth embodiment.
- the control unit 201 of the surgery support device 200 determines whether or not a surgical tool has approached an attention blood vessel (step S1001). For example, the control unit 201 calculates the separation distance between the attention blood vessel and the tip of the surgical tool on the surgical field image in time series, and when it determines that the separation distance has become shorter than a predetermined value, it may judge that the surgical tool has approached the attention blood vessel. When it is determined that the surgical tool has not approached the attention blood vessel (S1001: NO), the control unit 201 executes the processes from step S1003 described later.
- FIG. 25 is a schematic diagram showing an example of enlarged display.
- in the display example, the area including the attention blood vessel is enlarged and displayed, and character information indicating that the surgical tool has approached the attention blood vessel is displayed.
- the control unit 201 determines whether or not the surgical instrument has come into contact with the attention blood vessel (step S1003).
- the control unit 201 determines whether or not the surgical tool has come into contact with the attention blood vessel, for example, by calculating the separation distance between the attention blood vessel and the tip of the surgical tool on the surgical field image in time series. When the control unit 201 determines that the calculated separation distance has become zero, it may determine that the surgical tool has come into contact with the attention blood vessel. Alternatively, when a contact sensor is provided at the tip of the surgical tool, the control unit 201 may determine whether or not the surgical tool has contacted the attention blood vessel by acquiring the output signal from the contact sensor. If it is determined that they are not in contact (S1003: NO), the control unit 201 ends the process according to this flowchart.
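The image-plane distance calculation used for both the approach and contact determinations can be sketched as follows; the function names, the state labels, and the threshold value are illustrative assumptions.

```python
import math

def tip_vessel_distance(tip_xy, vessel_pixels):
    """Minimum image-plane distance between the surgical tool tip and
    the pixels of the attention blood vessel."""
    tx, ty = tip_xy
    return min(math.hypot(tx - x, ty - y) for x, y in vessel_pixels)

def proximity_state(distance, near=20.0):
    """'contact' at zero distance, 'near' under the threshold, else 'clear'."""
    if distance == 0:
        return "contact"
    return "near" if distance < near else "clear"
```

Evaluating this per frame yields the time-series distance described in the text; `near` plays the role of the predetermined value for the approach determination.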
- FIG. 26 is a schematic diagram showing an example of a warning display.
- in the display example, the contacting surgical tool is highlighted, and character information indicating that the surgical tool has touched the attention blood vessel is displayed.
- a warning by sound or vibration may be given together with the warning display or instead of the warning display.
- in the above, a warning is displayed when the surgical tool comes into contact with the attention blood vessel, but a warning may instead be issued when it is determined that there is bleeding due to damage to the attention blood vessel.
- for example, the control unit 201 counts the red pixels in a predetermined region including the attention blood vessel in time series, and when the number of red pixels increases by a certain amount or more, it can determine that there is bleeding.
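The red-pixel counting rule for bleeding detection described above can be sketched as follows; the specific RGB rule and the increase threshold are illustrative assumptions, only the counting-over-time idea comes from the text.

```python
import numpy as np

def count_red_pixels(frame_rgb, roi):
    """Count reddish pixels inside a region of interest (y0, y1, x0, x1)
    around the attention blood vessel."""
    y0, y1, x0, x1 = roi
    patch = frame_rgb[y0:y1, x0:x1]
    red = (patch[..., 0] > 150) & (patch[..., 1] < 80) & (patch[..., 2] < 80)
    return int(red.sum())

def bleeding_suspected(counts, increase=50):
    """Flag bleeding when the red-pixel count grows by `increase` or more
    between consecutive frames."""
    return len(counts) >= 2 and counts[-1] - counts[-2] >= increase
```

In operation, `count_red_pixels` would be applied to each incoming frame and the resulting time series passed to `bleeding_suspected` to trigger the warning.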
Abstract
Description
(Embodiment 1)
FIG. 1 is a schematic diagram illustrating the overall configuration of the laparoscopic surgery support system according to Embodiment 1. In laparoscopic surgery, instead of performing open surgery, a plurality of opening instruments called trocars 10 are attached to the patient's abdominal wall, and instruments such as a laparoscope 11, an energy treatment tool 12, and forceps 13 are inserted into the patient's body through the openings provided in the trocars 10. The operator performs treatment such as excising the affected area with the energy treatment tool 12 while viewing, in real time, the image of the inside of the patient's body (surgical field image) captured by the laparoscope 11. Surgical tools such as the laparoscope 11, the energy treatment tool 12, and the forceps 13 are held by the operator, a robot, or the like. The operator is a medical worker involved in the laparoscopic surgery, and includes the surgeon, assistants, nurses, doctors monitoring the surgery, and the like.
FIG. 3 is a schematic diagram showing an example of a surgical field image. The surgical field image in this embodiment is an image obtained by imaging the inside of the patient's abdominal cavity with the laparoscope 11. The surgical field image need not be the raw image output by the imaging device 11B of the laparoscope 11, and may be an image (frame image) processed by the CCU 110 or the like.
FIG. 4 is a schematic diagram showing a configuration example of the first learning model 310. The first learning model 310 is a learning model for performing image segmentation, and is constructed by a neural network having convolutional layers, such as SegNet. The first learning model 310 is not limited to SegNet, and may be constructed using any neural network capable of image segmentation, such as FCN (Fully Convolutional Network), U-Net (U-Shaped Network), or PSPNet (Pyramid Scene Parsing Network). The first learning model 310 may also be constructed using a neural network for object detection, such as YOLO (You Only Look Once) or SSD (Single Shot Multi-Box Detector), instead of a neural network for image segmentation.
Embodiment 2 describes a configuration in which the recognition results of the first learning model 310 are reused when generating training data for the second learning model 320.
The overall configuration of the laparoscopic surgery support system, the internal configuration of the surgery support device 200, and the like are the same as in Embodiment 1, so their description is omitted.
Embodiment 3 describes a configuration in which both microvessels and attention blood vessels are recognized with a single learning model.
The overall configuration of the laparoscopic surgery support system, the internal configuration of the surgery support device 200, and the like are the same as in Embodiment 1, so their description is omitted.
Embodiment 4 describes a configuration in which the display mode is changed according to the confidence of the recognition results for microvessels and attention blood vessels.
Embodiment 5 describes a configuration in which the estimated positions of microvessel portions that are hidden behind objects such as surgical tools and cannot be seen are displayed.
Embodiment 6 describes a configuration in which the running pattern of a blood vessel is predicted and the vessel portions estimated from the predicted running pattern are displayed distinguishably.
Embodiment 7 describes a configuration in which blood flow is recognized from the surgical field image and blood vessels are displayed in a display mode corresponding to the amount of blood flow.
Embodiment 8 describes a configuration in which vessel portions are recognized using a special-light image captured under special illumination light, and the image of the vessel portions recognized from the special-light image is displayed as needed.
Embodiment 9 describes a configuration in which vessel portions are recognized using a combined image of a normal-light image and a special-light image.
Embodiment 10 describes a configuration in which the operator is notified when a surgical tool approaches or touches an attention blood vessel.
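Several of the embodiments above display recognized vessel portions distinguishably on the surgical field image, with switchable or periodically blinking overlays. A minimal sketch of such an overlay, assuming a boolean recognition mask per vessel class; the colors, alpha, and blink period are illustrative, not values from the patent.

```python
import numpy as np

def overlay_mask(frame: np.ndarray, mask: np.ndarray,
                 color: tuple[int, int, int],
                 alpha: float = 0.5) -> np.ndarray:
    """Alpha-blend a solid-color vessel mask onto an RGB frame so the
    recognized region stands out from the background tissue."""
    out = frame.astype(float).copy()
    overlay = np.array(color, dtype=float)
    out[mask] = (1.0 - alpha) * out[mask] + alpha * overlay
    return out.astype(np.uint8)

def blink_visible(frame_index: int, period: int = 30) -> bool:
    """Periodic show/hide toggle for one of the overlays:
    visible during the first half of each period."""
    return (frame_index % period) < period // 2
```

Microvessels and attention blood vessels could then be drawn with different `color` arguments, and `blink_visible` gates whether an overlay is blended into the current frame at all.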
11 Laparoscope
12 Energy treatment tool
13 Forceps
110 Camera control unit (CCU)
120 Light source device
130 Display device
140 Recording device
200 Surgery support device
201 Control unit
202 Storage unit
203 Operation unit
204 Input unit
205 Output unit
206 Communication unit
310, 320, 330 Learning models
PG1 Recognition processing program
PG2 Display processing program
PG3 Learning processing program
Claims (25)
- A computer program for causing a computer to execute processes of:
acquiring a surgical field image obtained by imaging the surgical field of endoscopic surgery, and
recognizing, using a learning model trained to output information on blood vessels when a surgical field image is input, the blood vessels contained in the acquired surgical field image while distinguishing, among those vessels, the blood vessels for which attention should be called.
- The computer program according to claim 1, for causing the computer to further execute a process of:
displaying, distinguishably on the surgical field image, the vessel portions recognized from the surgical field image and the vessel portions for which attention should be called.
- The computer program according to claim 2, for causing the computer to further execute a process of:
displaying the two types of vessel portions in a switchable manner.
- The computer program according to claim 2, for causing the computer to further execute a process of:
displaying the two types of vessel portions in different display modes.
- The computer program according to any one of claims 2 to 4, for causing the computer to further execute a process of:
periodically switching between showing and hiding at least one of the recognized vessel portions.
- The computer program according to any one of claims 2 to 5, for causing the computer to further execute a process of:
applying a predetermined effect to the display of at least one of the recognized vessel portions.
- The computer program according to any one of claims 2 to 6, for causing the computer to further execute processes of:
calculating a confidence for the recognition result of the learning model, and
displaying at least one of the vessel portions in a display mode corresponding to the calculated confidence.
- The computer program according to claim 1, for causing the computer to further execute a process of:
displaying, with reference to the recognition result of the learning model, the estimated position of a vessel portion hidden behind another object.
- The computer program according to claim 1, for causing the computer to further execute processes of:
estimating the running pattern of a blood vessel using the learning model, and
displaying, based on the estimated running pattern, the estimated position of a vessel portion that does not appear in the surgical field image.
- The computer program according to any one of claims 1 to 9, wherein the learning model has been trained to output, as the recognition result for blood vessels for which attention should be called, information on blood vessels outside the operator's central visual field.
- The computer program according to any one of claims 1 to 10, wherein the learning model has been trained to output, as the recognition result for blood vessels for which attention should be called, information on blood vessels within the operator's central visual field.
- The computer program according to any one of claims 1 to 11, wherein the learning model has been trained to output information on blood vessels in a tense state, and the program causes the computer to execute a process of recognizing, based on the information output from the learning model, vessel portions in a tense state as blood vessels for which attention should be called.
- The computer program according to any one of claims 1 to 12, for causing the computer to further execute processes of:
recognizing the blood flow in the vessels contained in the surgical field image, using a blood-flow recognition learning model trained to output information on blood flow in response to input of the surgical field image, and
displaying, with reference to the blood-flow recognition result of that learning model, the vessels recognized with the vessel-recognition learning model in a display mode corresponding to the amount of blood flow.
- The computer program according to any one of claims 1 to 13, for causing the computer to further execute processes of:
acquiring a special-light image obtained by imaging the surgical field while irradiating it with illumination light different from the illumination light used for the surgical field image,
recognizing the vessel portions appearing in the special-light image, using a special-light-image learning model trained to output information on the vessels appearing in a special-light image when such an image is input, and
displaying the recognized vessel portions superimposed on the surgical field image.
- The computer program according to claim 14, for causing the computer to further execute a process of:
displaying, in a switchable manner, the vessel portions recognized from the surgical field image and the vessel portions recognized from the special-light image.
- The computer program according to any one of claims 1 to 13, for causing the computer to further execute processes of:
acquiring a special-light image obtained by imaging the surgical field while irradiating it with illumination light different from the illumination light used for the surgical field image,
generating a combined image of the surgical field image and the special-light image,
recognizing the vessel portions appearing in the combined image, using a combined-image learning model trained to output information on the vessels appearing in a combined image when such an image is input, and
displaying the recognized vessel portions superimposed on the surgical field image.
- The computer program according to any one of claims 1 to 16, for causing the computer to further execute processes of:
detecting bleeding based on the surgical field image, and
outputting warning information when bleeding is detected.
- The computer program according to any one of claims 1 to 17, for causing the computer to further execute processes of:
detecting the approach of a surgical tool based on the surgical field image, and
displaying distinguishably the blood vessels for which attention should be called when the approach of the surgical tool is detected.
- The computer program according to any one of claims 1 to 18, for causing the computer to further execute a process of:
displaying, in enlarged form, the vessel portions recognized as blood vessels for which attention should be called.
- The computer program according to any one of claims 1 to 19, for causing the computer to further execute a process of:
outputting control information for a medical device based on the recognized blood vessels.
- A method for generating a learning model, in which a computer:
acquires training data including a surgical field image obtained by imaging the surgical field of endoscopic surgery, first ground-truth data indicating the vessel portions contained in the surgical field image, and second ground-truth data indicating, among those vessel portions, the vessel portions for which attention should be called, and
generates, based on the acquired set of training data, a learning model that outputs information on blood vessels when a surgical field image is input.
- The method for generating a learning model according to claim 21, in which the computer separately generates:
a first learning model that, when a surgical field image is input, outputs information on the blood vessels contained in the surgical field image, and
a second learning model that, when a surgical field image is input, outputs information on the blood vessels for which attention should be called among the vessels contained in the surgical field image.
- A method for generating a learning model, in which a computer:
acquires training data including a surgical field image obtained by imaging the surgical field of endoscopic surgery and first ground-truth data indicating the vessel portions contained in the surgical field image,
generates, based on the acquired set of training data, a first learning model that outputs information on blood vessels when a surgical field image is input,
generates second ground-truth data by accepting designation of the vessel portions for which attention should be called, among the vessel portions of the surgical field image recognized with the first learning model, and
generates, based on a set of training data including the surgical field image and the second ground-truth data, a second learning model that outputs information on blood vessels for which attention should be called when a surgical field image is input.
- The method for generating a learning model according to any one of claims 21 to 23, wherein the vessel portions for which attention should be called, among the vessel portions contained in the surgical field image, are vessel portions in a tense state.
- A surgery support device comprising:
an acquisition unit that acquires a surgical field image obtained by imaging the surgical field of endoscopic surgery;
a recognition unit that, using a learning model trained to output information on blood vessels when a surgical field image is input, recognizes the blood vessels contained in the acquired surgical field image while distinguishing, among those vessels, the blood vessels for which attention should be called; and
an output unit that outputs support information on the endoscopic surgery based on the recognition result of the recognition unit.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022501024A JP7146318B1 (ja) | 2020-12-29 | 2021-12-27 | Computer program, learning model generation method, and surgery support device |
CN202180088036.8A CN116724334A (zh) | 2020-12-29 | 2021-12-27 | 计算机程序、学习模型的生成方法、以及手术辅助装置 |
US18/268,889 US20240049944A1 (en) | 2020-12-29 | 2021-12-27 | Recording Medium, Method for Generating Learning Model, and Surgery Support Device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020219806 | 2020-12-29 | ||
JP2020-219806 | 2020-12-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022145424A1 true WO2022145424A1 (ja) | 2022-07-07 |
Family
ID=82260776
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/048592 WO2022145424A1 (ja) | Computer program, learning model generation method, and surgery support device |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240049944A1 (ja) |
JP (1) | JP7146318B1 (ja) |
CN (1) | CN116724334A (ja) |
WO (1) | WO2022145424A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024053698A1 (ja) * | 2022-09-09 | 2024-03-14 | Keio University | Surgery support program, surgery support device, and surgery support method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013507182A (ja) * | 2009-10-07 | 2013-03-04 | Intuitive Surgical Operations, Inc. | Method and apparatus for displaying enhanced imaging data on a clinical image |
US20150230875A1 (en) * | 2014-02-17 | 2015-08-20 | Children's National Medical Center | Method and system for providing recommendation for optimal execution of surgical procedures |
JP2018108173A (ja) * | 2016-12-28 | 2018-07-12 | Sony Corporation | Medical image processing apparatus, medical image processing method, and program |
WO2019092950A1 (ja) * | 2017-11-13 | 2019-05-16 | Sony Corporation | Image processing apparatus, image processing method, and image processing system |
JP2020156860A (ja) * | 2019-03-27 | 2020-10-01 | Hyogo College of Medicine | Vessel recognition device, vessel recognition method, and vessel recognition system |
JP2021029979A (ja) * | 2019-08-29 | 2021-03-01 | National Cancer Center | Training data generation device, training data generation program, and training data generation method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6265627B2 (ja) * | 2013-05-23 | 2018-01-24 | Olympus Corporation | Endoscope apparatus and method for operating endoscope apparatus |
US11754712B2 (en) * | 2018-07-16 | 2023-09-12 | Cilag Gmbh International | Combination emitter and camera assembly |
US20200289228A1 (en) * | 2019-03-15 | 2020-09-17 | Ethicon Llc | Dual mode controls for robotic surgery |
2021
- 2021-12-27 CN CN202180088036.8A patent/CN116724334A/zh active Pending
- 2021-12-27 US US18/268,889 patent/US20240049944A1/en active Pending
- 2021-12-27 WO PCT/JP2021/048592 patent/WO2022145424A1/ja active Application Filing
- 2021-12-27 JP JP2022501024A patent/JP7146318B1/ja active Active
Non-Patent Citations (2)
Title |
---|
HASEGAWA, HIROSHI ET AL.: "WS15-4 New Efforts for Surgical System Development Using AI in Endoscopic Surgery Systems", JAPANESE JOURNAL OF GASTROENTEROLOGICAL SURGERY, vol. 53, no. Suppl. 1, 1 December 2020 (2020-12-01), JP , pages 290, XP009538024, ISSN: 0386-9768 * |
MORIMITSU, SHINTARO: "OP3-12 Extraction of Vessel Regions from Laparoscopic Video Using Fully Convolutional Network", JAMIT ANNUAL MEETING 2019; NARA, JAPAN; 24-26 JULY 2019, vol. 38, 24 July 2019 (2019-07-24), pages 50 - 398, XP009538037 * |
Also Published As
Publication number | Publication date |
---|---|
US20240049944A1 (en) | 2024-02-15 |
JP7146318B1 (ja) | 2022-10-04 |
CN116724334A (zh) | 2023-09-08 |
JPWO2022145424A1 (ja) | 2022-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220095903A1 (en) | Augmented medical vision systems and methods | |
JP6834184B2 (ja) | Information processing device, method for operating information processing device, program, and medical observation system | |
JP7312394B2 (ja) | Vessel recognition device, vessel recognition method, and vessel recognition system | |
JP7289373B2 (ja) | Medical image processing device, endoscope system, diagnosis support method, and program | |
JP7194889B2 (ja) | Computer program, learning model generation method, surgery support device, and information processing method | |
WO2020183770A1 (ja) | Medical image processing device, processor device, endoscope system, medical image processing method, and program | |
JP7457415B2 (ja) | Computer program, learning model generation method, and support device | |
WO2020090729A1 (ja) | Medical image processing device, medical image processing method and program, and diagnosis support device | |
JP2024051041A (ja) | Information processing device, information processing method, and computer program | |
WO2022145424A1 (ja) | Computer program, learning model generation method, and surgery support device | |
JPWO2020184257A1 (ja) | Medical image processing device and method | |
JP7387859B2 (ja) | Medical image processing device, processor device, endoscope system, method for operating medical image processing device, and program | |
WO2021044910A1 (ja) | Medical image processing device, endoscope system, medical image processing method, and program | |
WO2022054400A1 (ja) | Image processing system, processor device, endoscope system, image processing method, and program | |
JP7311936B1 (ja) | Computer program, learning model generation method, and information processing device | |
JP7147890B2 (ja) | Information processing system, information processing method, and program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2022501024 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21915283 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18268889 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202180088036.8 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21915283 Country of ref document: EP Kind code of ref document: A1 |