WO2024004013A1 - Program, information processing method, and information processing device - Google Patents

Program, information processing method, and information processing device

Info

Publication number
WO2024004013A1
Authority
WO
WIPO (PCT)
Prior art keywords
bleeding
area
image
images
bleeding area
Prior art date
Application number
PCT/JP2022/025668
Other languages
French (fr)
Japanese (ja)
Inventor
雅昭 伊藤
建夫 大金
大地 北口
一幸 林
悠貴 古澤
Original Assignee
国立研究開発法人国立がん研究センター
Priority date
Filing date
Publication date
Application filed by 国立研究開発法人国立がん研究センター
Priority to PCT/JP2022/025668
Publication of WO2024004013A1

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/045Control thereof

Definitions

  • the present disclosure relates to a program, an information processing method, and an information processing device.
  • In a known technique, points at which a change exceeding a certain level occurs are detected as feature points from a medical video or medical information, and when the video is displayed, it is displayed based on the detected feature points.
  • In Patent Document 1, a region whose area changes or a region with movement in a series of video frames is detected as a motion region, and among the detected motion regions, a motion region whose area increases with the passage of time, whose area continues to increase for a certain period or longer, and whose chromaticity is similar to that of blood is detected as a bleeding point.
  • During actual surgery, however, the field of view and the target organs are constantly moving, and blood vessels and blood pools are visible, so it is difficult to accurately determine bleeding points from the images.
  • The present invention has been made in view of the above circumstances, and its purpose is to provide a program and the like that can detect the occurrence of bleeding with high accuracy from images taken during surgery.
  • In one aspect of the present disclosure, a program causes a computer to execute processing of: acquiring a plurality of photographed images of a treatment region taken in time series, each having a plurality of color components; acquiring, for each of the acquired photographed images, a bleeding area image indicating the bleeding area in the photographed image based on a difference between the plurality of color components; calculating the difference in bleeding area between the bleeding area images acquired for photographed images that are adjacent in time series; and determining the presence or absence of bleeding from the treatment site based on the difference in bleeding area.
  • occurrence of bleeding can be detected with high accuracy from images taken during surgery.
  • FIG. 1 is a block diagram showing a configuration example of an endoscopic surgery system.
  • FIG. 2 is a flowchart illustrating an example of a bleeding detection processing procedure, and FIGS. 3A to 4B are explanatory diagrams of the bleeding detection processing.
  • FIG. 5 is a flowchart illustrating an example of an alert output processing procedure.
  • FIG. 6 is a flowchart illustrating an example of the bleeding detection processing procedure of Modification 1, and FIGS. 7A and 7B are explanatory diagrams of the bleeding detection processing of Modification 1.
  • FIG. 8 is a flowchart illustrating an example of the bleeding detection processing procedure of Modification 2, and FIGS. 9A and 9B are explanatory diagrams of the bleeding detection processing of Modification 2.
  • FIG. 10 is a flowchart illustrating an example of the bleeding detection processing procedure of Modification 3, and FIG. 11 is an explanatory diagram of the bleeding detection processing of Modification 3.
  • FIG. 12 is a flowchart illustrating an example of the bleeding detection processing procedure of Modification 4, and FIG. 13 is an explanatory diagram of the bleeding detection processing of Modification 4.
  • Further flowcharts and explanatory diagrams illustrate the bleeding detection processing of Modifications 5 to 8.
  • An explanatory diagram shows a configuration example of a learning model, and a flowchart illustrates an example of a learning model generation processing procedure.
  • Further flowcharts and explanatory diagrams illustrate the bleeding detection processing according to Embodiments 2 to 5.
  • FIG. 1 is a block diagram showing an example of the configuration of an endoscopic surgery system.
  • the endoscopic surgery system shown in FIG. 1 includes an endoscope 20 (laparoscope), an endoscope control device 21 that drives the endoscope 20, an information processing device 10 that controls the operation of the endoscope control device 21, and a display device 40, etc.
  • the endoscopic surgery system also includes a treatment instrument 30, a treatment instrument control device 31 that drives the treatment instrument 30, and an operation section 32 that receives operation inputs for the treatment instrument 30.
  • The operator inserts the endoscope 20, the treatment tool 30, forceps (not shown), and the like into the patient's body through a hole made in the patient's body, and performs a treatment such as resection of the treatment site using the treatment instrument 30 while checking images of the inside of the body taken by the endoscope 20 on the display device 40.
  • the endoscope 20 is a rigid scope having a rigid lens barrel, and has an imaging section and a light source section at the tip of the lens barrel.
  • the light source section includes a light source such as an LED (Light Emitting Diode) and an illumination lens, and illumination light (visible light) emitted from the light source is collected by the illumination lens and irradiated onto the subject.
  • the light source of the light source section may be a light source provided in the endoscope control device 21, in addition to the light source provided at the distal end of the endoscope 20.
  • In that case, the endoscope 20 has a light guide that guides the illumination light emitted from the light source of the endoscope control device 21 to the distal end of the endoscope 20, and the light guided by the light guide is irradiated toward the subject.
  • The photographing section has a photographing lens and an image sensor; the light emitted by the light source section and reflected from the subject is received by the image sensor via the photographing lens and converted into an image signal by photoelectric conversion in the image sensor.
  • the endoscope 20 acquires an image signal (captured image) of a treatment site inside the body by photographing the inside of the body with the imaging unit while the light source unit illuminates the inside of the body.
  • the photographing unit is configured to obtain, for example, 30 frames (30 images) of image signals (video signals) per second.
  • The endoscope 20 is connected to the endoscope control device 21; the endoscope control device 21 transmits control signals for controlling the operation of the endoscope 20 to the endoscope 20, and the endoscope 20 transmits the image signals captured by the photographing unit to the endoscope control device 21.
  • the control signal that the endoscope control device 21 transmits to the endoscope 20 includes information regarding photographing conditions, such as zoom magnification (variable magnification), focal length, and photographing direction during photographing, for example.
  • the endoscope control device 21 performs various signal processing on image signals acquired from the endoscope 20 to generate image data.
  • One frame of image data generated here includes an R (red) component, a G (green) component, and a B (blue) component; the endoscope control device 21 generates R component image data, G component image data, and B component image data from the image signal. Therefore, through one imaging operation by the endoscope 20 (imaging unit), the endoscope control device 21 generates one captured image made up of R component image data, G component image data, and B component image data.
  • the image data generated by the endoscope control device 21 is output to the information processing device 10, and the information processing device 10 outputs it to the display device 40. Thereby, a photographed image (video) photographed by the photographing section of the endoscope 20 is displayed on the display device 40, and the range illuminated by the light source section of the endoscope 20 can be optically observed.
  • The signal processing for generating image data from the image signals acquired by the imaging unit of the endoscope 20 may be executed by the information processing device 10 instead of by the endoscope control device 21.
  • In this case, the endoscope control device 21 outputs the image signal acquired from the imaging unit of the endoscope 20 as it is to the information processing device 10; the information processing device 10 performs predetermined signal processing on the acquired image signal to generate image data, and sequentially outputs the generated image data to the display device 40 for display.
  • The light emitted by the light source section is not limited to visible light and may be near-infrared light, special light controlled to a predetermined wavelength, or the like; the light source section may also have a plurality of light sources that emit different types of light.
  • The treatment tool 30 is an energy device for incising or resecting living tissue, sealing blood vessels, stopping bleeding, and the like using energy such as high-frequency current, ultrasound, or microwaves.
  • the treatment tool 30 includes, for example, a high-frequency knife, high-frequency scissors, forceps, an electric scalpel, an ultrasonic scalpel, and the like.
  • the treatment tool 30 is connected to a treatment tool control device 31, and the treatment tool control device 31 controls the operation of the treatment tool 30.
  • the treatment instrument control device 31 is a high frequency output device that outputs a high frequency current to the treatment instrument 30.
  • An operation section 32 is connected to the treatment instrument control device 31; the operation section 32 is an input device that receives operation input from medical personnel, including, for example, the endoscopic surgery operator, and sends a control signal corresponding to the input operation to the treatment instrument control device 31.
  • the operator manually operates scissors, forceps, and dissection forceps to incise, excise, and detach the tissue of the diseased area.
  • the information processing device 10 is a device capable of various information processing and transmission/reception of information, and is, for example, a personal computer, a server computer, a workstation, or the like.
  • the information processing device 10 may have a configuration in which distributed processing is performed by a multicomputer including a plurality of computers, or may be implemented by a virtual machine virtually constructed using software within one device.
  • The information processing device 10 may be a local server installed in a medical institution or the like, or may be a cloud server communicatively connected via a network such as the Internet.
  • the information processing device 10 will be described as one computer.
  • the information processing device 10 includes a control section 11, a storage section 12, an operation section 13, a communication section 14, a reading section 15, etc., and these sections are interconnected via a bus.
  • the control unit 11 includes one or more processors such as a CPU (Central Processing Unit), an MPU (Micro-Processing Unit), and a GPU (Graphics Processing Unit).
  • the control unit 11 performs various information processing, control processing, etc. that the information processing device 10 should perform by appropriately executing the program 12P stored in the storage unit 12.
  • the storage unit 12 includes a RAM (Random Access Memory), a flash memory, a hard disk, an SSD (Solid State Drive), and the like.
  • the storage unit 12 stores in advance a program 12P (program product) to be executed by the control unit 11 and various data necessary for executing the program 12P.
  • the storage unit 12 also temporarily stores data and the like generated when the control unit 11 executes the program 12P.
  • the program 12P may be written into the storage unit 12 during the manufacturing stage of the information processing device 10.
  • the operation unit 13 is an input device that accepts operation input from medical personnel, including, for example, an endoscopic surgery operator, and sends a control signal corresponding to the input operation details to the control unit 11.
  • the operation unit 13 may be a keyboard, a mouse, a trackball, a foot switch, a microphone, etc., or may be a sensor that accepts gesture input, gaze input, etc. Further, the operation unit 13 may be a touch panel configured integrally with a display unit (not shown) provided in the information processing device 10. The operation unit 13 accepts operations on the endoscope 20 in addition to operations on the information processing device 10 .
  • the operator's operation on the endoscope 20 is not limited to the configuration in which the operation unit 13 of the information processing device 10 receives and outputs the operation to the endoscope control device 21.
  • the endoscope control device 21 may be configured to accept an operator's operation on the endoscope 20.
  • The communication unit 14 has a communication module for connecting to the endoscope control device 21; it transmits control signals from the control unit 11 to the endoscope control device 21 and receives image data (photographed images) from the endoscope control device 21. Note that the image data received from the endoscope control device 21 is sequentially stored in, for example, the storage unit 12. The communication unit 14 also has a communication module for connecting to the display device 40, and transmits image data to be displayed on the display device 40 to the display device 40 under the control of the control unit 11.
  • The communication unit 14 may be configured to communicate with each of the endoscope control device 21 and the display device 40 by wire via a cable or wirelessly.
  • The communication unit 14 may also include a communication module for connecting to a network such as the Internet or a LAN (Local Area Network) by wired or wireless communication, and may be configured to transmit and receive information to and from other devices via the network.
  • The reading unit 15 reads information stored in a portable storage medium 10a such as a CD (Compact Disc)-ROM, a DVD (Digital Versatile Disc)-ROM, a USB (Universal Serial Bus) memory, or an SD (Secure Digital) card.
  • the program 12P (program product) and various data stored in the storage unit 12 may be read by the control unit 11 from the portable storage medium 10a via the reading unit 15 and stored in the storage unit 12. Further, the program 12P and various data stored in the storage unit 12 may be distributed by a remote server device, which the control unit 11 may download via the communication unit 14 and store in the storage unit 12.
  • the display device 40 is a liquid crystal display, an organic EL display, or the like, and displays various information appropriately transmitted from the information processing device 10.
  • the information processing device 10 configured as described above performs a process of transmitting a control signal for controlling the operation of the endoscope 20 to the endoscope control device 21 in accordance with an operation by an operator via the operation unit 13. Further, the information processing device 10 performs a process of acquiring a captured image (image data) captured by the endoscope 20 from the endoscope control device 21 and displaying it on the display device 40 . Further, the information processing device 10 performs a process of detecting the occurrence of bleeding from the treatment site based on image data sequentially acquired from the endoscope control device 21 and outputting an alert as necessary.
  • the endoscopic surgery system configured as described above is installed and used in, for example, an operating room of a medical institution.
  • The endoscopic surgery system also includes a pneumoperitoneum device that sends air into the patient's body (abdomen) to inflate the body cavity, a recording device that records images taken with the endoscope 20, and the like, and the pneumoperitoneum device, the recording device, and the like may be connected to the information processing device 10.
  • FIG. 2 is a flowchart showing an example of a bleeding detection processing procedure, and FIGS. 3A to 4B are explanatory diagrams of the bleeding detection processing.
  • the following processing is executed by the control unit 11 of the information processing device 10 according to the program 12P stored in the storage unit 12.
  • The endoscope 20 performs imaging with its imaging unit under the control of the endoscope control device 21, and outputs the acquired image signals of, for example, 30 frames per second to the endoscope control device 21.
  • the endoscope control device 21 performs signal processing on image signals sequentially acquired from the endoscope 20 and outputs processed image data (hereinafter referred to as a photographed image) to the information processing device 10.
  • the control unit 11 (captured image acquisition unit) of the information processing device 10 acquires captured images sequentially output from the endoscope control device 21 via the communication unit 14 (S11).
  • the control unit 11 acquires a photographed image as shown in FIG. 3A, for example.
  • the photographed image shown in FIG. 3A is a photographed image of a treatment region including the treatment instrument 30 and the forceps 30a, and includes a photographed image of an R component, a photographed image of a G component, and a photographed image of a B component.
  • the control unit 11 calculates a difference image by subtracting each pixel value of the G component captured image from each pixel value of the R component captured image (S12).
  • The difference image between the R component photographed image and the G component photographed image can indicate the degree of redness in the photographed image, and in this embodiment, the bleeding area in the photographed image is identified based on this difference image.
  • The control unit 11 may instead calculate a difference image by subtracting each pixel value of the B component photographed image from each pixel value of the R component photographed image to generate a difference image indicating the degree of redness in the photographed image.
  • the control unit 11 calculates a difference image as shown in FIG. 3B, for example.
  • FIG. 3B is an example of a difference image between a photographed image of the R component and a photographed image of the G component.
  • the area indicated by the broken line in FIGS. 3A and 3B is a bleeding area with strong redness.
  • The control unit 11 may also generate an image representing a feature amount such as the ratio (R/G) obtained by dividing each pixel value of the R component by each pixel value of the G component, or the logarithm of the division result (log(R/G)).
  • the control unit 11 may detect a bleeding area by combining a plurality of images.
  • the control unit 11 (bleeding area acquisition unit) generates a bleeding area image indicating the bleeding area in the captured image based on the color component difference image (S13).
  • For example, the control unit 11 compares each pixel value of the difference image with a predetermined threshold, and generates a bleeding area image by assigning white (for example, 1) to pixels whose pixel value is equal to or greater than the threshold and black (for example, 0) to pixels whose pixel value is less than the threshold.
  • the control unit 11 generates a bleeding area image as shown in FIG. 3C, for example. In the bleeding area image shown in FIG. 3C, a white area is identified as a bleeding area.
  • the threshold value for determining whether or not it is a bleeding area may be dynamically set based on each pixel value (brightness of each pixel) in the captured image. This makes it possible to set a threshold value in consideration of the imaging conditions, etc., and it is possible to more appropriately identify a bleeding area in a captured image.
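  • As a minimal sketch of steps S12 and S13 (the function name and the fixed threshold value are illustrative assumptions; as noted above, the threshold may instead be set dynamically from the brightness of the captured image), the difference image and the bleeding area image could be computed from one RGB frame as follows:
```python
import numpy as np

def bleeding_area_image(frame_rgb: np.ndarray, threshold: int = 60) -> np.ndarray:
    """Return a binary bleeding area image (1 = bleeding area, 0 = other).

    frame_rgb: H x W x 3 uint8 array holding the R, G, and B components of one captured frame.
    threshold: illustrative fixed value; the text also allows setting it dynamically
               from the pixel values (brightness) of the captured image.
    """
    r = frame_rgb[:, :, 0].astype(np.int16)
    g = frame_rgb[:, :, 1].astype(np.int16)
    diff = r - g                                  # S12: difference image indicating the degree of redness
    return (diff >= threshold).astype(np.uint8)   # S13: binarize into a bleeding area image
```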
  • the control unit 11 performs the above-described processes of steps S11 to S13 on the captured images of each frame sequentially transmitted from the endoscope control device 21, and generates a bleeding area image from each captured image.
  • The control unit 11 calculates the difference (amount of increase) in the bleeding area between frames that are adjacent in time series, based on the bleeding areas in the captured images of the frames sequentially transmitted from the endoscope control device 21 (S14). Specifically, the control unit 11 identifies, within the bleeding area of the current frame (the frame to be processed), the area that was not a bleeding area in the immediately preceding frame (hereinafter, the previous frame), generates an image showing this area (the increased bleeding area), and counts the number of pixels included in it. The control unit 11 then stores the counted number of pixels in the storage unit 12 as the amount of increase (difference) in the bleeding area, in association with the imaging time of the current frame.
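  • A sketch of step S14 under the same assumptions: the increased bleeding area is the set of pixels that are bleeding in the current frame but were not in the previous frame, and its pixel count is the difference stored for that imaging time.
```python
import numpy as np

def bleeding_area_increase(prev_mask: np.ndarray, curr_mask: np.ndarray) -> int:
    """Count pixels that belong to the bleeding area in the current frame but not in the previous frame."""
    increased = (curr_mask == 1) & (prev_mask == 0)   # newly bleeding pixels only
    return int(increased.sum())
```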
  • the control unit 11 acquires time-series data as shown by the solid line in FIG. 4A, for example, by sequentially calculating the difference (increase amount) in the bleeding area between frames that precede and follow in time series.
  • the solid line in FIG. 4A shows the time-series change in the difference in the bleeding area between each frame.
  • The imaging time may be the elapsed time since the endoscope 20 started the imaging process, or may be, for example, the imaging date and time indicated by a clock included in the endoscope control device 21.
  • Next, the control unit 11 calculates a moving average of the differences in bleeding area between frames (S15). For example, the control unit 11 calculates a simple moving average over a predetermined period of about 3 seconds. Since 30 frames are captured per second, the moving average of the differences in bleeding area is calculated over, for example, the 90 frames captured in 3 seconds. Specifically, for each imaging time, the control unit 11 calculates the average value of the differences in bleeding area between frames over the predetermined period up to that time, associates the calculated average value with that time, and stores it in the storage unit 12. By calculating this average for each imaging time, the control unit 11 obtains the moving average of the differences in bleeding area over all the frames photographed by the endoscope 20.
  • Next, the control unit 11 sets the threshold to be used when determining the presence or absence of bleeding at each imaging time, based on the moving average of the differences in bleeding area between frames calculated in step S15 up to that imaging time (S16).
  • For example, using the moving averages of the differences in bleeding area calculated up to each imaging time as measurement values, the control unit 11 calculates a value of +4σ (+4 standard deviations) with respect to the average of the measurement values, and sets it as the determination threshold for that imaging time.
  • In this embodiment, the control unit 11 determines that bleeding has occurred when the difference in bleeding area between frames falls outside the range of -4σ to +4σ (a predetermined range) of the moving average of the differences. Therefore, the control unit 11 sets each value of the upper (+4σ) time-series data indicated by the broken line in FIG. 4A as the threshold used when determining, for each imaging time, whether bleeding has occurred.
  • The control unit 11 determines the presence or absence of bleeding at each imaging time based on the difference in bleeding area between frames calculated in step S14 and the determination threshold calculated in step S16 (S17). Specifically, for each imaging time, the control unit 11 determines whether the difference in bleeding area calculated in step S14 is greater than or equal to the set determination threshold; if it is, it determines that bleeding has occurred, and if it is less than the threshold, it determines that no bleeding has occurred. That is, in this embodiment, the presence or absence of bleeding is determined depending on whether the amount of increase (difference) in the bleeding area is greater than or equal to the determination threshold, and the determination threshold is set based on the moving average of the amount of increase in the bleeding area.
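  • A sketch of steps S15 to S17, assuming a window of 90 frames (about 3 seconds at 30 frames per second) as in the text; the statistic used here, the mean plus four standard deviations of the recent differences, is a simplification of the moving-average-based +4σ threshold described above:
```python
import numpy as np
from collections import deque

class BleedingJudge:
    """Judge the onset of bleeding from the per-frame increase in bleeding area (steps S15-S17)."""

    def __init__(self, window: int = 90, k_sigma: float = 4.0):
        self.history = deque(maxlen=window)   # differences over roughly the last 3 seconds
        self.k_sigma = k_sigma

    def update(self, diff: int) -> bool:
        bleeding = False
        if len(self.history) == self.history.maxlen:
            mean = float(np.mean(self.history))
            sigma = float(np.std(self.history))
            threshold = mean + self.k_sigma * sigma   # determination threshold (+4 sigma)
            bleeding = diff >= threshold
        self.history.append(diff)
        return bleeding
```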
  • Here, the reason why the value of +4σ with respect to the moving average of the differences in bleeding area between frames is used as the threshold for determining the occurrence of bleeding will be explained. In FIG. 4B, the solid line shows the time-series change in the difference in bleeding area between adjacent frames, and the broken lines show the time-series changes of the -2σ and +2σ values with respect to the moving average of those differences. For example, if each value of the upper (+2σ) time-series data indicated by the broken line in FIG. 4B were used as the threshold for determining the presence or absence of bleeding, the difference in bleeding area would be at or above the determination threshold at times T1 to T5 in FIG. 4B. By using the value of the moving average of the differences in bleeding area plus 4σ, rather than plus 2σ, as the threshold, it can be determined that bleeding has occurred only at time T1, when the bleeding area is increasing sharply, even in the state shown in FIG. 4B, so the occurrence of bleeding can be detected with high accuracy.
  • If it is determined in step S17 that bleeding has occurred (S17: YES), the control unit 11 specifies that bleeding has started (S18). Note that if it is determined in step S17 that no bleeding has occurred (S17: NO), the control unit 11 skips the process of step S18 and proceeds to the process of step S19.
  • the control unit 11 determines whether or not to end the above-described processing (S19). For example, the control unit 11 receives an instruction to end the above-described process from the operator via the operation unit 13, and determines to end the process when the end instruction is received. When the control unit 11 determines that the above-described process is not to be ended (S19: NO), the process returns to step S11.
  • In that case, the control unit 11 sequentially acquires photographed images from the endoscope control device 21 (S11) and executes the processes of steps S12 to S18 based on the acquired images. If the control unit 11 determines to end the above-described processing (S19: YES), it ends the series of processes.
  • FIG. 5 is a flowchart showing an example of an alert output processing procedure.
  • the control unit 11 of the information processing device 10 executes the following process while executing the above-described bleeding detection process.
  • The control unit 11 determines whether the start of bleeding has been identified (S21), and if it determines that the start of bleeding has been identified (S21: YES), it starts outputting an alert (S22).
  • For example, the control unit 11 displays a message on the display device 40 notifying that bleeding has occurred, thereby notifying medical personnel such as the doctor performing the operation and nurses.
  • the control unit 11 stores the imaging time at the time when it is determined that bleeding has started (bleeding has occurred) in the storage unit 12 as the bleeding occurrence time (S23).
  • control unit 11 determines whether the alert termination conditions are met (S24).
  • the condition for ending the alert is, for example, when the bleeding has stopped (ended), and the control unit 11 may determine that the bleeding has stopped when the bleeding area between the previous and subsequent frames does not increase in time series.
  • Alternatively, the control unit 11 may perform processes similar to steps S161 to S165 of the bleeding detection process in FIG. 30, described in Embodiment 5, and determine that the alert end condition is satisfied when it detects that the bleeding has ended as a result of a hemostasis operation.
  • If the control unit 11 determines that the alert end condition is not satisfied (S24: NO), it continues outputting the alert started in step S22 (S25).
  • If the control unit 11 determines that the alert end condition is satisfied (S24: YES), it terminates the output of the alert started in step S22 (S26). Further, the control unit 11 stores the imaging time at which it determined that the alert end condition was satisfied in the storage unit 12 as the bleeding end time (S27), and then returns to the process of step S21. After the process of step S25, the control unit 11 determines whether or not to end the above-described process (S28). For example, the control unit 11 accepts an instruction to end the above-described process from the operator via the operation unit 13, and determines to end the process when the end instruction is received.
  • If the control unit 11 determines that the above-described process is not to be ended (S28: NO), the process returns to step S24. If the control unit 11 determines to end the above-described processing (S28: YES), it ends the series of processes. In this way, when the occurrence of bleeding is identified through the bleeding detection process, an alert can be output to notify medical personnel of the occurrence of bleeding. Further, after the alert output has started, if the bleeding has ended and the alert end condition is satisfied, the alert output can be terminated to notify the medical staff that the bleeding has stopped.
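  • The alert output processing of FIG. 5 can be outlined as follows; bleeding_started(), bleeding_ended(), and the display interface are hypothetical placeholders for the processing described above (the end condition may instead reuse the hemostasis detection of Embodiment 5):
```python
def alert_loop(frames, detector, display, storage):
    """Outline of FIG. 5: start an alert when bleeding starts and end it when bleeding stops."""
    alert_active = False
    for t, frame in frames:                          # t: imaging time of each frame
        if not alert_active and detector.bleeding_started(frame):
            display.show("Bleeding detected")        # S22: start outputting an alert
            storage["bleeding_start_time"] = t       # S23: store the bleeding occurrence time
            alert_active = True
        elif alert_active and detector.bleeding_ended(frame):
            display.clear()                          # S26: end the alert output
            storage["bleeding_end_time"] = t         # S27: store the bleeding end time
            alert_active = False
```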
  • In this embodiment, bleeding areas can be detected with high accuracy in images taken with the endoscope 20, based on the difference image between the R component image and the G or B component image. In addition, since the presence or absence of bleeding is determined based on the amount of increase in the accurately detected bleeding area between images that are adjacent in time series, the occurrence of bleeding can be detected early and objectively and medical personnel can be notified. Medical personnel can therefore promptly treat the bleeding and can also decide at an early stage whether preparations for blood transfusion are necessary.
  • Although the present embodiment has been described using, as an example, a configuration that detects the occurrence of bleeding based on images captured by the endoscope 20 during surgery, the present disclosure is not limited to such a configuration and can also be used, for example, in verification processing after surgery.
  • For example, the information processing device 10 may display a graph as shown in FIG. 4A on the display device 40 (display unit).
  • the graph shown in FIG. 4A is generated by plotting the difference in bleeding area between frames and the bleeding determination threshold at each imaging time in association with each imaging time.
  • the generated graph may be output to the display device 40 and displayed.
  • the information processing device 10 may indicate the timing (imaging time) at which the occurrence of bleeding was determined in the displayed graph. For example, as shown by the arrow in FIG. 4A, the imaging time at which the occurrence of bleeding was detected can be presented.
  • When the information processing device 10 performs the process of determining the presence or absence of bleeding after surgery based on the photographed images taken with the endoscope 20 during the surgery, the difference in bleeding area between frames may be calculated for all the photographed images and a graph as shown in FIG. 4A may be displayed on the display device 40. In this case, the timing of bleeding occurrence and the like can be verified in the post-surgery verification process.
  • the process for determining the presence or absence of bleeding based on the photographed image taken by the endoscope 20 is not limited to a configuration in which the information processing device 10 performs it locally.
  • a server may be provided that executes a process for determining whether or not bleeding has occurred.
  • In that case, the information processing device 10 is configured to transmit the photographed images sequentially acquired from the endoscope control device 21 to the server, and the server determines the presence or absence of bleeding based on the photographed images and sends the determination result to the information processing device 10. Even with such a configuration, the same processing as in this embodiment is possible and the same effects can be obtained.
  • Modification 1: A first modification of Embodiment 1 described above will be described.
  • In this modification, an information processing device 10 will be described that performs correction processing to correct the deviation between frames that are adjacent in time series, based on the amount of movement (amount of change) of the imaging position of the imaging unit when the imaging unit of the endoscope 20 performs imaging.
  • The imaging position and imaging direction of the imaging unit of the endoscope 20 change depending on, for example, a change in how the operator holds (grasps) the endoscope 20.
  • the presence or absence of bleeding can be determined more accurately by correcting the shift that occurs between frames in accordance with changes in the imaging position and imaging direction.
  • FIG. 6 is a flowchart illustrating an example of the bleeding detection processing procedure of Modification 1, and FIGS. 7A and 7B are explanatory diagrams of the bleeding detection processing of Modification 1.
  • the process shown in FIG. 6 is the process shown in FIG. 2 with steps S31 and S32 added between steps S11 and S12. Description of the same steps as in FIG. 2 will be omitted.
  • When the control unit 11 acquires a photographed image captured by the endoscope 20 from the endoscope control device 21 (S11), it acquires the amount of change in the photographing position of the endoscope 20 (S31).
  • the endoscopic surgery system has a holding mechanism that holds the endoscope 20, and the endoscope control device 21 is configured to control the holding state of the endoscope 20 by controlling the operation of the holding mechanism.
  • the control unit 11 can acquire the amount of change in the imaging position from the endoscope control device 21.
  • Alternatively, the control unit 11 may obtain the amount of change in the imaging position based on the detection result of a sensor.
  • the amount of change in the photographing position includes the amount of translation and rotation on the surface facing the subject, and the zoom magnification (scale).
  • The control unit 11 performs correction processing based on the amount of change in the imaging position on the bleeding area image generated from the immediately preceding captured image (the bleeding area image of the previous frame) (S32). For example, the control unit 11 calculates a transformation matrix that translates each pixel by the amount of translation on the plane facing the subject, rotates each pixel by the amount of rotation, and scales the image according to the zoom magnification. Then, the control unit 11 moves each pixel of the bleeding area image of the previous frame based on the calculated transformation matrix and generates a bleeding area image after the conversion processing.
  • In this way, coordinate transformation is performed on the bleeding area image of the previous frame using the transformation matrix, and a bleeding area image in which each pixel of the previous frame is aligned with the current frame (the frame to be processed) is obtained.
  • FIG. 7A shows a superimposed display of bleeding area images generated from two frames that are adjacent in time series, where the area indicated by a broken line is the bleeding area in the previous frame and the area indicated by a solid line is the bleeding area in the subsequent frame (current frame).
  • The control unit 11 can correct the deviation based on the amount of change in the imaging position between the previous frame and the subsequent frame by performing correction processing on the bleeding area image of the previous frame based on that amount of change. Therefore, by performing such correction processing, it is possible to obtain a bleeding area image of the previous frame in which the deviation based on the amount of change in the imaging position has been corrected, as shown in FIG. 7B.
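  • A sketch of step S32, assuming OpenCV; decomposing the change in imaging position into translation, rotation, and scale follows the text, while the parameter names and the use of warpAffine are illustrative choices:
```python
import cv2
import numpy as np

def align_previous_mask(prev_mask: np.ndarray, dx: float, dy: float,
                        angle_deg: float, scale: float) -> np.ndarray:
    """Warp the previous frame's bleeding area image to the viewpoint of the current frame.

    dx, dy: translation of the imaging position on the plane facing the subject (pixels).
    angle_deg: rotation of the imaging direction.  scale: change in zoom magnification.
    """
    h, w = prev_mask.shape
    # Rotation and scaling about the image center, followed by the translation.
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, scale)
    m[0, 2] += dx
    m[1, 2] += dy
    return cv2.warpAffine(prev_mask, m, (w, h), flags=cv2.INTER_NEAREST)
```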
  • After that, the control unit 11 performs the processing from step S12 onward. In step S14, the control unit 11 calculates the difference between the bleeding area image of the previous frame corrected in step S32 and the bleeding area image of the current frame generated in step S13 (S14).
  • Steps S31 to S32 in FIG. 6 may be executed after the process of step S11 and before the process of step S14.
  • In this modification, the deviation based on the amount of change in the imaging position between frames that are adjacent in time series can be corrected, so the difference in bleeding area between frames can be calculated with high accuracy with that deviation eliminated.
  • In this modification, the deviation of each pixel position due to the amount of change in the imaging position is corrected on the bleeding area image generated from the captured image, but the present invention is not limited to such a configuration; for example, the correction processing based on the amount of change in the photographing position may be performed on the photographed image itself. In that case, in step S32 the control unit 11 performs the correction processing on the captured image of the previous frame, generates a bleeding area image from the corrected captured image, and performs the process of step S14 using this image as the bleeding area image of the previous frame.
  • Modification 2: In this modification, an information processing device 10 will be described that performs correction processing to correct the shift between frames that are adjacent in time series based on the images taken with the endoscope 20.
  • Whereas Modification 1 described above mechanically acquires the amount of change in the imaging position of the endoscope 20 (imaging unit), this modification calculates the amount of deviation between frames by image processing of the captured images and corrects the calculated deviation.
  • FIG. 8 is a flowchart showing an example of the bleeding detection processing procedure of Modification 2, and FIGS. 9A and 9B are explanatory diagrams of the bleeding detection processing of Modification 2.
  • the process shown in FIG. 8 is the process shown in FIG. 6 with steps S41 to S43 added instead of steps S31 to S32. Description of the same steps as in FIG. 6 will be omitted.
  • When the control unit 11 acquires a photographed image captured by the endoscope 20 from the endoscope control device 21 (S11), it extracts, from the acquired photographed image, feature points corresponding to the feature points in the previous frame (S41).
  • For example, the control unit 11 uses image processing such as optical flow on the captured images sequentially acquired from the endoscope control device 21 to obtain motion vectors (movement amount and direction) of feature points, and can thereby extract, from the current frame, feature points corresponding to the feature points extracted in the previous frame.
  • FIG. 9A is an example in which the feature points extracted from the captured image are indicated by black circles; as shown in FIG. 9B, the control unit 11 obtains the feature points in the subsequent frame (frame n) based on the feature points in the previous frame (frame n-1). Note that the feature points whose movements are to be tracked may be specified using any method. In this way, the amount of shift (the amount of movement of each feature point) between frames that are adjacent in time series can be obtained by image processing of the captured images.
  • Next, based on the positions of the feature points in the previous frame and the positions of the corresponding feature points in the current frame extracted in step S41, the control unit 11 calculates a transformation matrix that brings the positions of the feature points of the previous frame closer to the positions of the feature points of the current frame (S42). For example, the control unit 11 calculates a transformation matrix that minimizes the difference between the positions of the feature points of the previous frame after being moved by the transformation processing based on the matrix and the positions of the corresponding feature points of the current frame.
  • The control unit 11 then performs correction processing on the bleeding area image of the previous frame based on the calculated transformation matrix (S43), thereby correcting the amount of deviation from the subsequent frame.
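  • A sketch of steps S41 to S43, again assuming OpenCV: Lucas-Kanade optical flow tracks feature points from the previous frame into the current frame, and a partial affine transformation estimated from the matched points is applied to the previous bleeding area image (the specific functions are one possible choice, not prescribed by the text).
```python
import cv2
import numpy as np

def align_by_optical_flow(prev_gray: np.ndarray, curr_gray: np.ndarray,
                          prev_mask: np.ndarray) -> np.ndarray:
    """Align the previous frame's bleeding area image to the current frame using tracked feature points."""
    # S41: extract feature points in the previous frame and track them into the current frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=10)
    pts_curr, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    # S42: estimate a transformation matrix that maps the previous feature points onto the current ones.
    m, _inliers = cv2.estimateAffinePartial2D(good_prev, good_curr)
    # S43: apply the transformation to the previous frame's bleeding area image.
    h, w = prev_mask.shape
    return cv2.warpAffine(prev_mask, m, (w, h), flags=cv2.INTER_NEAREST)
```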
  • steps S41 to S43 may be executed after the process of step S11 and before the process of step S14.
  • In this modification as well, the same effects as in Embodiment 1 and Modification 1 described above can be obtained. Furthermore, since the amount of shift between frames is calculated by image processing based on the captured images, there is no need to mechanically obtain the amount of change in the imaging position of the endoscope 20. Therefore, even when, for example, post-surgery detection processing for the occurrence of bleeding is performed based only on the images taken with the endoscope 20, the deviation between frames can be eliminated and the difference in bleeding area between frames can be calculated with high accuracy. The modifications described in Embodiment 1 and Modification 1 can also be applied to this modification.
  • Modification 3: In this modification, an information processing device 10 will be described that, when the amount of change in the zoom magnification (enlargement rate) during imaging by the endoscope 20 is greater than or equal to a predetermined magnification, invalidates the determination result of bleeding occurrence based on the photographed images taken before and after the change in zoom magnification.
  • The photographing unit of the endoscope 20 is configured to be able to change the zoom magnification during photographing, and when zoom-in (enlargement) photographing is performed, the bleeding area in the photographed image before zooming in appears greatly enlarged in the photographed image after zooming in.
  • In this case, the size (number of pixels) of the bleeding area in the captured image increases before and after zooming in, and if the presence or absence of bleeding is detected based on this increase, erroneous detection will occur. Therefore, in this modification, when the amount of change in the zoom magnification (enlargement rate) is greater than or equal to a predetermined magnification, the determination result of bleeding occurrence based on the captured images before and after the change in zoom magnification is invalidated, so that false detections can be suppressed.
  • FIG. 10 is a flowchart illustrating an example of the bleeding detection processing procedure of Modification 3, and FIG. 11 is an explanatory diagram of the bleeding detection processing of Modification 3.
  • the process shown in FIG. 10 is the process shown in FIG. 2 with steps S51 to S53 added between YES in step S17 and step S18. Description of the same steps as in FIG. 2 will be omitted.
  • The control unit 11 executes the processes of steps S11 to S17 in FIG. 2 and acquires the zoom magnification at the time of imaging by the endoscope 20 (S51). For example, the control unit 11 acquires from the endoscope control device 21 the zoom magnification with which the endoscope control device 21 controlled the endoscope 20. By acquiring the zoom magnification of the endoscope 20 each time an image is taken, the control unit 11 obtains time-series data of the zoom magnification as shown by the solid line in FIG. 11. The solid line in FIG. 11 shows the time-series change in the zoom magnification of the endoscope 20; in the graph shown in FIG. 11, the horizontal axis shows the imaging time and the vertical axis shows the zoom magnification at each imaging time.
  • Alternatively, the control unit 11 may calculate the amount of change in the zoom magnification (magnification rate) from the previous frame by image processing of frames that are adjacent in time series.
  • Next, the control unit 11 calculates a threshold value (magnification threshold) to be used when determining whether zoom-in processing has been performed (S52). For example, the control unit 11 calculates the moving average of the zoom magnification shown by the solid line in FIG. 11 and sets the moving average of the zoom magnification at each imaging time as the magnification threshold for that imaging time.
  • the moving average of the zoom magnification is, for example, a simple moving average every predetermined period of about 3 seconds.
  • The broken line in FIG. 11 indicates the time-series change in the moving average of the zoom magnification at each imaging time, and the control unit 11 sets each value of the time-series data shown by the broken line in FIG. 11 as the magnification threshold used when determining whether zoom-in processing has been performed.
  • The control unit 11 determines whether the zoom magnification acquired in step S51 is greater than or equal to the magnification threshold set in step S52 (S53). If the control unit 11 determines that the zoom magnification is equal to or greater than the magnification threshold (S53: YES), it skips the process of step S18 and proceeds to the process of step S19. In this way, if the zoom magnification is equal to or greater than the magnification threshold, that is, if it is determined that zoom-in processing that should invalidate the bleeding occurrence determination has been performed, the control unit 11 invalidates the determination result of the most recent step S17 that bleeding has occurred, and determines that no bleeding has occurred.
  • If the control unit 11 determines that the zoom magnification is less than the magnification threshold (S53: NO), it determines that zoom-in processing that should invalidate the bleeding occurrence determination has not been performed, and executes the process of step S18. That is, if the zoom magnification is less than the magnification threshold, the control unit 11 specifies, based on the determination result that bleeding has occurred, that bleeding has started. In the time-series change in zoom magnification shown in FIG. 11, the section indicated by the double-headed arrow indicates the time during which it is determined that zoom-in processing has been performed.
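  • A sketch of steps S51 to S53 (the window length again corresponds to the roughly 3-second moving average; the class name is an assumption): the bleeding determination for a frame is invalidated when the current zoom magnification is at or above the moving average of recent magnifications.
```python
from collections import deque

class ZoomGate:
    """Invalidate bleeding determinations made during zoom-in (Modification 3, steps S51-S53)."""

    def __init__(self, window: int = 90):
        self.zoom_history = deque(maxlen=window)

    def zoomed_in(self, zoom: float) -> bool:
        if self.zoom_history:
            threshold = sum(self.zoom_history) / len(self.zoom_history)  # moving average = magnification threshold
            result = zoom >= threshold
        else:
            result = False   # no history yet, so no zoom-in can be detected
        self.zoom_history.append(zoom)
        return result
```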
  • Note that steps S51 to S53 in FIG. 10 only need to be executed after the process of step S11 and before the process of step S18. For example, when it is determined that zoom-in processing has been performed, the processes of steps S12 to S18 may be skipped; in this case, the processes for determining the occurrence of bleeding are not executed, which makes it possible to reduce the processing load.
  • In this modification as well, the same effects as in Embodiment 1 described above can be obtained.
  • In addition, it is determined whether zoom-in processing that should invalidate the bleeding occurrence determination has been performed, and when such zoom-in processing has been performed, the determination result of bleeding occurrence based on the captured images taken before and after the zoom-in is invalidated, so that erroneous detection of bleeding occurrence caused by a large change in zoom magnification can be suppressed.
  • the configuration of this modification can be combined with the configurations of each modification described above, and the same effect can be obtained even when combined with each modification. Further, also in this modification, the modifications described in the above-described first embodiment and each modification can be applied.
  • Modification 4: In this modification, an information processing device 10 will be described that, for a bleeding area detected in a captured image, traces the center of gravity of the bleeding area back over a predetermined period and, if the traced trajectory is included in a predetermined area at the periphery of the captured image, excludes that bleeding area from the bleeding occurrence determination process as an area that has entered the imaging range from outside due to movement of the imaging position or the like. As the imaging position of the endoscope 20 moves, the imaging range moves, and a bleeding area that was outside the imaging range when the previous frame was captured may come within the imaging range of the current frame.
  • Since the purpose is to accurately detect bleeding that has occurred within the imaging range of the endoscope 20, detecting the presence or absence of bleeding based on a bleeding area that has entered the imaging range from outside would cause false detections. Therefore, in this modification, erroneous detection of bleeding occurrence is suppressed by excluding such bleeding areas from the targets of bleeding occurrence detection.
  • FIG. 12 is a flowchart showing an example of a procedure for bleeding detection processing in Modification 4, and FIG. 13 is an explanatory diagram of bleeding detection processing in Modification 4.
  • the process shown in FIG. 12 is the process shown in FIG. 2 with steps S61 to S63 added between steps S13 and S14. Description of the same steps as in FIG. 2 will be omitted.
  • the control unit 11 executes the processes of steps S11 to S13 in FIG. 2, and generates a bleeding area image indicating the bleeding area in the current frame (S13).
  • the control unit 11 identifies the center of gravity of the bleeding area in the generated bleeding area image (S61).
  • For example, the control unit 11 calculates the coordinate values of the center of gravity of the bleeding area in the bleeding area image. Note that the coordinate values of each pixel in the bleeding area image are expressed, for example, with the upper left of the captured image as the origin (0, 0), by the number of pixels to the right of the origin and the number of pixels below the origin.
  • The coordinate value of the center of gravity of the bleeding area may be, for example, the average of the coordinate values of all pixels included in the bleeding area; alternatively, it may be the coordinate value given by the horizontal center position between the leftmost and rightmost pixels of the bleeding area and the vertical center position between the uppermost and lowermost pixels of the bleeding area, or the average of the coordinate values of the pixels on the contour of the bleeding area.
  • FIG. 13 shows an example of a bleeding area image, and for the bleeding area shown in FIG. 13, the position indicated by the + mark is specified as the position of the center of gravity C of the bleeding area.
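  • As a reference, the centroid calculation in step S61 can be sketched as follows; this is a minimal example that assumes the bleeding area image is held as a NumPy array of 0/1 pixel values, and the function and variable names are illustrative rather than part of the embodiment.

        import numpy as np

        def bleeding_centroid(bleeding_mask: np.ndarray):
            # bleeding_mask: 2-D array with 1 = bleeding pixel, 0 = non-bleeding pixel;
            # the origin (0, 0) is the upper-left corner of the captured image.
            ys, xs = np.nonzero(bleeding_mask)
            if xs.size == 0:
                return None  # no bleeding area in this frame
            # average of the coordinate values of all pixels included in the bleeding area
            return float(xs.mean()), float(ys.mean())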
  • Next, based on the bleeding area in the current frame (bleeding area image), the control unit 11 traces the center of gravity of the bleeding area back to a frame a predetermined time ago, and obtains the locus of the center of gravity of the bleeding area (S62).
  • In FIG. 13, the centroids of the bleeding area traced back to the frames up to a predetermined time ago are each indicated by a + mark.
  • For the frames before the current frame, the coordinate values of the center of gravity of the bleeding area may already have been calculated and stored in the storage unit 12. In this case, the control unit 11 can acquire the locus of the center of gravity of the bleeding area by reading out the coordinate values of the center of gravity of the bleeding area in each frame stored in the storage unit 12.
  • Alternatively, the control unit 11 may obtain the locus of the center of gravity of the bleeding area by extracting the bleeding area in the previous frame based on the bleeding area in the bleeding area image of the current frame, and sequentially executing the process of calculating the center of gravity of the extracted bleeding area up to the frame a predetermined time ago.
  • the control unit 11 determines whether the locus of the center of gravity of the acquired bleeding area is included in a predetermined area at the periphery of the photographed image (S63). For example, the control unit 11 determines whether the trajectory of the center of gravity of the bleeding area is included in the peripheral area indicated by hatching in the photographed image (bleeding area image) shown in FIG. 13 . If it is determined that the trajectory of the center of gravity of the bleeding area is included in the peripheral area of the photographed image (S63: YES), the control unit 11 skips the processes of steps S14 to S18 and proceeds to the process of step S19.
  • When the peripheral area of the photographed image includes the locus of the center of gravity of the bleeding area, it is considered that the bleeding area has come into the photographing range from outside the photographing range due to movement of the photographing position or the like. Therefore, when such a bleeding area is detected, erroneous detection is avoided by not performing the process for determining the occurrence of bleeding based on this bleeding area. Since the locus of the center of gravity of the bleeding area shown in FIG. 13 is included in the lower edge area of the photographed image, it is determined that this bleeding area has come into the photographing range from outside the photographing range due to movement of the photographing position or the like, and the bleeding occurrence determination process is skipped.
  • If the control unit 11 determines that the locus of the center of gravity of the bleeding area is not included in the peripheral area of the photographed image (S63: NO), it determines that the bleeding area has occurred within the photographing range, and executes the processes of steps S14 to S18. As a result, for bleeding that occurs within the photographing range of the endoscope 20, the bleeding occurrence determination process is performed based on the difference in the bleeding area between frames that follow in time series. Note that when the bleeding area image includes a plurality of bleeding areas, the control unit 11 executes the processes of steps S61 to S63 for each bleeding area.
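  • The checks in steps S62 and S63 could be sketched roughly as below, assuming that the per-frame centroids are kept in a list ordered from the oldest traced frame to the current frame and that the peripheral region is defined by a fixed margin ratio; both assumptions are illustrative and not taken from the embodiment.

        def trajectory_in_periphery(centroids, image_w, image_h, margin_ratio=0.1):
            # Return True if every point of the centroid trajectory lies in the
            # peripheral band of the image, i.e. the bleeding area likely entered
            # the imaging range from outside and steps S14 to S18 should be skipped.
            band_x = image_w * margin_ratio
            band_y = image_h * margin_ratio
            for cx, cy in centroids:
                in_band = (cx < band_x or cx > image_w - band_x or
                           cy < band_y or cy > image_h - band_y)
                if not in_band:
                    return False  # at least one point lies in the central area
            return True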
  • According to this modification, the same effects as in Embodiment 1 described above can be obtained. Furthermore, in this modification, a bleeding area that has come into the imaging range from outside the imaging range due to movement of the imaging position or the like is not used in the process of determining the occurrence of bleeding, thereby suppressing erroneous detection of the occurrence of bleeding.
  • The configuration of this modification can be combined with the configurations of each modification described above, and the same effect can be obtained even when combined with each modification. Further, also in this modification, the modifications described in the above-described first embodiment and each modification can be applied.
  • Modification 5: The following describes an information processing device 10 in which, when the bleeding area in a photographed image rapidly decreases due to the generation of mist, the photographed image (frame) at that time is not subjected to the bleeding occurrence determination process.
  • When mist (mist, steam, or the like) is generated, the redness of the bleeding area in the photographed image is diluted, so that it is no longer determined to be a bleeding area, and a situation may occur in which the area of the bleeding area suddenly decreases. Therefore, in this modification, when mist is generated, the images taken before and after the mist generation are excluded from the targets for determining the presence or absence of bleeding and the bleeding occurrence determination process is not executed, thereby suppressing erroneous detection of the occurrence of bleeding.
  • FIG. 14 is a flowchart showing an example of a procedure for bleeding detection processing according to modification 5, and FIG. 15 is an explanatory diagram of the bleeding detection processing according to modification 5.
  • the process shown in FIG. 14 is the process shown in FIG. 2 with steps S71 and S72 added between steps S15 and S16. Description of the same steps as in FIG. 2 will be omitted.
  • The control unit 11 executes the processes of steps S11 to S15 in FIG. 2. In step S14, the control unit 11 acquires time-series data of the differences in the bleeding area, as shown by the solid line in FIG. 15, by sequentially calculating the difference in the bleeding area between frames that follow in time series.
  • In FIG. 15, the horizontal axis shows the imaging time, and the vertical axis shows the difference in the bleeding area at each imaging time.
  • Next, the control unit 11 calculates a threshold value (mist determination threshold) used for determining whether mist has occurred (S71).
  • For example, the control unit 11 uses the moving average of the differences in the bleeding area calculated up to each imaging time as a measured value, calculates a value of -2σ with respect to the average value of the measured values, and sets this value as the mist determination threshold at that imaging time. The broken lines in FIG. 15 indicate the time-series changes of the -2σ value and the +2σ value with respect to the moving average of the differences in the bleeding area, and the control unit 11 determines that mist has occurred when the difference in the bleeding area between frames is less than -2σ with respect to the moving average of the differences. Therefore, the control unit 11 sets each value of the lower (-2σ) time-series data indicated by the broken line in FIG. 15 as the threshold used when determining whether mist has occurred at each imaging time.
  • Then, the control unit 11 determines whether mist has occurred in the photographed image based on the difference in the bleeding area between frames calculated in step S14 and the mist determination threshold calculated in step S71 (S72). Specifically, for each imaging time, the control unit 11 determines whether the difference in the bleeding area calculated in step S14 is less than the set mist determination threshold; if it is less than the mist determination threshold, it determines that mist has occurred, and if it is equal to or greater than the mist determination threshold, it determines that mist has not occurred. That is, in this modification, the presence or absence of mist generation is determined depending on whether the amount of increase (difference) in the bleeding area is less than the mist determination threshold.
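  • A minimal sketch of the check in steps S71 and S72 is shown below. For simplicity it takes the mean and standard deviation directly over the recent differences instead of over their moving averages, and the window length is an assumed parameter.

        import numpy as np

        def mist_occurred(area_diffs, window=30):
            # area_diffs: per-frame differences in the bleeding area, oldest first.
            if len(area_diffs) < window + 1:
                return False  # not enough history to set a threshold yet
            history = np.array(area_diffs[-(window + 1):-1], dtype=float)
            mist_threshold = history.mean() - 2.0 * history.std()  # -2 sigma
            return area_diffs[-1] < mist_threshold  # sudden decrease -> mist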
  • The presence or absence of mist generation may be determined not only from color changes in the captured images (specifically, differences in the bleeding area) but also, for example, from whether the contrast (brightness ratio) of the entire image has suddenly decreased. For example, it may be determined that mist has occurred when the contrast of the entire image has decreased by a predetermined value or more.
  • If it is determined in step S72 that mist has not occurred (S72: NO), the control unit 11 executes the processes from step S16 onwards and performs the bleeding occurrence determination process. On the other hand, if it is determined that mist has occurred (S72: YES), the control unit 11 skips the processes of steps S16 to S18 and proceeds to the process of step S19. Thereby, when mist is generated, the control unit 11 avoids erroneous detection by not performing the bleeding occurrence determination process based on the photographed image (bleeding area). In the time-series change in the difference in the bleeding area shown in FIG. 15, the sections indicated by the arrows indicate the times at which it is determined that mist has occurred.
  • Modification 6: The following describes an information processing device 10 that, when a bleeding area that has been hidden by the treatment instrument 30 or gauze in the photographed image is exposed as the treatment instrument 30 or gauze moves, excludes the exposed bleeding area from the targets used for determining the occurrence of bleeding. During surgery, bleeding is stopped using the treatment instrument 30 and gauze, and when the treatment instrument 30 and gauze are removed after the bleeding has stopped, the bleeding area that was hidden by them is exposed, and a situation may arise in which the area of the bleeding area increases significantly. Therefore, in this modification, the bleeding area exposed due to the movement of the treatment instrument 30 or gauze is excluded from the targets when detecting the occurrence of bleeding, thereby suppressing erroneous detection of the occurrence of bleeding.
  • the information processing device 10 of this modification stores a learning model for detecting a predetermined object in an image in the storage unit 12.
  • the learning model here is a learning model that has been trained by machine learning to output the region of the treatment instrument 30 and gauze in the photographed image when the photographed image is input.
  • Such a learning model may be configured with any object detection algorithm such as CNN (Convolutional Neural Network), R-CNN (Regions with CNN), Fast R-CNN, Faster R-CNN, Mask R-CNN, SSD (Single Shot Multibox Detector), or YOLO (You Only Look Once), or may be configured by combining several of these models.
  • The control unit 11 of the information processing device 10 can identify the areas of the treatment instrument 30 and gauze in the captured image by inputting the captured image captured by the endoscope 20 into the learning model described above and using the output information from the learning model.
  • detection of gauze in a photographed image may be performed by pattern matching using a template, in addition to using a learning model.
  • In this case, for example, a template indicating the image feature amount of the gauze surface is stored in advance in the storage unit 12, and the control unit 11 can detect the presence of gauze in the captured image by determining whether or not there is an area matching the template in the captured image.
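  • For the template-based detection, a sketch using OpenCV template matching might look like the following; the score threshold and the use of a single template image are assumptions made for illustration.

        import cv2

        def gauze_present(frame_bgr, gauze_template_bgr, score_threshold=0.8):
            # Normalized cross-correlation between the frame and the gauze template.
            result = cv2.matchTemplate(frame_bgr, gauze_template_bgr,
                                       cv2.TM_CCOEFF_NORMED)
            _, max_score, _, _ = cv2.minMaxLoc(result)
            return max_score >= score_threshold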
  • FIG. 16 is a flowchart illustrating an example of a procedure for bleeding detection processing in Modification 6, and FIGS. 17A to 17D are explanatory diagrams of bleeding detection processing in Modification 6.
  • the process shown in FIG. 16 is the process shown in FIG. 2 with steps S81 to S83 added between steps S13 and S14. Description of the same steps as in FIG. 2 will be omitted.
  • The control unit 11 executes the processes of steps S11 to S13 in FIG. 2, and generates a bleeding area image indicating the bleeding area in the current frame (S13). Further, the control unit 11 inputs the photographed image acquired in step S11 to the learning model, and detects the treatment instrument 30 and gauze in the photographed image based on the output information from the learning model (S81). For example, when a photographed image in which three treatment instruments 30 are captured, as shown in FIG. 17A, is input to the learning model, the learning model outputs a treatment instrument image indicating the area of each treatment instrument 30, as shown in FIG. 17B. Therefore, the control unit 11 can grasp the area of each treatment instrument 30 based on the treatment instrument image output from the learning model. Note that the left side of FIG. 17B shows the treatment instrument image indicating the areas of the treatment instruments 30 in the previous frame (frame n-1), and the right side shows the treatment instrument image indicating the areas of the treatment instruments 30 in the current frame (frame n).
  • When the control unit 11 detects the treatment instrument 30 or gauze in the current frame, the control unit 11 identifies the movement area of the detected treatment instrument 30 or gauze from the position in the previous frame to the position in the current frame (S82). Specifically, the control unit 11 compares the areas of the treatment instrument 30 and gauze in the previous frame with the areas of the treatment instrument 30 and gauze in the current frame, and identifies an area existing only in the previous frame as the movement area of the treatment instrument 30 or gauze. For example, in FIG. 17C, the solid lines indicate the areas of the treatment instruments 30 and gauze in the current frame, and the broken lines indicate the areas of the treatment instruments 30 and gauze in the previous frame. In this case, as shown in FIG. 17D, the control unit 11 identifies the areas that exist only in the previous frame as the movement areas of the treatment instruments 30 and gauze. Note that if a treatment instrument image indicating the areas of the treatment instrument 30 and gauze in the previous frame has already been generated and stored in the storage unit 12, the control unit 11 can acquire the areas of the treatment instrument 30 and gauze in the previous frame from the storage unit 12. Alternatively, the control unit 11 may obtain a treatment instrument image indicating the areas of the treatment instrument 30 and gauze in the previous frame by inputting the captured image of the previous frame into the learning model.
  • Next, the control unit 11 excludes pixels included in the identified movement area of the treatment instrument 30 and gauze from the bleeding area image of the current frame generated in step S13 (S83). Thereby, the pixels of the bleeding area included in the movement area of the treatment instrument 30 and gauze can be excluded from the bleeding occurrence determination process. After that, the control unit 11 performs the processing from step S14 onwards. In step S14, the control unit 11 calculates the difference between the bleeding area in the previous frame and the bleeding area in the current frame based on the remaining bleeding area obtained by excluding the pixels included in the movement area of the treatment instrument 30 and gauze from the bleeding area included in the bleeding area image generated in step S13 (S14). Thereby, a bleeding area exposed by the movement of the treatment instrument 30 or gauze can be excluded from the bleeding occurrence determination process, and erroneous detection of the occurrence of bleeding can be suppressed.
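  • Steps S82 and S83 can be sketched as a simple mask operation, assuming the instrument and gauze areas of the previous and current frames are available as binary masks (for example, derived from the object detection model above); the names are illustrative.

        import numpy as np

        def exclude_instrument_movement(bleeding_mask, tool_mask_prev, tool_mask_curr):
            # Movement area = area that existed only in the previous frame (S82).
            movement_area = np.logical_and(tool_mask_prev == 1, tool_mask_curr == 0)
            cleaned = bleeding_mask.copy()
            cleaned[movement_area] = 0  # exclude exposed pixels from step S14 (S83)
            return cleaned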
  • According to this modification, the same effects as in Embodiment 1 described above can be obtained. Furthermore, in this modification, a bleeding area exposed by the movement of the treatment instrument 30 or gauze is not used in the process of determining the occurrence of bleeding, thereby suppressing erroneous detection of the occurrence of bleeding.
  • The configuration of this modification can be combined with the configurations of each modification described above, and the same effect can be obtained even when combined with each modification. Further, also in this modification, the modifications appropriately described in the above-described first embodiment and each modification can be applied.
  • Modification 7: The following describes an information processing device 10 that detects the movement of each pixel between frames that precede and follow in time series and excludes pixels within the bleeding area that have a large amount of movement (movement amount) from the targets used in the process of determining the occurrence of bleeding.
  • A procedure may be performed on the treatment area while moving an organ. For example, if an organ with blood on it is moved, the blood area on the organ also moves, and the area after the movement may be erroneously detected as a new bleeding area. Therefore, in this modification, pixels included in a bleeding area that have a large amount of movement between frames that precede and follow in time series are excluded from the targets when detecting the occurrence of bleeding, thereby suppressing erroneous detection of the occurrence of bleeding.
  • FIG. 18 is a flowchart illustrating an example of a procedure for bleeding detection processing according to modification 7, and FIG. 19 is an explanatory diagram of the bleeding detection processing according to modification 7.
  • the process shown in FIG. 18 is the process shown in FIG. 2 with steps S91 to S93 added between steps S13 and S14. Description of the same steps as in FIG. 2 will be omitted.
  • the control unit 11 executes the processes of steps S11 to S13 in FIG. 2, and generates a bleeding area image indicating the bleeding area in the current frame (S13).
  • the control unit 11 detects the amount of movement of each pixel from the previous frame to the current frame based on the captured image acquired in step S11 and the captured image of the previous frame (S91).
  • For example, the control unit 11 calculates a motion vector (the amount and direction of movement) indicating the movement of each pixel from the previous frame to the current frame by image processing such as optical flow.
  • FIG. 19 is an example showing a motion vector from the previous frame for each pixel in the current frame. In the motion vector, the direction of the arrow indicates the direction of movement, and the length of the arrow indicates the amount of movement.
  • Based on the motion vector of each pixel, the control unit 11 identifies pixels with a large amount of movement, specifically, pixels whose amount of movement is equal to or greater than a predetermined amount (S92). As a result, pixels that have partially moved within the photographed image are identified. Note that the control unit 11 may determine whether the amount of movement is equal to or greater than the predetermined amount only for the pixels included in the bleeding area in the bleeding area image generated in step S13, and thereby identify pixels with a large amount of movement within the bleeding area. Then, the control unit 11 excludes the pixels with a large amount of movement from the pixels included in the bleeding area in the bleeding area image of the current frame generated in step S13 (S93).
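  • Steps S91 to S93 could be sketched with dense optical flow as below; OpenCV's Farneback implementation and the motion threshold of 5 pixels are assumptions, not part of the embodiment.

        import cv2
        import numpy as np

        def exclude_moving_pixels(bleeding_mask, prev_gray, curr_gray, max_motion_px=5.0):
            # Dense optical flow from the previous frame to the current frame (S91).
            flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            motion = np.linalg.norm(flow, axis=2)  # per-pixel movement amount
            cleaned = bleeding_mask.copy()
            cleaned[motion >= max_motion_px] = 0   # S92, S93: drop fast-moving pixels
            return cleaned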
  • In step S14, the control unit 11 calculates the difference between the bleeding area in the previous frame and the bleeding area in the current frame based on the remaining bleeding area obtained by removing the pixels with a large amount of movement from the bleeding area included in the bleeding area image generated in step S13 (S14).
  • According to this modification, the same effects as in Embodiment 1 described above can be obtained. Furthermore, in this modification, for example, a bleeding area that has moved due to the movement of a bloody organ is not used in the process of determining the occurrence of bleeding, thereby suppressing erroneous detection of the occurrence of bleeding.
  • The configuration of this modification can be combined with the configurations of each modification described above, and the same effect can be obtained even when combined with each modification. Further, also in this modification, the modifications appropriately described in the above-described first embodiment and each modification can be applied.
  • Modification 8: The following describes an information processing device 10 in which an area in which blurring (out-of-focus or motion blur) occurs in a captured image captured by the endoscope 20 is excluded from the targets used in the process of determining the occurrence of bleeding. Since the endoscope 20 takes pictures using an image sensor, blurring may occur, and it is difficult to accurately detect a bleeding area in an area where blurring occurs. Therefore, in this modification, the area where blurring occurs in the captured image is identified and excluded from the bleeding area in the captured image, so that it is excluded from the targets when detecting the occurrence of bleeding, and erroneous detection of the occurrence of bleeding is thereby suppressed. Specifically, high-frequency components are extracted from the captured image, and areas in the captured image that do not include high-frequency components are treated as low-frequency areas, thereby identifying the areas where blurring occurs.
  • FIG. 20 is a flowchart showing an example of a procedure for bleeding detection processing in Modification 8, and FIGS. 21A to 21E are explanatory diagrams of bleeding detection processing in Modification 8.
  • the process shown in FIG. 20 is the process shown in FIG. 2 with steps S101 to S104 added between steps S13 and S14. Description of the same steps as in FIG. 2 will be omitted.
  • the control unit 11 executes the processes of steps S11 to S13 in FIG. 2, and generates a bleeding area image indicating the bleeding area in the current frame (S13).
  • the control unit 11 performs smoothing processing using a smoothing filter such as a Gaussian filter on the captured image acquired in step S11 (S101).
  • a smoothed image as shown in FIG. 21B is generated from the photographed image shown in FIG. 21A, for example.
  • the control unit 11 subtracts the smoothed image from the photographed image to extract high frequency components in the photographed image (S102).
  • The control unit 11 subtracts the pixel value of each pixel of the smoothed image from the pixel value of each pixel of the captured image, and generates an image having only high-frequency components as shown in FIG. 21C.
  • the control unit 11 identifies a low frequency region in the captured image based on the image having only high frequency components (S103). For example, the control unit 11 performs binarization processing on an image having only high frequency components, and generates a binary image as shown in FIG. 21D. As a result, a binary image in which high frequency components are converted to pixels with a pixel value of 1 and low frequency components are converted to pixels with a pixel value of 0 is obtained.
  • The control unit 11 divides the binary image into cells of, for example, 8 pixels x 8 pixels or 16 pixels x 16 pixels; in each cell, if the ratio of pixels with a pixel value of 1 (the pixel value indicating a high-frequency component) is equal to or higher than a predetermined ratio (for example, 50% to 80%), the pixel values of all pixels included in the cell are converted to 1, and if the ratio is less than the predetermined ratio, the pixel values of all pixels included in the cell are converted to 0, thereby generating an image as shown in FIG. 21E. Then, based on the image shown in FIG. 21E, the control unit 11 specifies an area where there is no high-frequency component as a low-frequency area in the photographed image. Note that, since there is a possibility that high-frequency components are not included even in an appropriate bleeding area, the control unit 11 considers the distribution of low-frequency areas in the image shown in FIG. 21E and determines whether each area is an appropriate bleeding area or an area where blurring occurs. For example, in the image shown in FIG. 21E, the control unit 11 determines whether the outline of a low-frequency area is blurred; if the outline is not blurred, the area is determined to be an appropriate bleeding area, and if the outline is blurred, the area is determined to be an area where blurring occurs. As a result, in the image shown in FIG. 21E, the area surrounded by the broken line is identified as a low-frequency area where blurring occurs.
  • Then, the control unit 11 excludes the low-frequency area identified in step S103 from the bleeding area in the bleeding area image of the current frame generated in step S13 (S104). Thereby, among the pixels included in the bleeding area, pixels in an area where blurring occurs can be excluded from the target of the bleeding occurrence determination process. After that, the control unit 11 performs the processing from step S14 onwards. In step S14, the control unit 11 calculates the difference between the bleeding area in the previous frame and the bleeding area in the current frame based on the remaining bleeding area obtained by excluding the blurred area from the bleeding area included in the bleeding area image generated in step S13 (S14). As a result, an area in which blurring has occurred among the bleeding areas can be excluded from the bleeding occurrence determination process, and erroneous detection of the occurrence of bleeding can be suppressed.
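  • A rough sketch of steps S101 to S104 is given below; the 8 x 8 cell size, the 50 % ratio and the binarization threshold are assumed values, and the outline check for distinguishing an appropriate bleeding area from a blurred area is omitted.

        import cv2
        import numpy as np

        def low_frequency_mask(gray, cell=8, ratio=0.5, hf_threshold=10):
            smoothed = cv2.GaussianBlur(gray, (9, 9), 0)            # S101
            high_freq = cv2.absdiff(gray, smoothed) > hf_threshold  # S102 + binarize
            mask = np.zeros_like(gray, dtype=np.uint8)
            h, w = gray.shape
            for y in range(0, h, cell):
                for x in range(0, w, cell):
                    block = high_freq[y:y + cell, x:x + cell]
                    if block.mean() < ratio:              # few high-frequency pixels
                        mask[y:y + cell, x:x + cell] = 1  # candidate blurred cell
            return mask

        # S104: bleeding_mask[low_frequency_mask(gray) == 1] = 0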
  • According to this modification, the same effects as in Embodiment 1 described above can be obtained. Furthermore, in this modification, a bleeding area where blurring occurs is not used in the bleeding occurrence determination process, thereby suppressing erroneous detection of the occurrence of bleeding.
  • The configuration of this modification can be combined with the configurations of each modification described above, and the same effect can be obtained even when combined with each modification. Further, also in this modification, the modifications appropriately described in the above-described first embodiment and each modification can be applied.
  • The control unit 11 may perform processing to determine, based on the size or shape of each bleeding area in the bleeding area image generated in step S13, whether the area is a true bleeding area or a non-bleeding area. For example, a bleeding area tends to spread from the bleeding point, so immediately after bleeding occurs, its area is small and it is not widely scattered. Conversely, an area that is excessively large or widely scattered at the time of detection is unlikely to be a bleeding area that has just occurred.
  • the control unit 11 determines whether or not each bleeding area in the bleeding area image is in any of the states described above, and determines that a bleeding area that is not in any of the states is a true bleeding area. Then, the control unit 11 executes the processes from step S14 onward for the bleeding area determined to be a true bleeding area.
  • the control unit 11 determines that a bleeding area in any of the above states is a non-bleeding area, and excludes the bleeding area from the bleeding occurrence determination target.
  • Thereby, the true bleeding area can be identified from among the red areas in the photographed image, so erroneous detection of the bleeding area is suppressed, and as a result, erroneous detection of the occurrence of bleeding is suppressed.
  • the configurations of the above-described modifications can be used in appropriate combinations as necessary, thereby making it possible to further improve the accuracy of determining the occurrence of bleeding.
  • the information processing apparatus of this embodiment has the same configuration as the information processing apparatus 10 of Embodiment 1, so a description of the configuration will be omitted.
  • the information processing device 10 of this embodiment stores in the storage unit 12 a learning model 12M (see FIG. 22) that has been trained with training data, for example, by machine learning.
  • The learning model 12M of this embodiment is trained so that, when the pixel values of a pixel of a captured image captured by the endoscope 20 are input, it outputs whether this pixel is a pixel included in a bleeding area (bleeding pixel) or a pixel included in a non-bleeding area.
  • the learning model 12M is assumed to be used as a program module that functions as part of artificial intelligence software.
  • The learning model 12M performs predetermined calculations on input values and outputs the calculation results, and data such as the coefficients and threshold values of the functions that define these calculations are stored in the storage unit 12 as the learning model 12M.
  • FIG. 22 is an explanatory diagram showing a configuration example of the learning model 12M.
  • The learning model 12M shown in FIG. 22 receives as input the pixel value of the R component (R component value), the pixel value of the G component (G component value), and the pixel value of the B component (B component value) of one pixel in a captured image, performs a calculation for classifying, based on the input RGB component values, whether the pixel is a pixel included in a bleeding area or a pixel included in a non-bleeding area, and has been trained to output the calculated result.
  • The learning model 12M may be any function that outputs, from the RGB component values of one pixel, the probability that the pixel is included in a bleeding area, and is constructed using, for example, an algorithm such as logistic regression or linear regression, or a neural network.
  • Specifically, the learning model 12M has an input layer into which the RGB component values of one pixel are input, an intermediate layer that performs a calculation for classifying, based on the input RGB component values, whether the pixel is a pixel included in a bleeding area or a pixel included in a non-bleeding area, and an output layer that outputs information indicating whether the pixel is included in a bleeding area or a non-bleeding area based on the calculation result of the intermediate layer.
  • the intermediate layer calculates an output value from the RGB component values input via the input layer using various functions, threshold values, and the like.
  • The output layer has two output nodes associated with the bleeding area and the non-bleeding area, respectively; one output node outputs the probability (confidence) that the pixel should be classified as being included in a bleeding area, and the other outputs the probability (confidence) that the pixel should be classified as being included in a non-bleeding area.
  • the output value from each output node of the output layer is, for example, a value between 0 and 1, and the sum of the probabilities output from each output node is 1.0 (100%).
  • In this way, when the RGB component values of a pixel in an image are input, the learning model 12M of this embodiment outputs output values (confidences) indicating whether the pixel is included in a bleeding area or a non-bleeding area.
  • The information processing device 10 identifies the output node that outputs the larger output value (confidence) among the output values from the output nodes, and classifies the pixel of the input pixel values as a pixel included in a bleeding area or a pixel included in a non-bleeding area according to the area (bleeding area or non-bleeding area) associated with the identified output node.
  • Note that, instead of the configuration in which the output layer of the learning model 12M has a plurality of output nodes that output the probability that the pixel should be classified as a pixel included in a bleeding area and the probability that it should be classified as a pixel included in a non-bleeding area, a configuration having one output node that outputs information indicating the area (bleeding area or non-bleeding area) with the higher classification probability (confidence) may be used.
  • The learning model 12M can be generated by machine learning using training data that include RGB component values (pixel values) for training and information (a correct label) indicating whether the pixel is included in a bleeding area or a non-bleeding area.
  • the training data is generated by an expert such as a doctor from an annotation image in which a bleeding area or a non-bleeding area is assigned to each pixel of an image taken with the endoscope 20.
  • The training data are generated by adding a correct label indicating a bleeding area to the RGB component values of pixels to which a bleeding area is assigned in the annotation image, and by adding a correct label indicating a non-bleeding area to the RGB component values of pixels to which a non-bleeding area is assigned.
  • The learning model 12M learns so that, when the RGB component values included in the training data are input, the output value from the output node corresponding to the correct label (bleeding area or non-bleeding area) included in the training data approaches 1, and the output values from the other output nodes approach 0.
  • the learning model 12M performs calculations in the intermediate layer based on input RGB component values, and calculates output values from each output node.
  • In the learning process, the learning model 12M compares the calculated output value of each output node with the value corresponding to the correct label (1 for the output node corresponding to the correct label, 0 for the other output nodes), and optimizes the parameters used for the calculation processing in the intermediate layer so that the two approximate each other. The parameters include, for example, weights (coupling coefficients) between nodes in the intermediate layer.
  • the parameter optimization method is not particularly limited, but an error backpropagation method, steepest descent method, etc. can be used.
  • As a result, a learning model 12M is obtained that, when RGB pixel values are input, predicts whether the pixel is in a bleeding area or a non-bleeding area and outputs the prediction result.
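  • As one concrete possibility, the pixel classifier described above could be realized with logistic regression as sketched below; scikit-learn and the variable names are assumptions chosen for illustration, and the two probabilities returned for a pixel sum to 1 as in the description.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def train_pixel_classifier(rgb_values, labels):
            # rgb_values: (N, 3) training R, G, B component values
            # labels:     (N,)   correct labels, 1 = bleeding pixel, 0 = non-bleeding pixel
            model = LogisticRegression(max_iter=1000)
            model.fit(rgb_values, labels)
            return model

        def classify_pixel(model, rgb):
            # Returns (probability of non-bleeding, probability of bleeding).
            return model.predict_proba(np.asarray(rgb, dtype=float).reshape(1, -1))[0]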
  • Learning of the learning model 12M may be performed by another learning device.
  • the trained learning model 12M generated by performing learning on another learning device is downloaded from the learning device to the information processing device 10 via the Internet or the portable storage medium 10a, and is stored in the storage unit 12, for example.
  • the learning model 12M is not limited to the configuration shown in FIG. 22.
  • For example, instead of the RGB component values, feature quantities such as a value obtained by subtracting the G component value from the R component value (R-G), a value obtained by dividing the R component value by the G component value (R/G), and a value obtained by dividing the R component value by the B component value (R/B) may be input, or a combination of multiple of these feature quantities may be input.
  • the learning model 12M may have a configuration in which each feature amount of one pixel in a photographed image is inputted, or a configuration in which in addition to the target pixel, each feature amount of pixels in the vicinity of the target pixel is inputted.
  • In addition, the learning model 12M may be configured using an algorithm such as SVM (support vector machine), decision tree, or random forest, or may be configured by combining multiple algorithms.
  • FIG. 23 is a flowchart showing an example of the procedure for generating the learning model 12M.
  • the following process is executed by the control unit 11 of the information processing device 10 according to the program 12P stored in the storage unit 12, but may be executed by another learning device.
  • the control unit 11 first generates training data based on the annotation image, and performs a learning process on the learning model 12M using the generated training data. It is assumed that the annotation image is stored in the storage unit 12 in advance, for example.
  • the control unit 11 of the information processing device 10 reads one annotation image from the storage unit 12 (S111).
  • the control unit 11 extracts the pixel value of one pixel in the annotation image (S112), and determines whether the extracted pixel is a pixel to which a bleeding area has been assigned (S113). If it is determined that the pixel is a pixel to which a bleeding area is assigned (S113: YES), the control unit 11 generates training data by adding a correct label (bleeding label) indicating a bleeding area to the extracted pixel value ( S114).
  • If it is determined that the pixel is not a pixel to which a bleeding area is assigned (S113: NO), the control unit 11 generates training data by adding a correct label (non-bleeding label) indicating a non-bleeding area to the extracted pixel value (S115). The control unit 11 stores the generated training data in, for example, a training DB (not shown) prepared in the storage unit 12 (S116).
  • Next, the control unit 11 determines whether there are any pixels (unprocessed pixels) that have not been used in the training data generation process among the pixels of the annotation image read out in step S111 (S117). If it is determined that there are unprocessed pixels (S117: YES), the control unit 11 returns to the process of step S112 and performs the processes of steps S112 to S116 on an unprocessed pixel. As a result, training data in which the pixel value of each pixel of the annotation image is given a correct label (information regarding the bleeding area) indicating whether the pixel is a pixel in a bleeding area or a pixel in a non-bleeding area are accumulated.
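  • The accumulation of training data in steps S112 to S116 can be sketched as follows, assuming each annotation image is available as a binary mask aligned with its captured image; representing the training DB as two arrays is an illustrative simplification.

        import numpy as np

        def build_training_data(captured_rgb, annotation_mask):
            # captured_rgb:    (H, W, 3) captured image
            # annotation_mask: (H, W) mask, non-zero = bleeding area assigned
            pixels = captured_rgb.reshape(-1, 3)                         # RGB per pixel
            labels = (annotation_mask.reshape(-1) > 0).astype(np.uint8)  # correct labels
            return pixels, labels  # stored in the training DB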
  • If it is determined that there are no unprocessed pixels (S117: NO), the control unit 11 determines whether there is an annotation image that has not been used in the training data generation process (unprocessed image) among the annotation images stored in the storage unit 12 (S118). If it is determined that there is an unprocessed image (S118: YES), the control unit 11 returns to the process of step S111 and performs the processes of steps S111 to S117 on the unprocessed annotation image. Thereby, training data used for learning of the learning model 12M can be generated from the prepared annotation images and stored in the training DB.
  • If it is determined that there are no unprocessed images (S118: NO), the control unit 11 performs learning of the learning model 12M using the training data accumulated in the training DB as described above. Specifically, the control unit 11 reads out one of the training data accumulated in the training DB through the process described above (S119). Then, the control unit 11 performs a learning process on the learning model 12M based on the training data (S120).
  • the control unit 11 inputs RGB component values included in the training data to the learning model 12M, and acquires an output value output from the learning model 12M by inputting the RGB component values.
  • the learning model 12M performs calculations based on input RGB component values and calculates output values from each output node.
  • The control unit 11 compares the output value of each output node output from the learning model 12M with the value corresponding to the correct label included in the training data (1 for the output node corresponding to the correct label, 0 for the other output nodes), and trains the learning model 12M so that the two approximate each other.
  • the learning model 12M optimizes parameters used for calculation processing in the intermediate layer. For example, the control unit 11 optimizes parameters such as weights (coupling coefficients) between nodes in the intermediate layer using an error backpropagation method that sequentially updates parameters from the output layer to the input layer of the learning model 12M.
  • the control unit 11 determines whether or not there is unprocessed training data that has not been subjected to learning processing among the training data stored in the training DB (S121). If it is determined that there is unprocessed training data (S121: YES), the control unit 11 returns to the process of step S119, and performs the processes of steps S119 to S120 on the training data that has not been subjected to the learning process. If it is determined that there is no unprocessed training data (S121: NO), the control unit 11 ends the series of processes.
  • the learning model 12M can be further optimized by repeatedly performing learning processing using training data as described above. Further, by relearning the already trained learning model 12M using the above-described learning process, a learning model 12M with further improved estimation accuracy can be generated.
  • the training data generation process in steps S111 to S118 and the learning model 12M generation process in steps S119 to S121 may be performed by separate devices.
  • FIG. 24 is a flowchart showing an example of a bleeding detection processing procedure according to the second embodiment
  • FIGS. 25A to 26 are explanatory diagrams of the bleeding detection processing according to the second embodiment.
  • the process shown in FIG. 24 is the process shown in FIG. 2 with step S131 added instead of step S12. Description of the same steps as in FIG. 2 will be omitted.
  • The control unit 11 inputs the pixel values (RGB component values) of each pixel of the captured image acquired in step S11 into the learning model 12M, and classifies each pixel into a bleeding area or a non-bleeding area based on the output values from the learning model 12M (S131). Specifically, the control unit 11 identifies the larger of the output values from the learning model 12M, and specifies the area (bleeding area or non-bleeding area) corresponding to the output node that outputs the identified value as the area of the pixel.
  • the control unit 11 (bleeding area acquisition unit) generates a bleeding area image indicating the bleeding area in the captured image based on the classification result of each pixel (S13).
  • For example, the control unit 11 generates the bleeding area image by assigning white (for example, 1) to pixels classified as the bleeding area and black (for example, 0) to pixels classified as the non-bleeding area.
  • the control unit 11 generates a bleeding area image as shown in FIG. 25B, for example, based on the photographed image shown in FIG. 25A.
  • The control unit 11 performs the above-described processing of steps S11, S131, and S13 on the captured images of the frames sequentially transmitted from the endoscope control device 21, and generates a bleeding area image from each captured image.
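  • For step S131, a minimal per-frame sketch that turns the per-pixel classifier output into a bleeding area image could look as follows, reusing the logistic-regression stand-in sketched earlier; the names are illustrative.

        import numpy as np

        def bleeding_area_image(model, frame_rgb):
            # Classify every pixel and return 1 for bleeding area, 0 for non-bleeding
            # area, i.e. the bleeding area image of steps S131 and S13.
            h, w, _ = frame_rgb.shape
            probs = model.predict_proba(frame_rgb.reshape(-1, 3).astype(float))
            bleeding = (probs[:, 1] >= probs[:, 0]).astype(np.uint8)  # larger confidence
            return bleeding.reshape(h, w)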
  • In step S16, the control unit 11 may, for example, use the moving average of the differences in the bleeding area calculated up to each imaging time as a measured value, calculate a value of +2σ with respect to the average value of the measured values, and set this value as the determination threshold for determining whether bleeding has occurred.
  • The solid line in FIG. 26 shows the time-series change in the difference in the bleeding area between frames calculated in step S14, and the broken lines in FIG. 26 show the time-series changes of the -2σ and +2σ values with respect to the moving average of the differences in the bleeding area.
  • The control unit 11 determines that bleeding has occurred when the difference in the bleeding area between frames is equal to or greater than the value of +2σ with respect to the moving average of the differences in the bleeding area. Therefore, the control unit 11 sets each value of the upper (+2σ) time-series data indicated by the broken line in FIG. 26 as the threshold used when determining the presence or absence of bleeding occurrence at each imaging time. Thereby, the presence or absence of bleeding occurrence is determined based on the bleeding area accurately detected from each captured image using the learning model 12M.
  • Alternatively, the value of the moving average of the differences in the bleeding area +4σ may be used as the threshold for determining the presence or absence of bleeding. With such a configuration, also in this embodiment, erroneous detection of bleeding occurrence is suppressed, and the detection accuracy of bleeding occurrence is improved.
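  • The determination threshold for step S16 can be sketched in the same way as the mist threshold of Modification 5, this time on the +2σ side; the window length is an assumed parameter.

        import numpy as np

        def bleeding_started(area_diffs, window=30, k=2.0):
            # True when the latest difference is at or above moving average + k * sigma.
            if len(area_diffs) < window + 1:
                return False
            history = np.array(area_diffs[-(window + 1):-1], dtype=float)
            return area_diffs[-1] >= history.mean() + k * history.std()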
  • The endoscopic surgery system of this embodiment performs the same processing as the information processing device 10 of Embodiment 1, except that the learning model 12M is used when detecting a bleeding area in a photographed image.
  • In this embodiment, the learning model 12M classifies each pixel in the photographed image into a bleeding area or a non-bleeding area according to the characteristics of the pixel. Therefore, by generating annotation images from photographed images and training the learning model 12M with them, the user can detect bleeding areas in photographed images with high accuracy using the learning model 12M. Therefore, also in this embodiment, the presence or absence of bleeding can be detected with high accuracy based on the photographed image taken with the endoscope 20.
  • Also in this embodiment, Modifications 1 to 8 described in Embodiment 1 can be applied. Therefore, the information processing device 10 of this embodiment can realize a configuration that uses the learning model 12M to accurately detect bleeding areas in photographed images and, by applying one or more of the configurations of Modifications 1 to 8, more accurately detects the bleeding area in each frame and more accurately calculates the difference in the bleeding area between frames. Also in this embodiment, the modifications appropriately described in Embodiment 1 and each modification described above can be applied.
  • the learning model 12M of this embodiment is configured to classify whether each pixel is a bleeding area pixel or a non-bleeding area pixel based on the pixel value of each pixel in a captured image.
  • the learning model 12M may be configured to detect a bleeding area in the captured image when the captured image is input.
  • the learning model 12M can be configured with an object detection algorithm such as CNN, R-CNN, Fast R-CNN, SSD, or YOLO.
  • the learning model 12M may be configured with an algorithm that implements semantic segmentation, such as SegNet, FCN (Fully Convolutional Network), and U-Net.
  • Next, an information processing device 10 of Embodiment 3, which detects a bleeding area in a captured image using the learning model 12M as in Embodiment 2, will be described.
  • the information processing apparatus of this embodiment has the same configuration as the information processing apparatus 10 of Embodiment 2, so a description of the configuration will be omitted.
  • the information processing device 10 of this embodiment detects a bleeding area in a photographed image using the learning model 12M, and then generates an image showing the difference (increase) in the bleeding area between frames that follow in time series.
  • the information processing device 10 generates an image showing a bleeding area in the current frame that was not a bleeding area in the previous frame.
  • Specifically, the information processing device 10 divides the image showing the difference (increase) in the bleeding area into cells of, for example, 8 pixels x 8 pixels or 16 pixels x 16 pixels, and counts, in each cell, the number of pixels (bleeding pixels) whose pixel value is 1 (the pixel value indicating a bleeding area). Then, the information processing device 10 determines whether or not each cell is a bleeding area according to the ratio of the number of bleeding pixels to the number of pixels included in the cell, and determines from the result whether or not bleeding has occurred.
  • FIG. 27 is a flowchart illustrating an example of a bleeding detection processing procedure according to the third embodiment
  • FIGS. 28A to 28C are explanatory diagrams of the bleeding detection processing according to the third embodiment.
  • the process shown in FIG. 27 is the process shown in FIG. 24 with steps S141 to S144 added instead of steps S14 to S16. Description of the same steps as in FIGS. 2 and 24 will be omitted.
  • The control unit 11 executes the processes of steps S11, S131, and S13 in FIG. 24, and generates a bleeding area image indicating the bleeding area in the current frame (S13). Then, the control unit 11 (extraction unit) identifies the area (increased bleeding area) that was not a bleeding area in the previous frame among the bleeding areas in the current frame (frame to be processed), and generates an image showing the increase (difference) in the bleeding area between the preceding and succeeding frames (S141). For example, the control unit 11 generates an image in which the pixel values of the pixels in the increased bleeding area are set to 1 and the pixel values of the other pixels are set to 0, as shown in FIG. 28A.
  • Next, the control unit 11 divides the image showing the increase in the bleeding area into cells of, for example, 8 pixels x 8 pixels or 16 pixels x 16 pixels, and counts, for each cell, the number of pixels (bleeding pixels) whose pixel value is 1 (the pixel value indicating a bleeding area) (S142).
  • FIG. 28B shows an image obtained by converting the pixel values of all pixels in each cell of the image shown in FIG. 28A into pixel values according to the number of bleeding pixels in the cell; the whiter the cell, the larger the number of bleeding pixels, so that the number of bleeding pixels in each cell is represented by a monochrome gradation.
  • Next, the control unit 11 calculates, for each cell, the ratio of the number of bleeding pixels to the number of pixels included in the cell in the image showing the increase in the bleeding area (S143). Then, the control unit 11 (determination unit) determines each cell as a bleeding area or a non-bleeding area based on the ratio of the number of bleeding pixels calculated for the cell (S144). Here, for each imaging time, the control unit 11 determines whether the ratio of the number of bleeding pixels in each cell calculated in step S143 is equal to or higher than a predetermined ratio (for example, 50% to 80%); if the ratio is equal to or higher than the predetermined ratio, the cell is determined to be a bleeding area, and if it is less than the predetermined ratio, the cell is determined to be a non-bleeding area.
  • As a result of the determination in step S144, the control unit 11 generates a binary image as shown in FIG. 28C. Then, the control unit 11 determines whether bleeding has occurred based on the determination result in step S144 (S17). In this embodiment, the control unit 11 determines whether or not there is a cluster of cells determined to be a bleeding area in step S144; if there is such a cluster, it determines that bleeding has occurred, and if there is not, it determines that bleeding has not occurred. For example, the control unit 11 counts the number of adjacent cells among the cells determined to be a bleeding area, and if the number is equal to or greater than a predetermined number, determines the area (cluster of cells) including these cells to be a bleeding area. Therefore, in the image shown in FIG. 28C, it is determined for each cell whether the cell is a bleeding area, and if a predetermined number or more of cells determined to be bleeding areas are adjacent to each other, this area is identified as a bleeding area; based on the identified result, the presence or absence of bleeding occurrence is detected. After that, the control unit 11 performs the processing from step S18 onwards.
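  • Steps S141 to S144 and the cluster check of step S17 might be sketched as below; the cell size, ratio and minimum cluster size are assumed values, and connected-component labeling is used as a stand-in for counting adjacent cells.

        import numpy as np
        from scipy import ndimage

        def detect_bleeding_by_cells(prev_mask, curr_mask, cell=16, ratio=0.5, min_cells=3):
            increase = np.logical_and(curr_mask == 1, prev_mask == 0)   # S141
            rows, cols = increase.shape[0] // cell, increase.shape[1] // cell
            cell_map = np.zeros((rows, cols), dtype=np.uint8)
            for i in range(rows):
                for j in range(cols):
                    block = increase[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
                    if block.mean() >= ratio:   # S142-S144: ratio of bleeding pixels
                        cell_map[i, j] = 1      # cell determined to be a bleeding area
            labeled, n = ndimage.label(cell_map)  # S17: clusters of adjacent cells
            return any(np.sum(labeled == k) >= min_cells for k in range(1, n + 1))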
  • In this embodiment, as described above, the number of pixels that change from non-bleeding pixels to bleeding pixels is counted for each cell in the captured image, and a cell in which the ratio of this number of pixels (bleeding pixel count) is equal to or higher than a predetermined ratio is identified as a bleeding area.
  • the processing is the same as the processing performed by the information processing device 10 of the second embodiment, so that the same effects as the second embodiment can be obtained.
  • In addition, since a bleeding area or a non-bleeding area is determined for each cell, a bleeding area can be detected with high accuracy by determining a cell in which pixels determined to be a bleeding area are gathered as a bleeding area, and erroneous detection of bleeding areas can be suppressed.
  • In Embodiment 1, a bleeding area in a photographed image is identified based on a difference image of color components in the photographed image (for example, the difference between the R component and the G component, or the difference between the R component and the B component), and the occurrence of bleeding is determined based on the identified bleeding area. In Embodiment 2, the learning model 12M is used to identify a bleeding area in a captured image, and the occurrence of bleeding is determined based on the identified bleeding area. In Embodiment 3, a bleeding area or a non-bleeding area is determined for each cell according to the ratio of the number of bleeding pixels, a bleeding area in the photographed image is identified based on the determination result for each cell, and the occurrence of bleeding is determined based on the identified bleeding area. The following describes an information processing device 10 that executes at least one of these three determination processes and identifies the presence or absence of bleeding occurrence based on the result of the executed determination process.
  • FIG. 29 is a flowchart illustrating an example of a bleeding detection processing procedure according to the fourth embodiment.
  • the process shown in FIG. 29 is the process shown in FIG. 2 with steps S151 to S153 added instead of steps S11 to S16. Description of the same steps as in FIG. 2 will be omitted.
  • the control unit 11 selects a determination process for determining the occurrence of bleeding based on the captured image to be processed (S151). For example, the control unit 11 selects at least one of a plurality of types of determination processing depending on the type, format, imaging method, or surgical method of the image (video) taken by the endoscope 20.
  • As the process for determining the occurrence of bleeding, the process shown in FIG. 2 described in Embodiment 1 (first determination process), the process shown in FIG. 24 described in Embodiment 2 (second determination process), and the process shown in FIG. 27 described in Embodiment 3 (third determination process) can be executed, and the control unit 11 selects at least one of these three determination processes.
  • the control unit 11 executes the selected determination process on the photographed image (S152).
  • For example, when the first determination process is selected, the control unit 11 executes the processes of steps S11 to S18 in FIG. 2, and determines whether bleeding has occurred based on the difference in the bleeding area between frames that precede and follow in time series.
  • When the second determination process is selected, the control unit 11 executes the processes of steps S11, S131, and S13 to S18 in FIG. 24, detects the bleeding area in each captured image using the learning model 12M, and determines the presence or absence of bleeding based on the difference (increase amount) in the bleeding area between frames that precede and follow in time series.
  • When the third determination process is selected, the control unit 11 executes the processes of steps S11, S131, S13, S141 to S144, and S17 to S18 in FIG. 27, and determines the presence or absence of bleeding based on the bleeding area image in which the difference (increase amount) in the bleeding area is determined for each cell.
  • Based on the determination result of each executed determination process, the control unit 11 calculates the probability (certainty) that it should be determined that bleeding has occurred, or the probability (certainty) that it should be determined that bleeding has not occurred (S153). For example, if three types of determination processes are executed and two of them yield a determination result that bleeding has occurred, the control unit 11 calculates 2/3 as the certainty that bleeding has occurred. Note that the determination results of the determination processes may be weighted to calculate the certainty that bleeding has occurred or the certainty that bleeding has not occurred.
  • Then, the control unit 11 identifies (determines) the presence or absence of bleeding in the current frame based on the calculated certainty (S17). For example, the control unit 11 specifies the determination result as the occurrence of bleeding when the certainty of the occurrence of bleeding is equal to or higher than a predetermined value (for example, a value of 0.5 to 0.8), and specifies the determination result as no occurrence of bleeding when it is less than the predetermined value. After that, the control unit 11 performs the processing from step S18 onwards.
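  • The aggregation in steps S153 and S17 can be sketched as a simple, optionally weighted vote; the equal weights and the 0.5 threshold below are assumptions within the ranges mentioned above.

        def bleeding_decision(results, weights=None, threshold=0.5):
            # results: list of booleans from the executed determination processes
            #          (True = the process determined that bleeding has occurred).
            if weights is None:
                weights = [1.0] * len(results)
            certainty = sum(w for r, w in zip(results, weights) if r) / sum(weights)
            return certainty >= threshold, certainty  # final decision and certainty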
  • In this embodiment, the information processing apparatus 10 determines whether or not a hemostasis operation has been performed when the occurrence of bleeding is detected.
  • The information processing device 10 of this embodiment stores, in the storage unit 12, a learning model similar to that of Modification 6.
  • The learning model of this embodiment is configured to output, when a captured image is input, the area of the treatment tool 30 used for hemostasis in the captured image.
  • Alternatively, the learning model 12M of this embodiment may be configured to receive a plurality of captured images (video) as input, perform a calculation to classify whether or not a hemostasis operation is being performed in the captured images based on the input captured images, and output the calculated result.
  • The learning model in this case can be constructed using algorithms such as an RNN (Recurrent Neural Network), LSTM (Long Short-Term Memory), or Transformer.
  • FIG. 30 is a flowchart illustrating an example of a bleeding detection processing procedure according to the fifth embodiment.
  • The process shown in FIG. 30 is the process shown in FIG. 2 with steps S161 to S163 added between steps S18 and S19. Description of the steps that are the same as in FIG. 2 will be omitted. Note that in FIG. 30, illustration of the steps of FIG. 2 other than steps S17 to S19 is omitted.
  • After executing the process of step S18, the control unit 11 determines, based on the captured image acquired in step S11, whether or not execution of a hemostasis operation has been detected (S161).
  • For example, the control unit 11 inputs the captured image into the learning model, and determines whether or not a treatment tool 30 for hemostasis (hemostasis device) is present in the captured image based on the output information from the learning model.
  • When the learning model is configured to determine whether or not a hemostasis operation is being performed based on the input captured images, the control unit 11 may determine whether or not a hemostasis operation is being performed based on the output information from the learning model.
  • If it is determined that execution of the hemostasis operation has not been detected (S161: NO), the control unit 11 repeats the process of step S161 until execution of the hemostasis operation is detected.
  • Here, the control unit 11 continues the process of determining whether or not execution of a hemostasis operation has been detected, based on the captured images sequentially transmitted from the endoscope control device 21. If it is determined that execution of the hemostasis operation has been detected (S161: YES), the control unit 11 determines whether or not the end of the hemostasis operation has been detected (S162).
  • For example, the control unit 11 inputs the captured image acquired from the endoscope control device 21 into the learning model, and determines whether the hemostasis treatment tool 30 has disappeared from the captured image based on the output information from the learning model. If it is determined that the end of the hemostasis operation has not been detected (S162: NO), the control unit 11 repeats the process of step S162 until the end of the hemostasis operation is detected. Here, the control unit 11 continues the process of determining whether or not the end of the hemostasis operation has been detected, based on the captured images sequentially transmitted from the endoscope control device 21. Note that at this time, the control unit 11 may, for example, perform a process of notifying medical personnel that a hemostasis operation is in progress.
  • If it is determined that the end of the hemostasis operation has been detected (S162: YES), the control unit 11 determines whether a predetermined time has elapsed since then (S163). If the control unit 11 determines that the predetermined time has not elapsed (S163: NO), the process returns to step S162, and the process of detecting the end of the hemostasis operation is repeated until the predetermined time elapses after the end of the hemostasis operation was first detected.
  • The predetermined time here is a time long enough to determine that hemostasis has been successful.
  • If it is determined that the predetermined time has elapsed (S163: YES), the control unit 11 determines that hemostasis has been successful and proceeds to the process of step S19. At this time, the control unit 11 may notify the medical personnel that hemostasis has been successful. Thereby, when the occurrence of bleeding is detected based on the captured images, it can be determined whether a hemostasis operation has been performed. It is therefore possible to objectively judge state transitions such as a bleeding state, hemostasis in progress, and successful hemostasis, and to notify the medical personnel of them. Medical personnel, including the surgeon, need only carry out the optimal treatment for each state, which makes it possible to support medical personnel and reduce their burden.
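  • The monitoring flow of steps S161 to S163 can be sketched as follows; `detect_hemostasis_tool` stands in for the learning-model inference, and the 10-second settling period is an assumption, since the concrete value of the predetermined time is not given in this excerpt.

```python
import time

def monitor_hemostasis(frames, detect_hemostasis_tool, settle_seconds: float = 10.0) -> bool:
    """Wait for a hemostasis operation to start (S161), track it while the tool
    is visible (S162), and report success once the tool has been absent for the
    settling period (S163). `frames` is an iterator of captured images and
    `detect_hemostasis_tool(frame)` returns True while the hemostasis tool is visible."""
    in_operation = False
    absent_since = None
    for frame in frames:
        tool_present = detect_hemostasis_tool(frame)
        if not in_operation:
            in_operation = tool_present          # S161: wait for the operation to start
            continue
        if tool_present:                         # S162: operation still in progress
            absent_since = None
            continue
        if absent_since is None:                 # end of the operation first detected
            absent_since = time.monotonic()
        elif time.monotonic() - absent_since >= settle_seconds:
            return True                          # S163: hemostasis judged successful
    return False
```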

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Processing (AREA)

Abstract

Provided are a program and the like that make it possible to detect the occurrence of bleeding with high accuracy from images captured during surgery. A computer acquires a plurality of captured images, each having a plurality of color components, obtained by photographing a treated area in time series. For each of the acquired captured images, the computer acquires a bleeding area image indicating a bleeding area in the captured image on the basis of the difference between the color components. The computer then calculates a difference in bleeding area between the bleeding area images acquired for chronologically successive captured images, and determines the presence or absence of bleeding from the treated area on the basis of the difference in bleeding area.

Description

Program, information processing method, and information processing device

The present disclosure relates to a program, an information processing method, and an information processing device.
In the medical field, surgery using an endoscope (rigid scope) is performed as minimally invasive surgery to reduce the burden on patients. In laparoscopic surgery, for example, the operator inserts a laparoscope (endoscope) and treatment tools such as forceps into the patient's body through holes made in the patient's abdomen, and performs the procedure while checking, on a monitor, images of the treatment site and the forceps and the like captured by the laparoscope. Patent Document 1 discloses a technique for extracting and displaying useful portions when a recorded medical video is displayed. In the technology disclosed in Patent Document 1, points at which a change exceeding a certain level occurs, such as the point at which a surgical tool is used or the point at which bleeding starts, are detected as feature points from a medical video or medical information, and when the video is displayed, it is displayed with reference to the detected feature points.

Japanese Patent Application Publication No. 2011-36370

In Patent Document 1, a region whose area changes, or a region with movement, in a series of video frames is detected as a motion region, and among the detected motion regions, a motion region whose area increases with the passage of time, whose area continues to increase for a certain period of time or longer, and whose chromaticity is similar to that of blood is detected as a bleeding point. However, in images captured during laparoscopic surgery, the field of view and the target organs are constantly moving, and blood vessels, blood pools, and the like are also visible, so it is difficult to determine bleeding points from the captured images with high accuracy.

The present invention has been made in view of the above circumstances, and an object thereof is to provide a program and the like capable of detecting the occurrence of bleeding with high accuracy from images captured during surgery.

A program according to one aspect of the present disclosure causes a computer to execute processing of: acquiring a plurality of captured images having a plurality of color components, obtained by photographing a treatment site in time series; acquiring, for each of the acquired captured images, a bleeding area image indicating a bleeding area in the captured image on the basis of a difference between the plurality of color components; calculating a difference in bleeding area between the bleeding area images acquired for chronologically successive captured images; and determining the presence or absence of bleeding from the treatment site on the basis of the difference in bleeding area.
According to one aspect of the present invention, the occurrence of bleeding can be detected with high accuracy from images captured during surgery.
The drawings show the following:
A block diagram showing a configuration example of an endoscopic surgery system (FIG. 1).
A flowchart illustrating an example of the bleeding detection processing procedure (FIG. 2).
Explanatory diagrams of the bleeding detection processing.
A flowchart illustrating an example of the alert output processing procedure.
A flowchart illustrating an example of the bleeding detection processing procedure of Modification 1, and explanatory diagrams of the bleeding detection processing of Modification 1.
A flowchart illustrating an example of the bleeding detection processing procedure of Modification 2, and explanatory diagrams of the bleeding detection processing of Modification 2.
A flowchart illustrating an example of the bleeding detection processing procedure of Modification 3, and an explanatory diagram of the bleeding detection processing of Modification 3.
A flowchart illustrating an example of the bleeding detection processing procedure of Modification 4, and an explanatory diagram of the bleeding detection processing of Modification 4.
A flowchart illustrating an example of the bleeding detection processing procedure of Modification 5, and an explanatory diagram of the bleeding detection processing of Modification 5.
A flowchart illustrating an example of the bleeding detection processing procedure of Modification 6, and explanatory diagrams of the bleeding detection processing of Modification 6.
A flowchart illustrating an example of the bleeding detection processing procedure of Modification 7, and an explanatory diagram of the bleeding detection processing of Modification 7.
A flowchart illustrating an example of the bleeding detection processing procedure of Modification 8, and explanatory diagrams of the bleeding detection processing of Modification 8.
An explanatory diagram showing a configuration example of the learning model.
A flowchart illustrating an example of the learning model generation processing procedure.
A flowchart illustrating an example of the bleeding detection processing procedure of Embodiment 2, and explanatory diagrams of the bleeding detection processing of Embodiment 2.
A flowchart illustrating an example of the bleeding detection processing procedure of Embodiment 3, and explanatory diagrams of the bleeding detection processing of Embodiment 3.
A flowchart illustrating an example of the bleeding detection processing procedure of Embodiment 4 (FIG. 29).
A flowchart illustrating an example of the bleeding detection processing procedure of Embodiment 5 (FIG. 30).
Hereinafter, the program, information processing method, and information processing device of the present disclosure will be described in detail with reference to the drawings showing embodiments thereof. In each of the following embodiments, laparoscopic surgery in which the abdomen, such as the gastrointestinal tract, is treated will be described as an example of endoscopic surgery. Note that the surgery in each embodiment may be thoracoscopic surgery or the like in which the chest, such as the heart or lungs, is treated, or may be open surgery. The configuration of the present disclosure can be used for any surgery as long as the treatment target can be photographed.
(Embodiment 1)

An information processing device that detects the occurrence of bleeding from a treatment site based on captured images of the treatment site taken during endoscopic surgery will be described. FIG. 1 is a block diagram showing a configuration example of an endoscopic surgery system. The endoscopic surgery system shown in FIG. 1 includes an endoscope 20 (laparoscope), an endoscope control device 21 that drives the endoscope 20, an information processing device 10 that controls the operation of the endoscope control device 21, a display device 40, and the like. The endoscopic surgery system also includes a treatment tool 30, a treatment tool control device 31 that drives the treatment tool 30, and an operation unit 32 that receives operation inputs for the treatment tool 30. In endoscopic surgery, the operator inserts the endoscope 20, the treatment tool 30, forceps (not shown), and the like into the patient's body through holes made in the patient's body, and performs a treatment such as resection of the treatment site with the treatment tool 30 while checking images of the inside of the body captured by the endoscope 20 on the display device 40.
The endoscope 20 is a rigid scope having a rigid lens barrel, and has an imaging unit and a light source unit at the distal end of the lens barrel. The light source unit includes a light source such as an LED (Light Emitting Diode) and an illumination lens, and illumination light (visible light) emitted from the light source is collected by the illumination lens and irradiated onto the subject. The light source of the light source unit is not limited to a light source provided at the distal end of the endoscope 20 and may be a light source included in the endoscope control device 21. In this case, the endoscope 20 has a light guide that guides the illumination light emitted from the light source of the endoscope control device 21 to the distal end of the endoscope 20, and the light guided by the light guide is irradiated onto the subject. The imaging unit has an imaging lens and an image sensor; light reflected from the subject illuminated by the light source unit is received by the image sensor via the imaging lens and converted into an image signal by photoelectric conversion in the image sensor. With such a configuration, the endoscope 20 acquires an image signal (captured image) of the treatment site inside the body by imaging the inside of the body with the imaging unit while the light source unit illuminates the inside of the body. Note that the imaging unit is configured to acquire, for example, 30 frames (30 images) of image signals (video signals) per second.
The endoscope 20 is connected to the endoscope control device 21; the endoscope control device 21 transmits control signals for controlling the operation of the endoscope 20 to the endoscope 20, and the endoscope 20 transmits the image signals captured by the imaging unit to the endoscope control device 21. The control signals that the endoscope control device 21 transmits to the endoscope 20 include information on imaging conditions such as the zoom magnification (variable magnification), the focal length, and the imaging direction. The endoscope control device 21 performs various kinds of signal processing on the image signals acquired from the endoscope 20 to generate image data. One frame of image data generated here includes an R (red) component, a G (green) component, and a B (blue) component, and the endoscope control device 21 generates R component image data, G component image data, and B component image data from the image signal. Therefore, through one imaging operation by the endoscope 20 (imaging unit), the endoscope control device 21 acquires one piece of image data (captured image) including the R component image data, the G component image data, and the B component image data. The image data generated by the endoscope control device 21 is output to the information processing device 10, and the information processing device 10 outputs it to the display device 40. Thereby, the captured images (video) taken by the imaging unit of the endoscope 20 are displayed on the display device 40, and the range illuminated by the light source unit of the endoscope 20 can be observed optically.
The signal processing for generating image data from the image signals acquired by the imaging unit of the endoscope 20 may be executed by the information processing device 10 instead of by the endoscope control device 21. In this case, the endoscope control device 21 outputs the image signals acquired from the imaging unit of the endoscope 20 to the information processing device 10 as they are, and the information processing device 10 executes predetermined signal processing on the acquired image signals to generate image data and sequentially outputs the generated image data to the display device 40 for display. Note that the light emitted by the light source unit is not limited to visible light and may be near-infrared light, special light controlled to a predetermined wavelength, or the like, and the light source unit may have a plurality of light sources each emitting a different type of light.
The treatment tool 30 is an energy device for incising or resecting living tissue, sealing blood vessels, stopping bleeding, and the like using energy such as high-frequency current, ultrasound, or microwaves. The treatment tool 30 includes, for example, a high-frequency knife, high-frequency scissor forceps, an electric scalpel, and an ultrasonic scalpel. The treatment tool 30 is connected to the treatment tool control device 31, and the treatment tool control device 31 controls the operation of the treatment tool 30. For example, when the treatment tool 30 is a device using high frequency, the treatment tool control device 31 is a high-frequency output device that outputs a high-frequency current to the treatment tool 30. The operation unit 32 is connected to the treatment tool control device 31; the operation unit 32 is an input device that receives operation inputs from medical personnel including the operator of the endoscopic surgery, and sends control signals corresponding to the input operations to the treatment tool control device 31. In addition to the treatment tool 30, which is an energy device, the operator manually operates scissor forceps, dissection forceps, and the like to incise, resect, or dissect the tissue of the lesion.
The information processing device 10 is a device capable of performing various kinds of information processing and transmitting and receiving information, and is, for example, a personal computer, a server computer, or a workstation. The information processing device 10 may be configured as a multicomputer composed of a plurality of computers that perform distributed processing, or may be realized by a virtual machine virtually constructed by software within one device. When the information processing device 10 is configured as a server computer, it may be a local server installed in a medical institution or the like, or may be a cloud server communicatively connected via a network such as the Internet. In this embodiment, the information processing device 10 will be described as a single computer.
The information processing device 10 includes a control unit 11, a storage unit 12, an operation unit 13, a communication unit 14, a reading unit 15, and the like, and these units are interconnected via a bus. The control unit 11 includes one or more processors such as a CPU (Central Processing Unit), an MPU (Micro-Processing Unit), and a GPU (Graphics Processing Unit). The control unit 11 performs the various kinds of information processing and control processing to be performed by the information processing device 10 by appropriately executing the program 12P stored in the storage unit 12.
The storage unit 12 includes a RAM (Random Access Memory), a flash memory, a hard disk, an SSD (Solid State Drive), and the like. The storage unit 12 stores in advance the program 12P (program product) to be executed by the control unit 11 and various data necessary for executing the program 12P. The storage unit 12 also temporarily stores data and the like generated when the control unit 11 executes the program 12P. The program 12P may be written into the storage unit 12 at the manufacturing stage of the information processing device 10.
The operation unit 13 is an input device that receives operation inputs from medical personnel including, for example, the operator of the endoscopic surgery, and sends control signals corresponding to the input operations to the control unit 11. The operation unit 13 may be a keyboard, a mouse, a trackball, a foot switch, a microphone, or the like, or may be a sensor that accepts gesture input, gaze input, or the like. The operation unit 13 may also be a touch panel configured integrally with a display unit (not shown) provided in the information processing device 10. The operation unit 13 accepts operations on the endoscope 20 in addition to operations on the information processing device 10. Note that the operator's operations on the endoscope 20 are not limited to a configuration in which the operation unit 13 of the information processing device 10 receives them and outputs them to the endoscope control device 21; the endoscope control device 21 may be configured to accept the operator's operations on the endoscope 20.
The communication unit 14 has a communication module for connecting to the endoscope control device 21, transmits control signals from the control unit 11 to the endoscope control device 21, and receives image data (captured images) from the endoscope control device 21. The image data received from the endoscope control device 21 is sequentially stored in, for example, the storage unit 12. The communication unit 14 also has a communication module for connecting to the display device 40, and transmits image data to be displayed on the display device 40 to the display device 40 under the control of the control unit 11. The communication unit 14 may communicate with each of the endoscope control device 21 and the display device 40 by wired communication via a cable or by wireless communication. The communication unit 14 may further have a communication module for connecting to a network such as the Internet or a LAN (Local Area Network) by wired or wireless communication, and may be configured to transmit and receive information to and from other devices via the network.
The reading unit 15 reads information stored in a portable storage medium 10a such as a CD (Compact Disc)-ROM, a DVD (Digital Versatile Disc)-ROM, a USB (Universal Serial Bus) memory, or an SD (Secure Digital) card. The program 12P (program product) and various data stored in the storage unit 12 may be read by the control unit 11 from the portable storage medium 10a via the reading unit 15 and stored in the storage unit 12. The program 12P and various data stored in the storage unit 12 may also be distributed by a remote server device, downloaded by the control unit 11 via the communication unit 14, and stored in the storage unit 12.
The display device 40 is a liquid crystal display, an organic EL display, or the like, and displays various kinds of information appropriately transmitted from the information processing device 10. The information processing device 10 configured as described above performs a process of transmitting control signals for controlling the operation of the endoscope 20 to the endoscope control device 21 in accordance with operations by the operator via the operation unit 13. The information processing device 10 also performs a process of acquiring captured images (image data) taken by the endoscope 20 from the endoscope control device 21 and displaying them on the display device 40. Furthermore, the information processing device 10 performs a process of detecting the occurrence of bleeding from the treatment site based on the image data sequentially acquired from the endoscope control device 21 and outputting an alert as necessary.
The endoscopic surgery system configured as described above is installed and used in, for example, an operating room of a medical institution. In addition to the above-described configuration, the endoscopic surgery system may include a pneumoperitoneum device that sends air into the patient's body (abdomen) to inflate the patient's body cavity, a recording device that records the images captured by the endoscope 20, and the like, and the pneumoperitoneum device, the recording device, and the like may be connected to the information processing device 10.
In the endoscopic surgery system configured as described above, a process of detecting bleeding based on the images captured by the endoscope 20 will be described below. FIG. 2 is a flowchart showing an example of the bleeding detection processing procedure, and FIGS. 3A to 4B are explanatory diagrams of the bleeding detection processing. The following processing is executed by the control unit 11 of the information processing device 10 in accordance with the program 12P stored in the storage unit 12.
In the endoscopic surgery system of this embodiment, the endoscope 20 performs imaging with the imaging unit under the control of the endoscope control device 21, acquires image signals of, for example, 30 frames per second, and outputs them to the endoscope control device 21. The endoscope control device 21 performs signal processing on the image signals sequentially acquired from the endoscope 20 and outputs the processed image data (hereinafter referred to as captured images) to the information processing device 10. The control unit 11 (captured image acquisition unit) of the information processing device 10 acquires the captured images sequentially output from the endoscope control device 21 via the communication unit 14 (S11). The control unit 11 acquires a captured image as shown in FIG. 3A, for example. The captured image shown in FIG. 3A is a captured image of the treatment site including the treatment tool 30 and forceps 30a, and includes an R component captured image, a G component captured image, and a B component captured image.
Based on the acquired captured image, the control unit 11 calculates a difference image by subtracting each pixel value of the G component captured image from the corresponding pixel value of the R component captured image (S12). The difference image between the R component captured image and the G component captured image can indicate the degree of redness in the captured image, and in this embodiment, the bleeding area in the captured image is identified based on this difference image. Note that the control unit 11 may instead calculate a difference image by subtracting each pixel value of the B component captured image from the corresponding pixel value of the R component captured image and use it as the difference image indicating the degree of redness in the captured image. The control unit 11 calculates a difference image as shown in FIG. 3B, for example. The difference image shown in FIG. 3B is an example of a difference image between the R component captured image and the G component captured image; for example, the area indicated by the broken line in FIGS. 3A and 3B is a bleeding area with strong redness. In addition to the above-described difference image, the control unit 11 may generate an image representing a feature amount such as an image (R/G) obtained by dividing each pixel value of the R component by the corresponding pixel value of the G component, or the logarithm of the division result (log(R/G)). Furthermore, instead of detecting the bleeding area based on a single image showing one of the feature amounts described above, the control unit 11 may detect the bleeding area by combining a plurality of such images.
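As an illustration of the color-difference computation in step S12, the following is a minimal numpy sketch; it assumes an 8-bit, channel-last RGB frame and is not taken from the disclosed implementation.

```python
import numpy as np

def redness_map(frame_rgb: np.ndarray) -> np.ndarray:
    """Per-pixel redness used as the bleeding cue: R minus G (R minus B works
    similarly, per the text). Computed in a signed type so negative differences
    do not wrap around, then clipped back to the 0..255 range."""
    r = frame_rgb[..., 0].astype(np.int16)
    g = frame_rgb[..., 1].astype(np.int16)
    return np.clip(r - g, 0, 255).astype(np.uint8)
```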
The control unit 11 (bleeding area acquisition unit) generates a bleeding area image indicating the bleeding area in the captured image based on the color component difference image (S13). Here, the control unit 11 compares each pixel value of the difference image with a predetermined threshold, assigns white (for example, 1) to pixels whose pixel value is equal to or greater than the predetermined threshold and black (for example, 0) to pixels whose pixel value is less than the predetermined threshold, and thereby generates the bleeding area image. The control unit 11 generates a bleeding area image as shown in FIG. 3C, for example. In the bleeding area image shown in FIG. 3C, the white areas are identified as bleeding areas. Note that the threshold for determining whether or not a pixel belongs to a bleeding area may be set dynamically based on the pixel values (the brightness of each pixel) in the captured image. This makes it possible to set a threshold that takes the imaging conditions and the like into account, so that the bleeding area in the captured image can be identified more appropriately. The control unit 11 performs the above-described processes of steps S11 to S13 on the captured images of each frame sequentially transmitted from the endoscope control device 21, and generates a bleeding area image from each captured image.
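A corresponding binarization step (S13) could look like the sketch below; the adaptive-threshold formula shown here is only an assumption standing in for the dynamically set threshold mentioned above.

```python
import numpy as np

def bleeding_mask(redness: np.ndarray, threshold: float | None = None) -> np.ndarray:
    """Binarize the redness map into a bleeding-area image (True = bleeding).
    When no fixed threshold is given, derive one from the frame itself so that
    it adapts to brightness/imaging conditions (the concrete formula is an
    assumption; the text only says the threshold may be set dynamically)."""
    if threshold is None:
        threshold = float(redness.mean()) + 2.0 * float(redness.std())
    return redness >= threshold
```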
The control unit 11 (calculation unit) calculates the difference (increase amount) in the bleeding area between chronologically successive frames, based on the bleeding areas in the captured images of the frames sequentially transmitted from the endoscope control device 21 (S14). Specifically, the control unit 11 identifies, among the bleeding areas in the current frame (the frame to be processed), the areas that were not bleeding areas in the immediately preceding frame (hereinafter referred to as the previous frame), generates an image showing the identified areas (the increased bleeding area), and counts the number of pixels included in the increased bleeding area. The control unit 11 then stores the counted number of pixels in the storage unit 12 as the increase amount (difference) in the bleeding area, in association with the imaging time of the current frame. By sequentially calculating the difference (increase amount) in the bleeding area between chronologically successive frames, the control unit 11 obtains time-series data such as that shown by the solid line in FIG. 4A. The solid line in FIG. 4A shows the time-series change in the difference in the bleeding area between frames; in the graph shown in FIG. 4A, the horizontal axis indicates the imaging time, and the vertical axis indicates the difference in the bleeding area between the captured image at each imaging time and the immediately preceding captured image. Note that the imaging time may be the elapsed time from the start of the imaging process by the endoscope 20, or may be, for example, the imaging date and time indicated by a clock included in the endoscope control device 21.
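The per-frame increase used in step S14 amounts to counting pixels that are newly marked as bleeding; a minimal sketch, continuing the hypothetical helpers above:

```python
import numpy as np

def bleeding_increase(prev_mask: np.ndarray, curr_mask: np.ndarray) -> int:
    """Number of pixels that are bleeding in the current frame but were not
    bleeding in the previous frame (the inter-frame 'difference' of step S14)."""
    return int(np.count_nonzero(curr_mask & ~prev_mask))
```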
Next, the control unit 11 calculates a moving average of the inter-frame differences in the bleeding area (S15). For example, the control unit 11 calculates a simple moving average over a predetermined time of about 3 seconds. Since the captured images are taken at 30 frames per second, the control unit 11 calculates the moving average of the differences in the bleeding area between successive captured images over, for example, the 90 frames captured in 3 seconds. Specifically, for a given time, the control unit 11 calculates the average value of the inter-frame differences in the bleeding area from the predetermined time before that time up to that time, and stores the calculated average value in the storage unit 12 in association with that time. The control unit 11 calculates the average value of the differences in the bleeding area over the predetermined time for each imaging time, and calculates the moving average of the differences in the bleeding area based on all the frames captured by the endoscope 20.
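Step S15 is a plain moving average over roughly 90 frames; a sketch, with the window length as a parameter:

```python
import numpy as np

def moving_average(increases, window: int = 90) -> np.ndarray:
    """Simple moving average of the per-frame bleeding-area increases over a
    window of about 3 s (90 frames at 30 fps), corresponding to step S15."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(increases, dtype=float), kernel, mode="valid")
```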
Next, for each imaging time, the control unit 11 calculates a threshold to be used for determining the presence or absence of bleeding, based on the moving averages of the inter-frame differences in the bleeding area calculated in step S15 up to that imaging time (S16). Here, the control unit 11 uses the moving averages of the differences in the bleeding area calculated up to each imaging time as measured values, calculates the value of +4σ (+4 standard deviations) with respect to the average of the measured values, and sets it as the determination threshold at that imaging time. The broken lines in FIG. 4A indicate the time-series change of the −4σ value and the +4σ value with respect to the moving average of the differences in the bleeding area, and the control unit 11 determines that bleeding has occurred when the inter-frame difference in the bleeding area becomes equal to or greater than the range (predetermined range) of −4σ to +4σ with respect to the moving average of the differences in the bleeding area. Therefore, the control unit 11 sets each value of the upper (+4σ) time-series data indicated by the broken line in FIG. 4A as the threshold used for determining the presence or absence of bleeding at each imaging time.
The control unit 11 (determination unit) determines the presence or absence of bleeding at the current imaging time based on the inter-frame difference in the bleeding area calculated in step S14 and the determination threshold calculated in step S16 (S17). Specifically, for each imaging time, the control unit 11 determines whether the difference in the bleeding area calculated in step S14 is equal to or greater than the set determination threshold; if it is equal to or greater than the determination threshold, the control unit 11 determines that bleeding has occurred, and if it is less than the determination threshold, the control unit 11 determines that no bleeding has occurred. That is, in this embodiment, the presence or absence of bleeding is determined according to whether the increase amount (difference) in the bleeding area is equal to or greater than the determination threshold, and the determination threshold is set based on the moving average of the increase amount in the bleeding area.
Here, the reason why the value of the moving average of the inter-frame differences in the bleeding area plus 4σ is used as the threshold for determining the occurrence of bleeding will be explained. In FIG. 4B, the solid line shows the time-series change in the difference in the bleeding area between chronologically successive frames, and the broken lines show the time-series change of the −2σ value and the +2σ value with respect to the moving average of the differences in the bleeding area. For example, if each value of the upper (+2σ) time-series data indicated by the broken line in FIG. 4B is used as the threshold for determining the presence or absence of bleeding, the difference in the bleeding area is equal to or greater than the determination threshold at times T1 to T5 in FIG. 4B, and it is determined that bleeding has occurred at each of these times. That is, in the state shown in FIG. 4B, after the first bleeding is detected at time T1, the small amounts of bleeding at times T2 to T5 are also frequently determined to be occurrences of bleeding. When occurrences of bleeding are determined this frequently, the occurrence of bleeding that should be notified to medical personnel cannot be detected accurately, which increases false detections. In this embodiment, since the presence or absence of bleeding is detected based on the increase amount of the bleeding area, the rise of the waveform indicated by the solid lines in FIGS. 4A and 4B is detected. Therefore, by using the value of the moving average of the differences in the bleeding area plus 4σ, rather than plus 2σ, as the threshold for determining the presence or absence of bleeding, only the time T1 at which the bleeding area increases sharply is determined to be an occurrence of bleeding even in the state shown in FIG. 4B, and the occurrence of bleeding can be detected with high accuracy.
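Putting steps S14 to S17 together, a simplified, hypothetical decision function might look like this; the +4σ factor follows the text, while the helper names and the exact window handling are assumptions.

```python
import numpy as np

def bleeding_started(increase: float, recent_increases, k: float = 4.0) -> bool:
    """Simplified form of S16/S17: report a bleeding onset only when the latest
    per-frame increase reaches mean + k * sigma of the recent increases
    (the text uses +4 sigma; +2 sigma fires too often on small residual bleeds)."""
    if len(recent_increases) < 2:
        return False
    mean = float(np.mean(recent_increases))
    sigma = float(np.std(recent_increases))
    return increase >= mean + k * sigma
```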
If it is determined that bleeding has occurred (S17: YES), the control unit 11 specifies the start of bleeding (S18). If it is determined in step S17 that no bleeding has occurred (S17: NO), the control unit 11 skips the process of step S18 and proceeds to the process of step S19. The control unit 11 determines whether or not to end the above-described processing (S19). For example, the control unit 11 receives an instruction to end the above-described processing from the operator via the operation unit 13, and determines to end the processing when the end instruction is received. If the control unit 11 determines not to end the above-described processing (S19: NO), the process returns to step S11. In this case, the control unit 11 sequentially acquires captured images from the endoscope control device 21 (S11), and executes the processes of steps S12 to S18 based on the acquired captured images. If the control unit 11 determines to end the above-described processing (S19: YES), the series of processes ends.
Next, a process of outputting an alert when the start of bleeding is identified by the above-described bleeding detection processing will be described. FIG. 5 is a flowchart showing an example of the alert output processing procedure. The control unit 11 of the information processing device 10 executes the following processing while executing the above-described bleeding detection processing. The control unit 11 determines whether the start of bleeding has been identified in the above-described bleeding detection processing (S21), and if it determines that the start of bleeding has been identified (S21: YES), it starts outputting an alert (S22). For example, the control unit 11 displays a message on the display device 40 notifying that bleeding has occurred, thereby notifying medical personnel such as the operating surgeon and nurses. When the information processing device 10 or the display device 40 has a lamp, a buzzer, a speaker, or the like, the medical personnel may be notified by lighting or blinking the lamp, sounding the buzzer, or outputting a voice message or warning sound from the speaker. The control unit 11 also stores, in the storage unit 12, the imaging time at which it determined the start of bleeding (the occurrence of bleeding) as the bleeding occurrence time (S23).
When the control unit 11 has identified the start of bleeding and started outputting an alert, it determines whether an alert end condition is satisfied (S24). The alert end condition is, for example, that the bleeding has stopped (ended), and the control unit 11 may determine that the bleeding has stopped when the bleeding area does not increase between chronologically successive frames. Alternatively, the control unit 11 may perform processes similar to steps S161 to S165 in the bleeding detection processing of FIG. 30 described later in Embodiment 5, and may determine that the alert end condition is satisfied when it detects that the bleeding has ended as a result of a hemostasis operation. If the control unit 11 determines that the alert end condition is not satisfied (S24: NO), it continues outputting the alert started in step S22 (S25). On the other hand, if the control unit 11 determines that the alert end condition is satisfied (S24: YES), it ends the output of the alert started in step S22 (S26). The control unit 11 also stores, in the storage unit 12, the imaging time at which it determined that the alert end condition was satisfied as the bleeding end time (S27). Thereafter, the control unit 11 returns to the process of step S21. After the process of step S25, the control unit 11 determines whether or not to end the above-described processing (S28). For example, the control unit 11 receives an instruction to end the above-described processing from the operator via the operation unit 13, and determines to end the processing when the end instruction is received. If the control unit 11 determines not to end the above-described processing (S28: NO), the process returns to step S24. If the control unit 11 determines to end the above-described processing (S28: YES), the series of processes ends. Thus, when the occurrence of bleeding is identified by the bleeding detection processing, an alert can be output to notify medical personnel of the occurrence of bleeding. Furthermore, after the output of the alert has started, the alert output is ended when the bleeding has ended and the alert end condition is satisfied, so that medical personnel can be notified that the bleeding has subsided.
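The alert procedure of FIG. 5 can be summarized as a small state machine; the sketch below is an illustration only, and the event representation (per-time flags for bleeding start and for the end condition) is an assumption.

```python
def alert_loop(events, notify_start, notify_end):
    """Start an alert and record the time when a bleeding onset is identified
    (S21-S23), and end the alert and record the time once the end condition
    (bleeding no longer increasing, or hemostasis detected) is met (S24-S27)."""
    alerting = False
    log = []
    for t, started, end_condition_met in events:
        if not alerting and started:
            alerting = True
            notify_start()
            log.append(("bleeding_start", t))
        elif alerting and end_condition_met:
            alerting = False
            notify_end()
            log.append(("bleeding_end", t))
    return log
```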
In this embodiment, the bleeding area can be detected with high accuracy from the images captured by the endoscope 20 based on the difference image between the R component captured image and the G component or B component captured image. In addition, since the presence or absence of bleeding is determined based on the increase amount of the accurately detected bleeding area between chronologically successive captured images, the occurrence of bleeding can be detected early and objectively and notified to medical personnel. Medical personnel can therefore treat the bleeding at an early stage and can also decide at an early stage whether preparations for blood transfusion are necessary. Although this embodiment has been described using, as an example, a configuration that detects the occurrence of bleeding based on images captured by the endoscope 20 during surgery, the configuration is not limited to this, and the configuration of the present disclosure can also be used, for example, in verification processing after surgery.
Furthermore, in the present embodiment, the information processing device 10 may, after calculating the difference in the bleeding area between frames and the bleeding determination threshold based on the moving average of the bleeding area differences, perform processing to display a graph such as that shown in FIG. 4A on the display device 40 (display unit). For example, after the process of step S18 in FIG. 2, the graph shown in FIG. 4A may be generated by plotting, in association with each imaging time, the difference in the bleeding area between frames and the bleeding determination threshold at that imaging time, and the generated graph may be output to and displayed on the display device 40. At this time, the information processing device 10 may also indicate, in the displayed graph, the timing (imaging time) at which it determined that bleeding had occurred. For example, as indicated by the arrow in FIG. 4A, the imaging time at which the occurrence of bleeding was detected can be presented. Note that when the information processing device 10 determines the presence or absence of bleeding after surgery based on the images captured by the endoscope 20 during the surgery, the graph shown in FIG. 4A may be displayed on the display device 40 after the difference in the bleeding area between each pair of frames and the bleeding determination threshold based on the moving average of the bleeding area differences have been calculated for all captured images. In this case, the timing of bleeding occurrence and the like can be verified in the post-operative verification processing.
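A rough sketch of such a display is given below, assuming matplotlib and a list of per-frame area differences; the threshold is drawn here simply as a moving average for illustration, whereas the embodiment defines its bleeding determination threshold based on the moving average, and the onset marker is likewise only indicative.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_bleeding_trend(times, area_diffs, window=30):
    """Plot per-frame bleeding-area differences and a moving-average-based threshold."""
    diffs = np.asarray(area_diffs, dtype=float)
    kernel = np.ones(window) / window
    moving_avg = np.convolve(diffs, kernel, mode="same")  # illustrative threshold curve
    fig, ax = plt.subplots()
    ax.plot(times, diffs, label="bleeding area difference")
    ax.plot(times, moving_avg, "--", label="threshold (moving average)")
    onset = np.flatnonzero(diffs > moving_avg)  # frames judged as bleeding onset (illustrative)
    if onset.size:
        ax.axvline(times[onset[0]], color="red", linestyle=":", label="detected onset")
    ax.set_xlabel("imaging time [s]")
    ax.set_ylabel("area difference [pixels]")
    ax.legend()
    plt.show()
```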
In the present embodiment, the processing for determining the presence or absence of bleeding based on the images captured by the endoscope 20 is not limited to a configuration in which the information processing device 10 performs it locally. For example, a server that executes the processing for determining the presence or absence of bleeding may be provided. In this case, the information processing device 10 transmits the captured images sequentially acquired from the endoscope control device 21 to the server, and the server determines the presence or absence of bleeding based on the captured images and transmits the determination result to the information processing device 10. Even with such a configuration, processing similar to that of the present embodiment described above is possible, and similar effects can be obtained.
(Modification 1)
Modification 1 of Embodiment 1 described above will now be described. This modification describes an information processing device 10 that, when the imaging unit of the endoscope 20 captures images, performs correction processing to correct the misalignment between chronologically consecutive frames based on the amount of movement (amount of change) of the imaging position of the imaging unit. The imaging position and imaging direction of the imaging unit of the endoscope 20 change depending on, for example, changes in how the operator holds (grasps) the endoscope 20. In this modification, the presence or absence of bleeding can be determined more accurately by correcting the misalignment that occurs between frames in accordance with changes in the imaging position and imaging direction.
FIG. 6 is a flowchart illustrating an example of the bleeding detection processing procedure according to Modification 1, and FIGS. 7A and 7B are explanatory diagrams of the bleeding detection processing according to Modification 1. The process shown in FIG. 6 is the process shown in FIG. 2 with steps S31 and S32 added between steps S11 and S12. Description of the steps that are the same as in FIG. 2 is omitted.
In the information processing device 10 of this modification, when the control unit 11 acquires a captured image taken by the endoscope 20 from the endoscope control device 21 (S11), it acquires the amount of change in the imaging position of the endoscope 20 (S31). For example, when the endoscopic surgery system has a holding mechanism that holds the endoscope 20 and the endoscope control device 21 controls the holding state of the endoscope 20 by controlling the operation of the holding mechanism, the control unit 11 can acquire the amount of change in the imaging position from the endoscope control device 21. Alternatively, when the endoscope 20 is provided with a sensor that detects the position of the imaging unit (for example, the distal end of the endoscope 20), the control unit 11 may acquire the amount of change in the imaging position based on the detection result of the sensor. Note that the amount of change in the imaging position includes the amounts of translation and rotation on the plane facing the subject, and the zoom magnification (scale).
The control unit 11 performs correction processing based on the amount of change in the imaging position on the bleeding area image generated from the immediately preceding captured image (the bleeding area image of the previous frame) (S32). For example, the control unit 11 calculates a transformation matrix for translating each pixel by the amount of translation on the plane facing the subject, rotating each pixel by the amount of rotation, and scaling according to the zoom magnification. The control unit 11 then moves each pixel of the bleeding area image of the previous frame based on the calculated transformation matrix to generate a transformed bleeding area image. In this way, a coordinate transformation using the transformation matrix can be applied to the bleeding area image of the previous frame, and a bleeding area image in which each pixel of the previous frame's bleeding area image is aligned with the current frame (the frame to be processed) is obtained.
FIG. 7A shows a superimposed display of the bleeding area images generated from two chronologically consecutive frames: the area indicated by the broken line is the bleeding area in the previous frame, and the area indicated by the solid line is the bleeding area in the subsequent frame (current frame). By performing correction processing on the bleeding area image of the previous frame based on the amount of change in the imaging position, the control unit 11 can correct the misalignment between the previous frame and the subsequent frame caused by the change in the imaging position. Therefore, by applying such correction processing to the bleeding area image of the previous frame, a bleeding area image of the previous frame in which the misalignment caused by the change in the imaging position has been corrected can be obtained, as shown in FIG. 7B.
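For illustration, a minimal sketch of this correction is shown below, assuming the change amounts are available as a translation (dx, dy) in pixels, a rotation angle in degrees, and a scale factor, and using OpenCV's 2-D affine warp; the function name and the choice of rotating about the image center are assumptions.

```python
import cv2
import numpy as np

def correct_prev_mask(prev_mask, dx, dy, angle_deg, scale):
    """Warp the previous frame's bleeding-area mask so it aligns with the current frame."""
    h, w = prev_mask.shape[:2]
    # Rotation and scaling about the image center, as a 2x3 affine matrix
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, scale)
    # Add the translation component on the plane facing the subject
    m[0, 2] += dx
    m[1, 2] += dy
    return cv2.warpAffine(prev_mask, m, (w, h), flags=cv2.INTER_NEAREST)
```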
Thereafter, the control unit 11 performs the processing from step S12 onward. Note that in step S14, the control unit 11 calculates the difference between the bleeding area image of the previous frame corrected in step S32 and the bleeding area image of the current frame generated in step S13 (S14). Steps S31 and S32 in FIG. 6 may be executed at any point after the process of step S11 and before the process of step S14.
In this modification, the misalignment caused by the change in the imaging position can be corrected between chronologically consecutive frames, so the misalignment caused by the change in the imaging position is eliminated between frames and the difference in the bleeding area can be calculated accurately. Note that in this modification, the shift of each pixel position caused by the change in the imaging position is corrected on the bleeding area image generated from the captured image, but the configuration is not limited to this. For example, the correction processing based on the amount of change in the imaging position may be performed on the captured image itself. In this case, in step S32, the control unit 11 performs the correction processing on the captured image of the previous frame, generates a bleeding area image from the corrected captured image, and performs the process of step S14 using the generated image as the bleeding area image of the previous frame.
This modification also provides the same effects as Embodiment 1 described above. Furthermore, in this modification, correcting the shift of the entire imaging field caused by movement of the imaging position eliminates the inter-frame misalignment that occurs during imaging, so the difference in the bleeding area between frames can be calculated accurately. Therefore, the presence or absence of bleeding can be detected with high accuracy based on the accurately calculated difference in the bleeding area. The modifications described as appropriate in Embodiment 1 above can also be applied to this modification.
(Modification 2)
This modification describes an information processing device 10 that performs correction processing to correct the misalignment between chronologically consecutive frames based on the images captured by the endoscope 20. Whereas Modification 1 described above mechanically acquires the amount of change in the imaging position of the endoscope 20 (imaging unit), this modification calculates the amount of misalignment between frames by image processing of the captured images and corrects the calculated misalignment.
FIG. 8 is a flowchart showing an example of the bleeding detection processing procedure according to Modification 2, and FIGS. 9A and 9B are explanatory diagrams of the bleeding detection processing according to Modification 2. The process shown in FIG. 8 is the process shown in FIG. 6 with steps S41 to S43 in place of steps S31 and S32. Description of the steps that are the same as in FIG. 6 is omitted.
In the information processing device 10 of this modification, when the control unit 11 acquires a captured image taken by the endoscope 20 from the endoscope control device 21 (S11), it extracts, from the acquired captured image, feature points corresponding to the feature points in the previous frame (S41). For example, based on the captured images sequentially acquired from the endoscope control device 21, the control unit 11 calculates, by image processing such as optical flow, motion vectors (amount and direction of motion) indicating the movement of the subject from the chronologically earlier frame to the later frame. The control unit 11 can thereby extract, from the current frame, the feature points corresponding to the feature points extracted in the previous frame. FIG. 9A is an example in which the feature points extracted from the captured image are indicated by black circles, and as shown in FIG. 9B, the control unit 11 obtains the feature points in the subsequent frame (frame n) based on the feature points in the previous frame (frame n-1). Note that the feature points whose movement is tracked may be identified using any method. In this way, the amount of misalignment between chronologically consecutive frames (the amount of movement of each feature point) can be obtained by image processing of the captured images.
Based on the position of each feature point in the previous frame and the position of each corresponding feature point in the current frame extracted in step S41, the control unit 11 calculates a transformation matrix for bringing each feature point of the previous frame closer to the position of the corresponding feature point of the current frame (S42). The control unit 11 calculates a transformation matrix that minimizes the difference between the positions of the feature points of the previous frame after they are moved by the transformation based on that matrix and the positions of the corresponding feature points of the current frame. Then, based on the calculated transformation matrix, the control unit 11 performs correction processing on the bleeding area image of the previous frame (S43), correcting the amount of misalignment with respect to the subsequent frame. In this way, in this modification as well, as shown in FIG. 7B, the misalignment between frames is corrected, and a bleeding area image in which each pixel of the previous frame's bleeding area image is aligned with the current frame is obtained. Thereafter, the control unit 11 performs the processing from step S12 onward. Note that in FIG. 8 as well, steps S41 to S43 may be executed at any point after the process of step S11 and before the process of step S14.
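A minimal sketch of this image-based alignment is shown below, assuming OpenCV: sparse feature tracking with Lucas-Kanade optical flow followed by a similarity-transform fit; the parameter values and the specific estimator (estimateAffinePartial2D) are illustrative assumptions rather than the embodiment's prescribed method.

```python
import cv2
import numpy as np

def align_prev_mask(prev_gray, curr_gray, prev_mask):
    """Estimate frame-to-frame motion from tracked feature points and warp the previous mask."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=10)
    if pts_prev is None:
        return prev_mask
    # Track the feature points into the current frame (sparse optical flow)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]
    if len(good_prev) < 3:
        return prev_mask
    # Transformation matrix minimizing the residual between moved and observed feature points
    m, _ = cv2.estimateAffinePartial2D(good_prev, good_curr)
    if m is None:
        return prev_mask
    h, w = prev_mask.shape[:2]
    return cv2.warpAffine(prev_mask, m, (w, h), flags=cv2.INTER_NEAREST)
```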
This modification also provides the same effects as Embodiment 1 and Modification 1 described above. Furthermore, in this modification, the amount of misalignment between frames is calculated by image processing based on the captured images, so it is not necessary to mechanically acquire the amount of change in the imaging position of the endoscope 20. Therefore, even when the bleeding detection processing is performed after surgery based only on the images captured by the endoscope 20, the misalignment between frames can be eliminated, and the difference in the bleeding area between chronologically consecutive frames can be calculated accurately. The modifications described as appropriate in Embodiment 1 and Modification 1 above can also be applied to this modification.
(Modification 3)
This modification describes an information processing device 10 that invalidates the determination result of bleeding occurrence based on the images captured before and after a change in the zoom magnification (enlargement ratio) of the endoscope 20 when the amount of change in the zoom magnification is equal to or greater than a predetermined factor. The imaging unit of the endoscope 20 is configured so that the zoom magnification can be changed during imaging, and when zooming in (enlarging), a bleeding area in the image captured before zooming in appears greatly enlarged in the image captured after zooming in. That is, the size (number of pixels) of the bleeding area in the captured image increases before and after zooming in, and if the presence or absence of bleeding is detected based on this increase, an erroneous detection occurs. Therefore, in this modification, when the amount of change in the zoom magnification (enlargement ratio) is equal to or greater than a predetermined factor, the determination result of bleeding occurrence based on the images captured before and after the change in zoom magnification is invalidated, thereby suppressing erroneous detection of bleeding occurrence.
FIG. 10 is a flowchart illustrating an example of the bleeding detection processing procedure according to Modification 3, and FIG. 11 is an explanatory diagram of the bleeding detection processing according to Modification 3. The process shown in FIG. 10 is the process shown in FIG. 2 with steps S51 to S53 added between YES in step S17 and step S18. Description of the steps that are the same as in FIG. 2 is omitted.
In the information processing device 10 of this modification, the control unit 11 executes the processing of steps S11 to S17 in FIG. 2, and when it determines that bleeding has occurred (S17: YES), it acquires the zoom magnification at the time the current frame was captured (S51). For example, the control unit 11 acquires, from the endoscope control device 21, the zoom magnification with which the endoscope control device 21 has controlled the endoscope 20. By acquiring the zoom magnification of the endoscope 20 each time the endoscope 20 captures an image, the control unit 11 acquires time-series data of the zoom magnification, for example as shown by the solid line in FIG. 11. The solid line in FIG. 11 shows the time-series change in the zoom magnification of the endoscope 20; in the graph shown in FIG. 11, the horizontal axis indicates the imaging time and the vertical axis indicates the zoom magnification at each imaging time. Note that instead of acquiring the zoom magnification set for the endoscope 20, the control unit 11 may calculate the amount of change in the zoom magnification (enlargement ratio) from the previous frame by image processing of the chronologically consecutive frames.
Next, based on the zoom magnification at each imaging time, the control unit 11 calculates a threshold (magnification threshold) for determining whether a zoom-in operation (change in zoom magnification) for which the bleeding occurrence determination should be invalidated has been performed (S52). For example, the control unit 11 calculates the moving average of the zoom magnification shown by the solid line in FIG. 11 and sets the moving average of the zoom magnification at each imaging time as the magnification threshold at that imaging time. The moving average of the zoom magnification is, for example, a simple moving average over a predetermined period of about 3 seconds. The broken line in FIG. 11 shows the time-series change in the moving average of the zoom magnification at each imaging time, and the control unit 11 sets each value of the time-series data shown by the broken line in FIG. 11 as the magnification threshold used to determine, for each imaging time, whether a zoom-in operation has been performed.
The control unit 11 determines whether the zoom magnification acquired in step S51 is equal to or greater than the magnification threshold set in step S52 (S53). When the control unit 11 determines that the zoom magnification is equal to or greater than the magnification threshold (S53: YES), it skips the process of step S18 and proceeds to the process of step S19. Thus, when the zoom magnification is equal to or greater than the magnification threshold, that is, when it is determined that a zoom-in operation for which the bleeding occurrence determination should be invalidated has been performed, the control unit 11 invalidates the determination of bleeding occurrence made in the most recent step S17 and identifies the state as no bleeding. On the other hand, when the control unit 11 determines that the zoom magnification is less than the magnification threshold (S53: NO), it determines that no zoom-in operation for which the bleeding occurrence determination should be invalidated has been performed, and executes the process of step S18. Thus, when the zoom magnification is less than the magnification threshold, that is, when it is determined that no zoom-in operation for which the bleeding occurrence determination should be invalidated has been performed, the control unit 11 identifies the start of bleeding based on the determination of bleeding occurrence made in the most recent step S17. In the time-series change in zoom magnification shown in FIG. 11, the section indicated by the double-headed arrow indicates the time during which it was determined that a zoom-in operation had been performed.
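A minimal sketch of this magnification-threshold check is given below, assuming the zoom magnification history is available as a list; the window length (about 3 seconds, here assumed at roughly 30 fps) and the choice of averaging only the preceding frames are illustrative assumptions.

```python
import numpy as np

def zoom_in_detected(zoom_history, window=90):
    """Return True when the latest zoom value reaches or exceeds the moving-average threshold (S52/S53)."""
    zooms = np.asarray(zoom_history, dtype=float)
    if len(zooms) < 2:
        return False
    recent = zooms[max(0, len(zooms) - 1 - window):-1]  # preceding frames, ~3 s at an assumed 30 fps
    threshold = recent.mean()                           # magnification threshold (simple moving average)
    return zooms[-1] >= threshold

# Usage: if zoom_in_detected(...) is True, the bleeding-occurrence result for this frame is invalidated.
```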
Steps S51 to S53 in FIG. 10 may be executed at any point after the process of step S11 and before the process of step S18. For example, when they are executed immediately after the process of step S11, the processing of steps S12 to S18 may be skipped if the zoom magnification is determined to be equal to or greater than the magnification threshold. In this case, the processing load can be reduced by not executing the processes for determining the occurrence of bleeding when the zoom magnification is equal to or greater than the magnification threshold.
This modification also provides the same effects as Embodiment 1 described above. Furthermore, in this modification, it is determined whether a zoom-in operation for which the bleeding occurrence determination should be invalidated has been performed, and when such a zoom-in operation has been performed, the determination result of bleeding occurrence based on the images captured before and after the zoom-in operation is invalidated, so erroneous detection of bleeding occurrence caused by a large change in zoom magnification can be suppressed. The configuration of this modification can be combined with the configurations of the modifications described above, and the same effects are obtained even in such combinations. The modifications described as appropriate in Embodiment 1 and the other modifications above can also be applied to this modification.
(Modification 4)
This modification describes an information processing device 10 that, for a bleeding area detected in a captured image, traces the centroid of the bleeding area back over a predetermined period of time and, when the traced trajectory is included in a predetermined region along the periphery of the captured image, treats the bleeding area as one that has entered the imaging range from outside it due to movement of the imaging position or the like and excludes it from the targets used in the bleeding occurrence determination processing. When the imaging position of the endoscope 20 moves, the imaging range also moves, and a bleeding area that was outside the imaging range when the previous frame was captured may come within the imaging range of the current frame. Since the purpose of this embodiment is to accurately detect bleeding that occurs within the imaging range of the endoscope 20, detecting the occurrence of bleeding based on a bleeding area that has entered the imaging range from outside it results in an erroneous detection. Therefore, in this modification, such bleeding areas are excluded from the targets used when detecting the occurrence of bleeding, whereby erroneous detection of bleeding occurrence can be suppressed.
FIG. 12 is a flowchart showing an example of the bleeding detection processing procedure according to Modification 4, and FIG. 13 is an explanatory diagram of the bleeding detection processing according to Modification 4. The process shown in FIG. 12 is the process shown in FIG. 2 with steps S61 to S63 added between steps S13 and S14. Description of the steps that are the same as in FIG. 2 is omitted.
In the information processing device 10 of this modification, the control unit 11 executes the processing of steps S11 to S13 in FIG. 2 and generates a bleeding area image indicating the bleeding area in the current frame (S13). The control unit 11 identifies the centroid of the bleeding area in the generated bleeding area image (S61). Specifically, the control unit 11 calculates the coordinate values of the centroid of the bleeding area in the bleeding area image. The coordinate values of each pixel in the bleeding area image are expressed, for example, with the upper left corner of the captured image as the origin (0, 0), by the number of pixels rightward from the origin and the number of pixels downward from the origin. The coordinate values of the centroid of the bleeding area may be expressed, for example, as the average of the coordinate values of all pixels included in the bleeding area, as the coordinate values of the horizontal midpoint between the leftmost and rightmost pixels of the bleeding area and the vertical midpoint between the uppermost and lowermost pixels of the bleeding area, or as the average of the coordinate values of the pixels on the contour of the bleeding area. FIG. 13 shows an example of a bleeding area image; for the bleeding area shown in FIG. 13, the position indicated by the + mark is identified as the position of the centroid C of the bleeding area.
Based on the bleeding area in the current frame (bleeding area image), the control unit 11 traces the centroid of the bleeding area back to the frame a predetermined time earlier and obtains the trajectory of the centroid of the bleeding area (S62). In the bleeding area image shown in FIG. 13, the centroids of the bleeding area traced back to frames up to a predetermined time earlier are each indicated by a + mark. Note that the coordinate values of the centroid of the bleeding area in the bleeding area image of each frame up to the predetermined time earlier may already have been calculated and stored in the storage unit 12. In this case, the control unit 11 can obtain the trajectory of the centroid of the bleeding area by reading out the coordinate values of the centroid of the bleeding area in each frame stored in the storage unit 12. Alternatively, the control unit 11 may obtain the trajectory of the centroid of the bleeding area by extracting, based on the bleeding area in the bleeding area image of the current frame, the corresponding bleeding area in the preceding frame and calculating the centroid of the extracted bleeding area, repeating this process sequentially back to the frame a predetermined time earlier.
The control unit 11 determines whether the obtained trajectory of the centroid of the bleeding area is included in a predetermined region along the periphery of the captured image (S63). For example, the control unit 11 determines whether the trajectory of the centroid of the bleeding area is included in the peripheral region indicated by hatching in the captured image (bleeding area image) shown in FIG. 13. When it is determined that the trajectory of the centroid of the bleeding area is included in the peripheral region of the captured image (S63: YES), the control unit 11 skips the processing of steps S14 to S18 and proceeds to the process of step S19. When the peripheral region of the captured image includes the trajectory of the centroid of the bleeding area, the bleeding area is considered to have entered the imaging range from outside it due to movement of the imaging position or the like. Therefore, when such a bleeding area is detected, erroneous detection is avoided by not performing the bleeding occurrence determination processing based on that bleeding area. Since the trajectory of the centroid of the bleeding area shown in FIG. 13 is included in the lower peripheral region of the captured image, this bleeding area is determined to have entered the imaging range from outside it due to movement of the imaging position or the like, and the bleeding occurrence determination processing is skipped.
On the other hand, when the control unit 11 determines that the trajectory of the centroid of the bleeding area is not included in the peripheral region of the captured image (S63: NO), it determines that the bleeding area occurred within the imaging range and executes the processing of steps S14 to S18. As a result, for bleeding that has occurred within the imaging range of the endoscope 20, the bleeding occurrence determination processing is performed based on the difference in the bleeding area between chronologically consecutive frames. Note that when the bleeding area image includes a plurality of bleeding areas, the control unit 11 executes the processing of steps S61 to S63 for each bleeding area. In this way, it is determined for each bleeding area whether its centroid has entered from outside the imaging range, and bleeding areas that have entered from outside the imaging range are excluded from the targets of the bleeding occurrence determination processing and are not used in it.
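A minimal sketch of the centroid and peripheral-region check is shown below, assuming a binary mask per region and a hypothetical peripheral band whose width MARGIN is expressed as a fraction of the image size; the band width and function names are assumptions for illustration.

```python
import numpy as np

MARGIN = 0.1  # hypothetical peripheral band width as a fraction of the image size

def centroid(mask):
    """Centroid of a binary bleeding-area mask as the mean of its pixel coordinates."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def entered_from_outside(centroid_track, image_shape, margin=MARGIN):
    """True if any tracked centroid lies in the peripheral band of the image (S63)."""
    h, w = image_shape[:2]
    for c in centroid_track:
        if c is None:
            continue
        x, y = c
        if x < w * margin or x > w * (1 - margin) or y < h * margin or y > h * (1 - margin):
            return True
    return False
```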
This modification also provides the same effects as Embodiment 1 described above. Furthermore, in this modification, a bleeding area that has entered the imaging range from outside it due to movement of the imaging position or the like is not used in the bleeding occurrence determination processing, so erroneous detection of bleeding occurrence can be suppressed. The configuration of this modification can be combined with the configurations of the modifications described above, and the same effects are obtained even in such combinations. The modifications described as appropriate in Embodiment 1 and the other modifications above can also be applied to this modification.
(Modification 5)
This modification describes an information processing device 10 that, when the bleeding area in a captured image decreases sharply due to the generation of mist, does not subject the captured image (frame) at that time to the bleeding occurrence determination processing. In an endoscopic surgery system, mist (fog, steam) is generated when living tissue is incised, resected, dissected, or the like with the treatment instrument 30. When mist is generated, the redness of the bleeding area is diluted, so it is not determined to be a bleeding area, and a situation may occur in which the area of the bleeding region decreases sharply. Therefore, in this modification, when mist is generated, the images captured before and after the generation of the mist are excluded from the targets for determining the presence or absence of bleeding and the bleeding occurrence determination processing is not executed, thereby suppressing erroneous detection of bleeding occurrence.
FIG. 14 is a flowchart showing an example of the bleeding detection processing procedure according to Modification 5, and FIG. 15 is an explanatory diagram of the bleeding detection processing according to Modification 5. The process shown in FIG. 14 is the process shown in FIG. 2 with steps S71 and S72 added between steps S15 and S16. Description of the steps that are the same as in FIG. 2 is omitted.
In the information processing device 10 of this modification, the control unit 11 executes the processing of steps S11 to S15 in FIG. 2. In step S14, by sequentially calculating the difference in the bleeding area between chronologically consecutive frames, the control unit 11 obtains time-series data of the bleeding area differences, for example as shown by the solid line in FIG. 15. In the graph shown in FIG. 15, the horizontal axis indicates the imaging time and the vertical axis indicates the difference in the bleeding area at each imaging time.
Next, for each imaging time, the control unit 11 calculates a threshold used to determine whether mist has occurred (mist determination threshold), based on the moving average of the inter-frame bleeding area differences calculated in step S15 up to that imaging time (S71). Here, the control unit 11 uses the moving averages of the bleeding area differences calculated up to each imaging time as the measured values, calculates the value of -2σ with respect to the mean of the measured values, and sets it as the mist determination threshold at that imaging time. The broken lines in FIG. 15 show the time-series changes of the -2σ and +2σ values with respect to the moving average of the bleeding area differences, and the control unit 11 determines that mist has occurred when the inter-frame difference in the bleeding area falls below the -2σ value with respect to the moving average of the bleeding area differences. Accordingly, the control unit 11 sets each value of the lower (-2σ) time-series data, shown by the broken line in FIG. 15, as the threshold used to determine whether mist has occurred at each imaging time.
The control unit 11 determines whether mist has occurred in the captured image based on the inter-frame difference in the bleeding area calculated in step S14 and the mist determination threshold calculated in step S71 (S72). Specifically, for each imaging time, the control unit 11 determines whether the bleeding area difference calculated in step S14 is less than the set mist determination threshold; if it is less than the mist determination threshold, the control unit 11 determines that mist has occurred, and if it is equal to or greater than the mist determination threshold, it determines that no mist has occurred. That is, in the present embodiment, the presence or absence of mist is determined according to whether the amount of increase (difference) in the bleeding area is less than the mist determination threshold. Note that the presence or absence of mist may also be determined based on, for example, whether the contrast (light-dark ratio) of the entire image has dropped sharply, instead of the color change in the captured image (specifically, the difference in the bleeding area). For example, it may be determined that mist has occurred when the contrast of the entire image has decreased by a predetermined value or more.
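A minimal sketch of this mist judgment is shown below, assuming the history of per-frame area differences is available as a list; the window length and function name are illustrative assumptions, and the -2σ threshold follows the description above.

```python
import numpy as np

def mist_detected(area_diff_history, current_diff, window=30):
    """Mist judgment (S71/S72): flag mist when the current area difference drops below mean - 2*sigma."""
    diffs = np.asarray(area_diff_history, dtype=float)
    if diffs.size < window:
        return False
    # Moving averages of the bleeding-area differences serve as the measured values
    kernel = np.ones(window) / window
    measured = np.convolve(diffs, kernel, mode="valid")
    threshold = measured.mean() - 2.0 * measured.std()  # mist determination threshold (-2 sigma)
    return current_diff < threshold
```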
When it is determined that no mist has occurred (S72: NO), the control unit 11 executes the processing from step S16 onward and performs the bleeding occurrence determination processing. On the other hand, when it is determined that mist has occurred (S72: YES), the control unit 11 skips the processing of steps S16 to S18 and proceeds to the process of step S19. In this way, when mist has occurred, the control unit 11 avoids erroneous detection by not performing the bleeding occurrence determination processing based on the captured image (bleeding area) at that time. In the time-series change in the bleeding area difference shown in FIG. 15, the section indicated by the arrow indicates the time during which it was determined that mist had occurred.
This modification also provides the same effects as Embodiment 1 described above. Furthermore, in this modification, when the generation of mist is detected in a captured image, the bleeding occurrence determination processing is not performed based on the images captured before and after the generation of the mist, so erroneous detection of bleeding occurrence can be suppressed. The configuration of this modification can be combined with the configurations of the modifications described above, and the same effects are obtained even in such combinations. The modifications described as appropriate in Embodiment 1 and the other modifications above can also be applied to this modification.
(Modification 6)
This modification describes an information processing device 10 that, when a bleeding area that was hidden by the treatment instrument 30 or gauze in the captured image becomes exposed as the treatment instrument 30 or gauze moves, excludes that bleeding area from the targets used in the bleeding occurrence determination processing. In endoscopic surgery, hemostasis is performed using the treatment instrument 30 and gauze, and when the treatment instrument 30 and gauze are removed after hemostasis, the bleeding area that was hidden by them becomes exposed, and a situation may occur in which the area of the bleeding region increases greatly. Therefore, in this modification, the bleeding area exposed by the movement of the treatment instrument 30 or gauze is excluded from the targets used when detecting the occurrence of bleeding, thereby suppressing erroneous detection of bleeding occurrence.
In addition to the configuration of Embodiment 1 shown in FIG. 1, the information processing device 10 of this modification stores, in the storage unit 12, a learning model for detecting a predetermined object in an image. The learning model here is a model that has been trained by machine learning so that, when a captured image is input, it outputs the regions of the treatment instrument 30 and gauze in the captured image. Such a learning model may be configured with any object detection algorithm such as a CNN (Convolutional Neural Network), R-CNN (Regions with CNN), Fast R-CNN, Faster R-CNN, Mask R-CNN, SSD (Single Shot MultiBox Detector), or YOLO (You Only Look Once), or may be configured by combining several of these models. The control unit 11 of the information processing device 10 inputs the image captured by the endoscope 20 into the above learning model and can identify the regions of the treatment instrument 30 and gauze in the captured image based on the information output from the learning model. Note that the detection of gauze in the captured image may be performed by pattern matching using a template instead of using a learning model. In this case, a template representing the image features of the gauze surface is stored in the storage unit 12 in advance, and the control unit 11 can detect the presence or absence of gauze in the captured image by determining whether there is a region in the captured image that matches the template.
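As an illustration of the template-matching alternative mentioned above, a minimal sketch using OpenCV's normalized cross-correlation is shown below; the match threshold MATCH_THRESH and the function name are assumptions, and the learned object-detection model itself is not reproduced here.

```python
import cv2

MATCH_THRESH = 0.8  # hypothetical correlation threshold for a gauze match

def gauze_present(frame_bgr, gauze_template_bgr):
    """Detect gauze by template matching (alternative to the learning model)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    tmpl = cv2.cvtColor(gauze_template_bgr, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, tmpl, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_val >= MATCH_THRESH, max_loc  # presence flag and best-match position
```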
FIG. 16 is a flowchart illustrating an example of the bleeding detection processing procedure according to Modification 6, and FIGS. 17A to 17D are explanatory diagrams of the bleeding detection processing according to Modification 6. The process shown in FIG. 16 is the process shown in FIG. 2 with steps S81 to S83 added between steps S13 and S14. Description of the steps that are the same as in FIG. 2 is omitted.
In the information processing device 10 of this modification, the control unit 11 executes the processing of steps S11 to S13 in FIG. 2 and generates a bleeding area image indicating the bleeding area in the current frame (S13). The control unit 11 also inputs the captured image acquired in step S11 into the learning model and detects the treatment instrument 30 and gauze in the captured image based on the information output from the learning model (S81). For example, as shown in FIG. 17A, when a captured image showing three treatment instruments 30 is input into the learning model, the learning model outputs a treatment instrument image indicating the regions of the three treatment instruments 30, as shown in FIG. 17B. The control unit 11 can therefore grasp the region of each treatment instrument 30 based on the treatment instrument image output from the learning model. Note that the left side of FIG. 17B shows the treatment instrument image indicating the regions of the treatment instruments 30 in the previous frame (frame n-1), and the right side shows the treatment instrument image indicating the regions of the treatment instruments 30 in the current frame (frame n).
When the control unit 11 detects the treatment instrument 30 or gauze in the current frame, it identifies, for the detected treatment instrument 30 and gauze, the region associated with their movement from their positions in the previous frame to their positions in the current frame (S82). Specifically, the control unit 11 compares the regions of the treatment instrument 30 and gauze in the previous frame with those in the current frame, and identifies the region that exists only in the previous frame as the movement region of the treatment instrument 30 and gauze. For example, in FIG. 17C, the solid lines indicate the regions of the treatment instrument 30 and gauze in the current frame, and the broken lines indicate their regions in the previous frame. In this case, as shown in FIG. 17D, the control unit 11 identifies the region that exists only in the previous frame as the movement region of the treatment instrument 30 and gauze. Note that when a treatment instrument image indicating the regions of the treatment instrument 30 and gauze in the previous frame has already been generated and stored in the storage unit 12, the control unit 11 can obtain the positions of the treatment instrument 30 and gauze in the previous frame from the storage unit 12. Alternatively, the control unit 11 may obtain the treatment instrument image indicating the regions of the treatment instrument 30 and gauze in the previous frame from the learning model by inputting the captured image of the previous frame into the learning model.
The control unit 11 excludes, from the bleeding area image of the current frame generated in step S13, the pixels of the bleeding area that are included in the identified movement region of the treatment instrument 30 and gauze (S83). In this way, the bleeding area pixels included in the movement region of the treatment instrument 30 and gauze can be excluded from the targets of the bleeding occurrence determination processing. Thereafter, the control unit 11 performs the processing from step S14 onward. Note that in step S14, the control unit 11 calculates the difference between the bleeding area in the previous frame and the bleeding area in the current frame based on the remaining bleeding area after the pixels included in the movement region of the treatment instrument 30 and gauze have been excluded from the bleeding area included in the bleeding area image generated in step S13 (S14). In this way, the bleeding area exposed by the movement of the treatment instrument 30 or gauze can be excluded from the targets of the bleeding occurrence determination processing, and erroneous detection of bleeding occurrence can be suppressed.
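A minimal sketch of this exclusion step is shown below, assuming binary masks for the tool/gauze regions of the previous and current frames; the function name is an assumption, but the set operation follows the description above (the region present only in the previous frame is the movement region).

```python
import numpy as np

def exclude_uncovered_area(bleeding_mask, prev_tool_mask, curr_tool_mask):
    """Drop bleeding pixels uncovered by moving the treatment tool or gauze (S82/S83)."""
    # Region present only in the previous frame's tool/gauze mask = movement region
    moved_region = np.logical_and(prev_tool_mask > 0, curr_tool_mask == 0)
    cleaned = bleeding_mask.copy()
    cleaned[moved_region] = 0  # these pixels are not used for the bleeding judgment
    return cleaned
```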
This modification also provides the same effects as Embodiment 1 described above. Furthermore, in this modification, the bleeding area exposed by the movement of the treatment instrument 30 or gauze is not used in the bleeding occurrence determination processing, so erroneous detection of bleeding occurrence can be suppressed. The configuration of this modification can be combined with the configurations of the modifications described above, and the same effects are obtained even in such combinations. The modifications described as appropriate in Embodiment 1 and the other modifications above can also be applied to this modification.
(Modification 7)
This modification describes an information processing device 10 that detects the movement of each pixel between chronologically consecutive frames and excludes, from the targets used in the bleeding occurrence determination processing, pixels within a bleeding area that have a large amount of movement (displacement). In endoscopic surgery, procedures on the treatment site are sometimes performed while moving organs; for example, when an organ with blood on it is moved, the blood-stained region on the organ also moves, and the region at its new position may be erroneously detected as a new bleeding area. Therefore, in this modification, pixels included in a bleeding area that have a large amount of movement between chronologically consecutive frames are excluded from the targets used when detecting the occurrence of bleeding, thereby suppressing erroneous detection of bleeding occurrence.
FIG. 18 is a flowchart illustrating an example of the bleeding detection processing procedure according to Modification 7, and FIG. 19 is an explanatory diagram of the bleeding detection processing according to Modification 7. The process shown in FIG. 18 is the process shown in FIG. 2 with steps S91 to S93 added between steps S13 and S14. Description of the steps that are the same as in FIG. 2 is omitted.
In the information processing device 10 of this modification, the control unit 11 executes the processing of steps S11 to S13 in FIG. 2 and generates a bleeding area image indicating the bleeding area in the current frame (S13). Meanwhile, the control unit 11 detects the amount of movement of each pixel from the previous frame to the current frame based on the captured image acquired in step S11 and the captured image of the previous frame (S91). Specifically, the control unit 11 calculates, by image processing such as optical flow, a motion vector (amount and direction of motion) indicating the movement of each pixel from the previous frame to the current frame. FIG. 19 is an example showing, for each pixel in the current frame, the motion vector from the previous frame; the direction of each arrow indicates the direction of movement and its length indicates the amount of movement.
Based on the motion vector of each pixel, the control unit 11 identifies pixels with a large amount of movement, specifically pixels whose amount of movement is equal to or greater than a predetermined amount (S92). In this way, pixels that have moved locally within the captured image are identified. Note that the control unit 11 may determine whether the amount of movement is equal to or greater than the predetermined amount only for the pixels included in the bleeding area of the bleeding area image generated in step S13, thereby identifying pixels with a large amount of movement within the bleeding area. The control unit 11 then excludes the pixels with a large amount of movement from the pixels included in the bleeding area of the bleeding area image of the current frame generated in step S13 (S93). As a result, among the pixels included in the bleeding area, pixels with a large amount of movement can be excluded from the bleeding occurrence determination process. Thereafter, the control unit 11 performs the processing from step S14 onward. In step S14, the control unit 11 calculates the difference between the bleeding area in the previous frame and the bleeding area in the current frame based on the bleeding area that remains after the pixels with a large amount of movement have been excluded from the bleeding area included in the bleeding area image generated in step S13 (S14). Consequently, a bleeding area that has moved, for example due to movement of a blood-stained organ, can be excluded from the bleeding occurrence determination process, and erroneous detection of bleeding can be suppressed.
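As a rough illustration of steps S91 to S93, the sketch below uses dense optical flow (OpenCV's Farneback implementation) to estimate per-pixel motion and removes fast-moving pixels from the bleeding mask. The function name, the motion threshold, and the choice of Farneback flow are assumptions; the source only specifies "image processing such as optical flow".

```python
import cv2
import numpy as np

def exclude_moving_pixels(prev_frame_bgr, curr_frame_bgr, bleeding_mask, motion_threshold=3.0):
    """Remove pixels with large inter-frame motion from the current bleeding mask.

    bleeding_mask is a binary (0/1) array for the current frame; motion_threshold
    (pixels per frame) is an assumed tuning value, not taken from the source.
    """
    prev_gray = cv2.cvtColor(prev_frame_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame_bgr, cv2.COLOR_BGR2GRAY)

    # S91: dense optical flow gives one (dx, dy) motion vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)

    # S92-S93: pixels whose movement exceeds the threshold are dropped from the mask.
    moved = magnitude >= motion_threshold
    return np.where(moved, 0, bleeding_mask).astype(bleeding_mask.dtype)
```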
Also in this modification, the same effects as in the first embodiment described above can be obtained. In addition, erroneous detection of bleeding can be suppressed because a bleeding area that has moved, for example together with a blood-stained organ, is not used in the process of determining whether bleeding has occurred. The configuration of this modification can be combined with the configurations of the modifications described above, and the same effects are obtained even when it is so combined. The modifications described as appropriate in the first embodiment and in the other modifications can also be applied to this modification.
(Modification 8)
In this modification, the information processing device 10 excludes, from the areas used in the process of determining whether bleeding has occurred, areas of the image captured by the endoscope 20 in which motion blur or defocus has occurred. Because the endoscope 20 captures images with an image sensor, blur or defocus can occur, and it is difficult to accurately detect a bleeding area in a blurred or defocused region. Therefore, in this modification, regions of the captured image in which blur or defocus has occurred are identified and excluded from the bleeding area in the captured image, that is, from the pixels used to detect the occurrence of bleeding, thereby suppressing erroneous detection of bleeding. Since a blurred or defocused region contains no high-frequency components, in this modification high-frequency components are extracted from the captured image, and regions of the captured image that contain no high-frequency components are identified as low-frequency regions, that is, as regions in which blur or defocus has occurred.
FIG. 20 is a flowchart showing an example of the bleeding detection processing procedure of Modification 8, and FIGS. 21A to 21E are explanatory diagrams of the bleeding detection processing of Modification 8. The processing shown in FIG. 20 is the processing shown in FIG. 2 with steps S101 to S104 added between steps S13 and S14. Description of the steps that are the same as in FIG. 2 is omitted.
In the information processing device 10 of this modification, the control unit 11 executes the processing of steps S11 to S13 in FIG. 2 and generates a bleeding area image indicating the bleeding area in the current frame (S13). In parallel, the control unit 11 performs smoothing processing using a smoothing filter such as a Gaussian filter on the captured image acquired in step S11 (S101). As a result, a smoothed image as shown in FIG. 21B is generated from, for example, the captured image shown in FIG. 21A. Next, the control unit 11 subtracts the smoothed image from the captured image to extract the high-frequency components of the captured image (S102). Specifically, the control unit 11 subtracts the pixel value of each pixel of the smoothed image from the pixel value of the corresponding pixel of the captured image, and generates an image containing only high-frequency components as shown in FIG. 21C.
The control unit 11 identifies low-frequency regions in the captured image based on the image containing only high-frequency components (S103). For example, the control unit 11 binarizes the image containing only high-frequency components to generate a binary image as shown in FIG. 21D. This yields a binary image in which high-frequency components are converted to pixels with a pixel value of 1 and low-frequency components are converted to pixels with a pixel value of 0. The control unit 11 then divides the binary image into cells of, for example, 8 × 8 pixels or 16 × 16 pixels; if the proportion of pixels with a pixel value of 1 (a pixel value indicating a high-frequency component) in a cell is equal to or greater than a predetermined proportion (for example, 50% to 80%), the pixel values of all pixels in that cell are converted to 1, and if the proportion is less than the predetermined proportion, the pixel values of all pixels in that cell are converted to 0, generating an image as shown in FIG. 21E. Based on the image shown in FIG. 21E, the control unit 11 identifies regions without high-frequency components as low-frequency regions of the captured image. Note that a genuine bleeding area may also contain no high-frequency components, so the control unit 11 considers the distribution of the low-frequency regions in the image shown in FIG. 21E to determine whether a region is a genuine bleeding area or a region in which blur or defocus has occurred. For example, the control unit 11 determines whether the outline of a low-frequency region in the image shown in FIG. 21E is blurred; if it is not blurred, the region is determined to be a genuine bleeding area, and if it is blurred, the region is determined to be one in which blur or defocus has occurred. In the image shown in FIG. 21E, the region enclosed by the broken line is thus identified as a low-frequency region in which blur or defocus has occurred.
The control unit 11 excludes the low-frequency regions identified in step S103 from the bleeding area in the bleeding area image of the current frame generated in step S13 (S104). As a result, among the pixels included in the bleeding area, pixels in regions where blur or defocus has occurred can be excluded from the bleeding occurrence determination process. Thereafter, the control unit 11 performs the processing from step S14 onward. In step S14, the control unit 11 calculates the difference between the bleeding area in the previous frame and the bleeding area in the current frame based on the bleeding area that remains after the blurred or defocused regions have been excluded from the bleeding area included in the bleeding area image generated in step S13 (S14). Consequently, regions of the bleeding area in which blur or defocus has occurred can be excluded from the bleeding occurrence determination process, and erroneous detection of bleeding can be suppressed.
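A minimal sketch of steps S101 to S104 follows, assuming a Gaussian blur as the smoothing filter, a fixed binarization threshold, and 16-pixel cells; these values and the helper name are illustrative, and the outline-based check that separates genuine bleeding areas from blurred regions is omitted for brevity.

```python
import cv2
import numpy as np

def exclude_blurred_regions(frame_bgr, bleeding_mask, cell=16, high_freq_ratio=0.5):
    """Drop blurred or defocused cells from the bleeding mask.

    cell size and high_freq_ratio are assumed tuning values (the text suggests
    8- or 16-pixel cells and a 50% to 80% proportion).
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # S101-S102: subtract a Gaussian-smoothed copy to keep only high-frequency components.
    smoothed = cv2.GaussianBlur(gray, (11, 11), 0)
    high_freq = np.abs(gray - smoothed)

    # S103: binarize, then mark a cell as "sharp" when enough of its pixels carry high frequency.
    binary = (high_freq > 5.0).astype(np.uint8)        # binarization threshold is an assumption
    h, w = binary.shape
    sharp = np.zeros_like(binary)
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            block = binary[y:y + cell, x:x + cell]
            if block.mean() >= high_freq_ratio:
                sharp[y:y + cell, x:x + cell] = 1

    # S104: bleeding pixels inside non-sharp (low-frequency) cells are excluded.
    return np.where(sharp == 1, bleeding_mask, 0).astype(bleeding_mask.dtype)
```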
Also in this modification, the same effects as in the first embodiment described above can be obtained. In addition, erroneous detection of bleeding can be suppressed because a bleeding area in which blur or defocus has occurred is not used in the process of determining whether bleeding has occurred. The configuration of this modification can be combined with the configurations of the modifications described above, and the same effects are obtained even when it is so combined. The modifications described as appropriate in the first embodiment and in the other modifications can also be applied to this modification.
In Modifications 1 to 8 described above, when the presence or absence of bleeding is determined based on the difference in the bleeding area between chronologically consecutive frames, the bleeding area in each frame can be detected more accurately and the difference in the bleeding area between frames can be calculated more accurately. In the information processing device 10 of the first embodiment, however, the modifications for accurately determining whether bleeding has occurred are not limited to Modifications 1 to 8. For example, in the information processing device 10 of the first embodiment, a red region such as an organ or a blood pool may be erroneously detected as a bleeding area. To suppress such erroneous detection, whether a region is a genuine bleeding area or a non-bleeding area such as an organ or a blood pool may be determined, for example, from the size or shape of the bleeding area detected from the captured image. Specifically, in the processing shown in FIG. 2, after the processing of step S13 the control unit 11 may determine, based on the size or shape of each bleeding area in the bleeding area image generated in step S13, whether that bleeding area is a genuine bleeding area or a non-bleeding area. For example, a bleeding area tends to spread outward from a bleeding point, so immediately after bleeding occurs its area is presumably small and it is presumably not scattered over a wide range. Therefore, when a bleeding area is larger than a predetermined size at the time it is first detected, when the number of bleeding areas exceeds a predetermined number, when a plurality of bleeding areas are spread over a wide range (for example, the entire image), or when the shape of a bleeding area is not simple, the region is highly likely not to be a bleeding area. Accordingly, the control unit 11 determines, for each bleeding area in the bleeding area image, whether it is in any of the states described above, and determines that a bleeding area that is in none of these states is a genuine bleeding area. The control unit 11 then executes the processing from step S14 onward for the bleeding areas determined to be genuine bleeding areas. That is, the control unit 11 determines that a bleeding area in any of the states described above is a non-bleeding area and excludes it from the bleeding occurrence determination. As a result, genuine bleeding areas can be identified among the red regions in the captured image, so erroneous detection of bleeding areas is suppressed, and consequently erroneous detection of bleeding occurrence is suppressed. Note that, in this embodiment, the configurations of the modifications described above can be combined as appropriate as needed, which makes it possible to further improve the accuracy of determining the occurrence of bleeding.
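One way to realize the size and count heuristic described above is a connected-component pass over the bleeding area image. The sketch below assumes OpenCV connected components; max_area and max_regions are hypothetical thresholds, and the shape and spread checks mentioned in the text are left out.

```python
import cv2
import numpy as np

def keep_plausible_bleeding_regions(bleeding_mask, max_area=5000, max_regions=5):
    """Discard red regions unlikely to be fresh bleeding (size/number heuristic sketch)."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(bleeding_mask.astype(np.uint8))
    filtered = np.zeros_like(bleeding_mask)

    # Label 0 is the background; keep only components of plausible size.
    kept = [i for i in range(1, num) if stats[i, cv2.CC_STAT_AREA] <= max_area]
    if len(kept) > max_regions:        # too many regions at once -> treat as non-bleeding
        return filtered
    for i in kept:
        filtered[labels == i] = 1
    return filtered
```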
(Embodiment 2)
An information processing device 10 that detects a bleeding area in a captured image using a learning model will be described. The information processing device of this embodiment has the same configuration as the information processing device 10 of the first embodiment, so a description of the configuration is omitted. The information processing device 10 of this embodiment stores, in the storage unit 12, a learning model 12M (see FIG. 22) that has been trained on training data, for example by machine learning. The learning model 12M of this embodiment is a trained model that has been trained so that, when the pixel value of a pixel of an image captured by the endoscope 20 is input, it outputs information (information on the bleeding area) indicating whether that pixel is a pixel included in a bleeding area (a bleeding pixel) or a pixel included in a non-bleeding area (a non-bleeding pixel). The learning model 12M is expected to be used as a program module that functions as part of artificial intelligence software. The learning model 12M performs a predetermined computation on input values and outputs the computation result, and data such as the coefficients and thresholds of the functions that define this computation are stored in the storage unit 12 as the learning model 12M.
FIG. 22 is an explanatory diagram showing a configuration example of the learning model 12M. The learning model 12M shown in FIG. 22 has been trained to take as input the R-component pixel value (R component value), the G-component pixel value (G component value), and the B-component pixel value (B component value) of a pixel of an image captured by the endoscope 20, to perform a computation that classifies, based on the input RGB component values, whether the pixel is a pixel included in a bleeding area or a pixel included in a non-bleeding area, and to output the computation result. The learning model 12M may be any function that outputs, from the RGB component values of a single pixel, the probability that the pixel is included in a bleeding area, and can be configured using an algorithm such as logistic regression or linear regression, or a neural network.
The learning model 12M has an input layer into which the RGB component values of a single pixel are input, an intermediate layer that classifies, based on the input RGB component values, whether the pixel is a pixel included in a bleeding area or a pixel included in a non-bleeding area, and an output layer that outputs, based on the computation result of the intermediate layer, information indicating whether the pixel is a pixel included in a bleeding area or a pixel included in a non-bleeding area. The intermediate layer calculates output values from the RGB component values input via the input layer using various functions, thresholds, and the like. The output layer has two output nodes, one associated with the bleeding area and one with the non-bleeding area; the output nodes output the probability (confidence) that the pixel should be classified as a pixel included in the bleeding area and the probability (confidence) that it should be classified as a pixel included in the non-bleeding area, respectively. The output value from each output node of the output layer is, for example, a value between 0 and 1, and the sum of the probabilities output from the output nodes is 1.0 (100%).
With the configuration described above, when the RGB component values of a pixel in an image are input, the learning model 12M of this embodiment outputs output values (confidences) indicating whether the pixel is a pixel included in a bleeding area or a pixel included in a non-bleeding area. The information processing device 10 identifies the output node of the learning model 12M that output the larger of the output values (confidences), and classifies the pixel with the input pixel values as a pixel included in a bleeding area or a pixel included in a non-bleeding area depending on whether the identified output node is associated with the bleeding area or with the non-bleeding area. Instead of having a plurality of output nodes that output the probability that the pixel should be classified as a pixel included in a bleeding area and the probability that it should be classified as a pixel included in a non-bleeding area, the output layer of the learning model 12M may have a single output node that outputs information indicating the class (bleeding area or non-bleeding area) with the higher classification probability (confidence).
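As one hedged illustration of the model structure just described (a three-value RGB input, an intermediate layer, and two softmax output nodes whose confidences sum to 1), the following PyTorch sketch could serve as learning model 12M; the layer sizes and class name are assumptions, and a plain logistic regression would equally fit the description.

```python
import torch
import torch.nn as nn

class BleedingPixelClassifier(nn.Module):
    """Pixel-wise classifier: (R, G, B) in, [p_bleeding, p_non_bleeding] out."""

    def __init__(self, hidden=16):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(3, hidden), nn.ReLU())  # intermediate layer
        self.out = nn.Linear(hidden, 2)                               # output nodes 0 and 1

    def forward(self, rgb):
        # Softmax makes the two confidences sum to 1.0, as described above.
        return torch.softmax(self.out(self.hidden(rgb)), dim=-1)

# Usage: classify one pixel from its normalized RGB component values.
model = BleedingPixelClassifier()
probs = model(torch.tensor([[0.8, 0.2, 0.1]]))        # shape (1, 2)
is_bleeding = probs.argmax(dim=-1).item() == 0         # node 0 = bleeding area (assumed coding)
```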
The learning model 12M can be generated by machine learning using training data that includes RGB component values (pixel values) for training and information (a correct label) indicating whether the corresponding pixel is a pixel included in a bleeding area or a pixel included in a non-bleeding area. The training data is generated from annotation images in which an expert such as a doctor has assigned a bleeding area or a non-bleeding area to each pixel of an image captured by the endoscope 20. Specifically, the training data is generated by assigning a correct label indicating a bleeding area to the RGB component values of pixels to which a bleeding area has been assigned in the annotation image, and by assigning a correct label indicating a non-bleeding area to the RGB component values of pixels to which a non-bleeding area has been assigned.
The learning model 12M is trained so that, when the RGB component values included in the training data are input, the output value from the output node corresponding to the correct label (bleeding area or non-bleeding area) included in the training data approaches 1 and the output values from the other output nodes approach 0. In the learning process, the learning model 12M performs the computation of the intermediate layer based on the input RGB component values and calculates the output value from each output node. The learning model 12M compares the calculated output value of each output node with the value corresponding to the correct label (1 for the output node corresponding to the correct label, 0 for the other output nodes), and optimizes the parameters used in the computation of the intermediate layer so that the two approach each other. These parameters include the weights (connection coefficients) between nodes in the intermediate layer. The parameter optimization method is not particularly limited; error backpropagation, steepest descent, and the like can be used. In this way, a learning model 12M is obtained that, when RGB pixel values are input, predicts whether the pixel is a pixel in a bleeding area or a pixel in a non-bleeding area and outputs the prediction result.
The training of the learning model 12M may be performed by another learning device. A trained learning model 12M generated by training on another learning device is downloaded from the learning device to the information processing device 10, for example via the Internet or via the portable storage medium 10a, and is stored in the storage unit 12. The learning model 12M is not limited to the configuration shown in FIG. 22. For example, in addition to the R component value, G component value, and B component value of each pixel of the captured image, feature values such as the value obtained by subtracting the G component value from the R component value (R − G), the value obtained by dividing the R component value by the G component value (R/G), and the value obtained by dividing the R component value by the B component value (R/B) may be input, or a combination of two or more of these feature values may be input. Furthermore, instead of a configuration in which the feature values of a single pixel of the captured image are input, the learning model 12M may be configured so that the feature values of the target pixel and of pixels in its neighborhood are input. In this case, it is possible to classify whether a region is a bleeding area or a non-bleeding area from the pixel values within a small region including the target pixel. With such a configuration, the learning model 12M may be configured using an algorithm such as an SVM (support vector machine), a decision tree, or a random forest, or may be configured by combining a plurality of algorithms.
The processing for generating the learning model 12M is described below. FIG. 23 is a flowchart showing an example of the procedure for generating the learning model 12M. The following processing is executed by the control unit 11 of the information processing device 10 in accordance with the program 12P stored in the storage unit 12, but it may be performed by another learning device. In the following processing, the control unit 11 first generates training data based on annotation images and then performs the learning processing of the learning model 12M using the generated training data. The annotation images are assumed to be stored in advance in, for example, the storage unit 12.
The control unit 11 of the information processing device 10 reads one annotation image from the storage unit 12 (S111). The control unit 11 extracts the pixel value of one pixel in the annotation image (S112) and determines whether the extracted pixel is a pixel to which a bleeding area has been assigned (S113). If it determines that the pixel is a pixel to which a bleeding area has been assigned (S113: YES), the control unit 11 generates training data by assigning a correct label indicating a bleeding area (a bleeding label) to the extracted pixel value (S114). If it determines that the pixel is not a pixel to which a bleeding area has been assigned (S113: NO), the control unit 11 generates training data by assigning a correct label indicating a non-bleeding area (a non-bleeding label) to the extracted pixel value (S115). The control unit 11 stores the generated training data in, for example, a training DB (not shown) prepared in the storage unit 12 (S116).
The control unit 11 determines whether, among the pixels in the annotation image read in step S111, there are pixels that have not yet been used in the training data generation processing (unprocessed pixels) (S117). If it determines that there are unprocessed pixels (S117: YES), the control unit 11 returns to the processing of step S112 and performs the processing of steps S112 to S116 for the unprocessed pixels. In this way, training data is accumulated in which the pixel value of each pixel in the annotation image is given a correct label (information on the bleeding area) indicating whether the pixel is a pixel in a bleeding area or a pixel in a non-bleeding area.
If it determines that there are no unprocessed pixels (S117: NO), the control unit 11 determines whether, among the annotation images stored in the storage unit 12, there are images that have not yet been used in the training data generation processing (unprocessed images) (S118). If it determines that there is an unprocessed image (S118: YES), the control unit 11 returns to the processing of step S111 and performs the processing of steps S111 to S117 for the unprocessed annotation image. In this way, training data used for training the learning model 12M can be generated from the prepared annotation images and accumulated in the training DB.
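A compact sketch of the data-generation loop of steps S111 to S118 might look as follows, assuming the annotation is available as a binary mask aligned with the captured image; the function name and the 0/1 label coding are illustrative.

```python
import numpy as np

def build_training_data(captured_rgb, annotation_mask):
    """Pair each pixel's RGB component values with its correct label.

    captured_rgb: (H, W, 3) image; annotation_mask: (H, W) array where 1 marks
    pixels annotated as the bleeding area and 0 marks the non-bleeding area.
    """
    features = captured_rgb.reshape(-1, 3).astype(np.float32) / 255.0   # one row per pixel (S112)
    # Label coding follows the text: 0 = bleeding pixel, 1 = non-bleeding pixel (S114/S115).
    labels = np.where(annotation_mask.reshape(-1) == 1, 0, 1).astype(np.int64)
    return features, labels
```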
If it determines that there are no unprocessed images (S118: NO), the control unit 11 trains the learning model 12M using the training data accumulated in the training DB as described above. Specifically, the control unit 11 reads one piece of the training data accumulated in the training DB through the processing described above (S119). The control unit 11 then performs the learning processing of the learning model 12M based on the training data (S120). Here, the control unit 11 inputs the RGB component values included in the training data into the learning model 12M and acquires the output values that the learning model 12M outputs in response. The learning model 12M performs a computation based on the input RGB component values and calculates the output value from each output node. The control unit 11 compares the output value of each output node output from the learning model 12M with the value corresponding to the correct label included in the training data (1 for the output node corresponding to the correct label, 0 for the other output nodes), and trains the learning model 12M so that the two approach each other. In the learning processing, the learning model 12M optimizes the parameters used in the computation of the intermediate layer. For example, the control unit 11 optimizes parameters such as the weights (connection coefficients) between nodes in the intermediate layer using error backpropagation, which sequentially updates the parameters from the output layer of the learning model 12M toward the input layer.
The control unit 11 determines whether, among the training data stored in the training DB, there is unprocessed training data that has not yet been subjected to the learning processing (S121). If it determines that there is unprocessed training data (S121: YES), the control unit 11 returns to the processing of step S119 and performs the processing of steps S119 to S120 for the unprocessed training data. If it determines that there is no unprocessed training data (S121: NO), the control unit 11 ends the series of processes. Through the learning processing described above, a learning model 12M is generated that, when RGB pixel values are input, outputs the likelihood that the pixel with the input pixel values is a pixel in a bleeding area (the output value from output node 0) and the likelihood that it is a pixel in a non-bleeding area (the output value from output node 1).
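A hedged sketch of the training loop of steps S119 to S121, reusing the classifier and data-builder sketches above; the optimizer, learning rate, and epoch count are assumptions, with standard backpropagation standing in for the parameter update described in the text.

```python
import torch

def train_model(model, features, labels, epochs=10, lr=1e-2):
    """Fit the classifier so that the output node matching the correct label approaches 1."""
    x = torch.from_numpy(features)
    y = torch.from_numpy(labels)                        # 0 = bleeding, 1 = non-bleeding
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    for _ in range(epochs):
        optimizer.zero_grad()
        probs = model(x)                                # softmax confidences per pixel
        # Negative log-likelihood of the correct label drives the correct node toward 1.
        loss = torch.nn.functional.nll_loss(torch.log(probs + 1e-8), y)
        loss.backward()                                 # error backpropagation
        optimizer.step()
    return model
```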
The learning model 12M can be further optimized by repeatedly performing the learning processing using training data as described above. Moreover, by retraining an already trained learning model 12M with the learning processing described above, a learning model 12M with further improved estimation accuracy can be generated. Note that, in the processing described above, the training data generation processing of steps S111 to S118 and the generation processing of the learning model 12M of steps S119 to S121 may be performed by separate devices.
The processing for detecting bleeding from an image captured by the endoscope 20 using the learning model 12M generated by the processing described above is described below. FIG. 24 is a flowchart showing an example of the bleeding detection processing procedure of Embodiment 2, and FIGS. 25A to 26 are explanatory diagrams of the bleeding detection processing of Embodiment 2. The processing shown in FIG. 24 is the processing shown in FIG. 2 with step S131 added in place of step S12. Description of the steps that are the same as in FIG. 2 is omitted.
In the information processing device 10 of this embodiment, when the control unit 11 acquires from the endoscope control device 21 a captured image captured by the endoscope 20 (S11), it inputs the pixel values (RGB component values) of the pixels of the captured image into the learning model 12M and classifies each pixel as a bleeding area or a non-bleeding area based on the output values from the learning model 12M (S131). Specifically, the control unit 11 identifies the larger of the output values from the learning model 12M and takes the class (bleeding area or non-bleeding area) associated with the output node that produced it as the class of the pixel. Based on the classification result of each pixel, the control unit 11 (bleeding area acquisition unit) generates a bleeding area image indicating the bleeding area in the captured image (S13). Here, the control unit 11 generates the bleeding area image by assigning white (for example 1) to pixels classified as the bleeding area and black (for example 0) to pixels classified as the non-bleeding area. For example, based on the captured image shown in FIG. 25A, the control unit 11 generates a bleeding area image as shown in FIG. 25B. The control unit 11 performs the above-described processing of steps S11, S131, and S13 on the captured image of each frame sequentially transmitted from the endoscope control device 21, and generates a bleeding area image from each captured image.
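Carrying the same sketches forward, steps S131 and S13 could look roughly like the following: every pixel of the frame is classified with the model and the argmax over the two output nodes fills a binary bleeding area image. The normalization and the helper name are assumptions.

```python
import numpy as np
import torch

def predict_bleeding_mask(model, frame_rgb):
    """Classify each pixel and assemble a bleeding area image (1 = bleeding, 0 = non-bleeding)."""
    h, w, _ = frame_rgb.shape
    pixels = torch.from_numpy(frame_rgb.reshape(-1, 3).astype(np.float32) / 255.0)
    with torch.no_grad():
        probs = model(pixels)                            # (H*W, 2) confidences
    mask = (probs.argmax(dim=-1) == 0).numpy().astype(np.uint8)   # node 0 = bleeding area
    return mask.reshape(h, w)
```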
Thereafter, the control unit 11 performs the processing from step S14 onward. In this embodiment, in step S16 the control unit 11 may take the moving average of the bleeding-area differences calculated up to each imaging time as the measured value, calculate the value of +2σ with respect to the mean of the measured values, and set it as the determination threshold for determining whether bleeding has occurred. The solid line in FIG. 26 shows the time-series change of the inter-frame bleeding-area difference calculated in step S14, and the broken lines in FIG. 26 show the time-series change of the values of −2σ and +2σ with respect to the moving average of the bleeding-area difference. In this embodiment, the control unit 11 (determination unit) determines that bleeding has occurred when the inter-frame bleeding-area difference becomes equal to or greater than the value of +2σ with respect to the moving average of the bleeding-area difference. Accordingly, the control unit 11 sets each value of the upper (+2σ) time-series data indicated by the broken line in FIG. 26 as the threshold used when determining whether bleeding has occurred at each imaging time. In this way, the presence or absence of bleeding is determined based on bleeding areas accurately detected from each captured image using the learning model 12M. Also in this embodiment, the value of the moving average of the bleeding-area difference +4σ may be used as the determination threshold instead of the moving average +2σ. With such a configuration, erroneous detection of bleeding is suppressed and the detection accuracy of bleeding occurrence is improved in this embodiment as well.
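The threshold rule described above can be sketched as follows, assuming a fixed-length moving window; the window length and the 2σ factor (or 4σ, as also mentioned) are tuning values.

```python
import numpy as np

def bleeding_occurred(area_diffs, window=30, sigma_factor=2.0):
    """Flag bleeding when the latest inter-frame bleeding-area difference exceeds
    the moving average of recent differences plus sigma_factor standard deviations."""
    diffs = np.asarray(area_diffs, dtype=np.float64)
    recent = diffs[-window:]                             # measured values up to this imaging time
    threshold = recent.mean() + sigma_factor * recent.std()
    return diffs[-1] >= threshold, threshold
```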
The endoscopic surgery system of this embodiment performs the same processing as the information processing device 10 of the first embodiment except that the learning model 12M is used to detect the bleeding area in the captured image, so the same effects as in the first embodiment can be obtained. In addition, in this embodiment the learning model 12M classifies each pixel in the captured image into the bleeding area or the non-bleeding area according to the characteristics of that pixel. Therefore, a user of the learning model 12M can detect bleeding areas in captured images with high accuracy by generating annotation images from captured images and training the learning model 12M on them. Accordingly, in this embodiment as well, the presence or absence of bleeding can be detected with high accuracy based on images captured by the endoscope 20.
Modifications 1 to 8 described in the first embodiment can also be applied to this embodiment. Accordingly, the information processing device 10 of this embodiment can accurately detect the bleeding area in the captured image using the learning model 12M, and by applying one or more of the configurations of Modifications 1 to 8, can detect the bleeding area in each frame more accurately and calculate the difference in the bleeding area between frames more accurately. The modifications described as appropriate in the first embodiment and in the respective modifications can also be applied to this embodiment.
The learning model 12M of this embodiment is configured to classify, from the pixel value of each pixel in the captured image, whether the pixel is a pixel in a bleeding area or a pixel in a non-bleeding area, but it is not limited to such a configuration. For example, the learning model 12M may be configured to detect a bleeding area in the captured image when the captured image itself is input. In this case, the learning model 12M can be configured with an object detection algorithm such as CNN, R-CNN, Fast R-CNN, SSD, or YOLO. The learning model 12M may also be configured with an algorithm that implements semantic segmentation, such as SegNet, FCN (Fully Convolutional Network), or U-Net.
(Embodiment 3)
An information processing device 10 according to a modification of Embodiment 2 that detects a bleeding area in a captured image using the learning model 12M will be described. The information processing device of this embodiment has the same configuration as the information processing device 10 of Embodiment 2, so a description of the configuration is omitted. After detecting the bleeding area in the captured image using the learning model 12M, the information processing device 10 of this embodiment generates an image showing the difference (increase) in the bleeding area between chronologically consecutive frames. Here, the information processing device 10 generates an image showing the portion of the bleeding area in the current frame that was not a bleeding area in the previous frame. The information processing device 10 then divides the image showing the difference (increase) in the bleeding area into cells of, for example, 8 × 8 pixels or 16 × 16 pixels, and counts, in each cell, the number of pixels with a pixel value of 1 (a pixel value indicating the bleeding area; bleeding pixels). The information processing device 10 then determines, for each cell, whether the cell is a bleeding area according to the proportion of bleeding pixels to the number of pixels in the cell, and determines from the result whether bleeding has occurred.
FIG. 27 is a flowchart showing an example of the bleeding detection processing procedure of Embodiment 3, and FIGS. 28A to 28C are explanatory diagrams of the bleeding detection processing of Embodiment 3. The processing shown in FIG. 27 is the processing shown in FIG. 24 with steps S141 to S144 added in place of steps S14 to S16. Description of the steps that are the same as in FIGS. 2 and 24 is omitted.
In the information processing device 10 of this embodiment, the control unit 11 executes the processing of steps S11, S131, and S13 in FIG. 24 and generates a bleeding area image indicating the bleeding area in the current frame (S13). The control unit 11 (extraction unit) then identifies, within the bleeding area in the current frame (the frame to be processed), the region that was not a bleeding area in the previous frame (the increased bleeding area), and generates an image showing the increase (difference) in the bleeding area between chronologically consecutive frames (S141). For example, as shown in FIG. 28A, the control unit 11 generates an image in which the pixels of the increased bleeding area have a pixel value of 1 and the other pixels have a pixel value of 0.
Next, the control unit 11 (pixel counting unit) divides the image showing the increase in the bleeding area into cells of, for example, 8 × 8 pixels or 16 × 16 pixels, and counts, for each cell, the number of pixels (bleeding pixels) with a pixel value of 1 (a pixel value indicating the bleeding area) (S142). FIG. 28B shows the image of FIG. 28A with the pixel values of all pixels in each cell converted to a pixel value corresponding to the number of bleeding pixels in that cell; whiter cells contain more bleeding pixels, and the number of bleeding pixels in each cell is represented by a monochrome gradation.
Next, in the image showing the increase in the bleeding area, the control unit 11 calculates, for each cell, the proportion of bleeding pixels to the number of pixels in that cell (S143). Based on the proportion of bleeding pixels calculated for each cell, the control unit 11 (determination unit) classifies each cell as a bleeding area or a non-bleeding area (S144). Here, for each imaging time, the control unit 11 determines whether the proportion of bleeding pixels in each cell calculated in step S143 is equal to or greater than a predetermined proportion (for example, 50% to 80%); if it is equal to or greater than the predetermined proportion, the cell is classified as a bleeding area, and if it is less than the predetermined proportion, the cell is classified as a non-bleeding area. In this way, the control unit 11 generates a binary image as shown in FIG. 28C. The control unit 11 then determines whether bleeding has occurred based on the classification result of step S144 (S17). In this embodiment, the control unit 11 determines whether there is a cluster of cells classified as the bleeding area in step S144; if there is, it determines that bleeding has occurred, and if there is not, it determines that bleeding has not occurred. For example, the control unit 11 counts the number of adjacent cells among the cells classified as the bleeding area, and when the count is equal to or greater than a predetermined number, the region containing those cells (the cluster of cells) is taken as a bleeding area. Therefore, in the image shown in FIG. 28C, cells classified as the bleeding area but having fewer than the predetermined number of adjacent bleeding cells are treated as non-bleeding areas. Thus, in this embodiment, whether each cell is a bleeding area is determined on a cell-by-cell basis, and when a predetermined number or more of cells determined to be the bleeding area are adjacent to one another, that region is identified as a bleeding area; based on the identified result, the presence or absence of bleeding is detected. Thereafter, the control unit 11 performs the processing from step S18 onward.
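A sketch of steps S142 to S144 and S17 might look as follows: each cell is graded by its share of newly bleeding pixels, and bleeding is reported only when enough flagged cells are adjacent. The cell size, ratio, and adjacency count are assumed tuning values (the text suggests 8- or 16-pixel cells and a 50% to 80% proportion).

```python
import numpy as np

def detect_bleeding_by_cells(increase_mask, cell=16, ratio=0.5, min_adjacent=2):
    """Judge bleeding from the bleeding-area increase image on a cell basis."""
    h, w = increase_mask.shape
    rows, cols = h // cell, w // cell
    cell_map = np.zeros((rows, cols), dtype=np.uint8)

    # S142-S144: a cell counts as a bleeding cell when enough of its pixels are bleeding pixels.
    for r in range(rows):
        for c in range(cols):
            block = increase_mask[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            cell_map[r, c] = 1 if block.mean() >= ratio else 0

    # S17: report bleeding only when a flagged cell has enough flagged neighbours (a cluster).
    for r in range(rows):
        for c in range(cols):
            if cell_map[r, c]:
                neighbours = cell_map[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2].sum() - 1
                if neighbours >= min_adjacent:
                    return True
    return False
```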
In this embodiment, the number of pixels that have changed from non-bleeding pixels to bleeding pixels is counted for each cell of the captured image, and cells in which this number of bleeding pixels reaches or exceeds a predetermined proportion are identified as bleeding areas; otherwise, the processing is the same as that performed by the information processing device 10 of Embodiment 2, so the same effects as in Embodiment 2 can be obtained. In addition, since each cell is determined to be a bleeding area or a non-bleeding area, a cell in which pixels determined to be the bleeding area are concentrated can be decided to be a bleeding area, which allows the bleeding area to be detected with high accuracy and suppresses erroneous detection of bleeding areas.
Also in this embodiment, one or more of Modifications 1 to 8 described in the first embodiment can be applied, and the same effects are obtained by applying the configurations of Modifications 1 to 8. The modifications described as appropriate in the embodiments and modifications above can also be applied to this embodiment.
(Embodiment 4)
In the first embodiment described above, a bleeding area in a captured image is identified based on a difference image of the color components of the captured image (for example, the difference between the R component and the G component, or between the R component and the B component), and the occurrence of bleeding is determined based on the identified bleeding area. In Embodiment 2, a bleeding area in a captured image is identified using the learning model 12M, and the occurrence of bleeding is determined based on the identified bleeding area. In Embodiment 3, after a bleeding area in a captured image is identified using the learning model 12M, each cell is classified as a bleeding area or a non-bleeding area according to the proportion of bleeding pixels, a bleeding area in the captured image is identified based on the cell-by-cell classification result, and the occurrence of bleeding is determined based on the identified bleeding area. In this embodiment, an information processing device 10 will be described that can execute the processing of Embodiment 1 (first determination process), the processing of Embodiment 2 (second determination process), and the processing of Embodiment 3 (third determination process), executes at least one of these three determination processes, and identifies the presence or absence of bleeding based on the results of the executed determination processes.
FIG. 29 is a flowchart showing an example of the bleeding detection processing procedure of Embodiment 4. The processing shown in FIG. 29 is the processing shown in FIG. 2 with steps S151 to S153 added in place of steps S11 to S16. Description of the steps that are the same as in FIG. 2 is omitted.
In the information processing device 10 of this embodiment, the control unit 11 selects the determination process for determining the occurrence of bleeding based on the captured image to be processed (S151). For example, the control unit 11 selects at least one of the plurality of determination processes according to the type or format of the images (video) captured by the endoscope 20, the imaging method, the surgical method, or the like. In this embodiment, the processing shown in FIG. 2 described in Embodiment 1 (first determination process), the processing shown in FIG. 24 described in Embodiment 2 (second determination process), and the processing shown in FIG. 27 described in Embodiment 3 (third determination process) can be executed as the bleeding occurrence determination process, and the control unit 11 selects at least one of these three determination processes.
Next, the control unit 11 executes the selected determination processes on the captured image (S152). When the first determination process is selected, the control unit 11 executes the processing of steps S11 to S18 in FIG. 2 and determines whether bleeding has occurred based on the difference (increase) in the bleeding area between chronologically consecutive frames. When the second determination process is selected, the control unit 11 executes the processing of steps S11, S131, and S13 to S18 in FIG. 24, detects the bleeding area in the captured image using the learning model 12M, and determines whether bleeding has occurred based on the difference (increase) in the bleeding area between chronologically consecutive frames. When the third determination process is selected, the control unit 11 executes the processing of steps S11, S131, S13, S141 to S144, and S17 to S18 in FIG. 27, and determines whether bleeding has occurred based on the bleeding area image obtained by classifying, cell by cell, the difference (increase) in the bleeding area between chronologically consecutive frames.
Based on the determination result (the presence or absence of bleeding) of each determination process selected in step S151, the control unit 11 calculates the probability (certainty factor) with which it should be judged that bleeding has occurred, or the probability (certainty factor) with which it should be judged that no bleeding has occurred (S153). For example, when the three determination processes are executed and two of them yield a determination result of bleeding, the control unit 11 calculates 2/3 as the certainty factor for judging that bleeding has occurred. The determination result of each determination process may also be weighted when calculating the certainty factor of bleeding or the certainty factor of no bleeding.
The control unit 11 then identifies (determines) the presence or absence of bleeding in the current frame based on the calculated certainty factor (S17). For example, when the certainty factor of bleeding is equal to or greater than a predetermined value (for example, a value of 0.5 to 0.8), the control unit 11 sets the determination result to bleeding present, and when it is less than the predetermined value, the control unit 11 sets the determination result to no bleeding. The control unit 11 then performs the processes from step S18 onward.
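A minimal sketch of the combination of verdicts described in steps S151 to S153 and S17 might look as follows. The function names, the optional per-process weights, and the 0.5 default threshold are illustrative assumptions; the actual determination processes are those of FIGS. 2, 24, and 27.

```python
from typing import Callable, Dict, Optional, Sequence

# A determination process maps the current frame sequence to True (bleeding) or False.
DeterminationProcess = Callable[[Sequence], bool]

def bleeding_confidence(frames: Sequence,
                        processes: Dict[str, DeterminationProcess],
                        weights: Optional[Dict[str, float]] = None,
                        threshold: float = 0.5) -> bool:
    """Combine the verdicts of the determination processes selected in S151.

    processes: the selected subset of the first/second/third determination processes.
    weights:   optional per-process weights; equal weighting is used when omitted.
    threshold: certainty factor at or above which bleeding is judged to have occurred.
    """
    if weights is None:
        weights = {name: 1.0 for name in processes}
    total = sum(weights[name] for name in processes)
    score = sum(weights[name] for name, proc in processes.items() if proc(frames))
    confidence = score / total  # e.g. two of three equally weighted processes agreeing -> 2/3
    return confidence >= threshold
```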
In this embodiment, the determination processes performed by the information processing devices 10 of Embodiments 1 to 3 described above can all be executed, and by comprehensively determining the presence or absence of bleeding from the determination results of the three determination processes, a determination process that yields more stable detection results can be realized. In addition, since the three determination processes can be used selectively according to the type of photographed image and the like, the accuracy of determining the occurrence of bleeding can be further improved by performing the determination process suited to the photographed image.
Also in this embodiment, any one or more of Modifications 1 to 8 described in Embodiment 1 can be applied, and similar effects are obtained by applying the configurations of Modifications 1 to 8. Furthermore, the modifications described as appropriate in the above embodiments and modifications can also be applied to this embodiment.
(Embodiment 5)
An information processing device 10 will be described that determines, when the information processing device 10 of any of Embodiments 1 to 4 detects the occurrence of bleeding, whether a hemostasis operation has been performed. In addition to the configuration of Embodiment 1 shown in FIG. 1, the information processing device 10 of this embodiment stores, in the storage unit 12, a learning model similar to that of Modification 6. The learning model of this embodiment is configured to output, when a photographed image is input, the region of the treatment tool 30 used for hemostasis based on the photographed image. Note that the learning model 12M of this embodiment may instead be configured to receive a plurality of photographed images (a video) as input, perform a computation that classifies whether a hemostasis operation is being performed in the photographed images, and output the computed result. The learning model in this case can be constructed using an algorithm such as an RNN (Recurrent Neural Network), LSTM (Long Short-Term Memory), or Transformer.
FIG. 30 is a flowchart illustrating an example of the bleeding detection processing procedure of Embodiment 5. The process shown in FIG. 30 is the process shown in FIG. 2 with steps S161 to S163 added between steps S18 and S19. Description of the steps that are the same as in FIG. 2 is omitted. In FIG. 30, the steps other than steps S17 to S19 of FIG. 2 are not shown. In the information processing device 10 of this embodiment, when determining that bleeding has occurred (S17: YES), the control unit 11 executes the process of step S18 and then determines, based on the photographed image acquired in step S11, whether the performance of a hemostasis operation has been detected (S161). Specifically, the control unit 11 inputs the photographed image into the learning model and determines, based on the output information from the learning model, whether a treatment tool 30 for hemostasis (a hemostatic device) is present in the photographed image. When the learning model is configured to determine, based on the input photographed images, whether a hemostasis operation is being performed, the control unit 11 can determine whether a hemostasis operation is being performed based on the output information from the learning model.
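If the learning model of this embodiment outputs a per-pixel tool region, the presence check used in step S161 (and the corresponding absence check in step S162) could be sketched as follows. The model interface, the probability cutoff, and the 1% area ratio are assumptions for illustration and are not part of the disclosure.

```python
import numpy as np

def hemostatic_tool_visible(frame: np.ndarray, model, area_ratio: float = 0.01) -> bool:
    """Judge whether a hemostatic device appears in the frame.

    model:      any callable returning an H x W map of tool-region probabilities in [0, 1]
                (an assumed interface, not the actual learning model of the embodiment).
    area_ratio: minimum fraction of tool pixels for the device to count as present.
    """
    tool_prob = model(frame)
    tool_pixels = np.count_nonzero(tool_prob > 0.5)
    return tool_pixels / tool_prob.size >= area_ratio
```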
When determining that the performance of a hemostasis operation has not been detected (S161: NO), the control unit 11 repeats the process of step S161 until the performance of a hemostasis operation is detected. Here, the control unit 11 continues determining whether the performance of a hemostasis operation has been detected, based on the photographed images sequentially transmitted from the endoscope control device 21. When determining that the performance of a hemostasis operation has been detected (S161: YES), the control unit 11 determines whether the end of the hemostasis operation has been detected (S162). Here again, the control unit 11 inputs the photographed image acquired from the endoscope control device 21 into the learning model and determines, based on the output information from the learning model, whether the treatment tool 30 for hemostasis has disappeared from the photographed image. When determining that the end of the hemostasis operation has not been detected (S162: NO), the control unit 11 repeats the process of step S162 until the end of the hemostasis operation is detected. Here, the control unit 11 continues determining whether the end of the hemostasis operation has been detected, based on the photographed images sequentially transmitted from the endoscope control device 21. At this time, the control unit 11 may, for example, perform a process of notifying medical personnel that a hemostasis operation is in progress.
When determining that the end of the hemostasis operation has been detected (S162: YES), that is, when the removal of the hemostatic device is detected after the hemostatic device has been detected in the photographed image, the control unit 11 determines whether a predetermined time has elapsed since the end of the hemostasis operation was detected (S163). When determining that the predetermined time has not elapsed (S163: NO), the control unit 11 returns to the process of step S162 and repeats the detection of the end of the hemostasis operation until the predetermined time elapses from when the end of the hemostasis operation was first detected. The predetermined time here is set to a length of time sufficient to judge that hemostasis has succeeded. When determining that the predetermined time has elapsed (S163: YES), the control unit 11 judges that hemostasis has succeeded and proceeds to the process of step S19. At this time, the control unit 11 may notify medical personnel that hemostasis has succeeded. In this way, when the occurrence of bleeding is detected based on the photographed images, it can be determined whether a hemostasis operation has been performed. It is therefore possible to objectively judge state transitions such as bleeding, hemostasis in progress, and successful hemostasis, and to notify medical personnel of them. Medical personnel, including the operating surgeon, need only carry out the treatment best suited to each state, so the medical personnel can be supported and their burden reduced.
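One way to express the flow of steps S161 to S163 is as a small per-frame state machine. The state names, the tool-visibility callback, and the 10-second default for the predetermined time below are assumptions sketched for illustration, not values or structures taken from the embodiment.

```python
import time
from enum import Enum, auto
from typing import Callable

class HemostasisState(Enum):
    WAIT_TOOL = auto()      # bleeding detected, waiting for a hemostatic device (S161)
    TOOL_IN_USE = auto()    # device visible, hemostasis in progress (S162)
    WAIT_CONFIRM = auto()   # device removed, waiting out the predetermined time (S163)
    SUCCEEDED = auto()      # device stayed out of view for the predetermined time

class HemostasisMonitor:
    def __init__(self, tool_visible: Callable[[object], bool], confirm_seconds: float = 10.0):
        self.tool_visible = tool_visible        # e.g. a learning-model-based detector
        self.confirm_seconds = confirm_seconds  # assumed "predetermined time"
        self.state = HemostasisState.WAIT_TOOL
        self._removed_at = None

    def update(self, frame) -> HemostasisState:
        visible = self.tool_visible(frame)
        if self.state is HemostasisState.WAIT_TOOL and visible:
            self.state = HemostasisState.TOOL_IN_USE
        elif self.state is HemostasisState.TOOL_IN_USE and not visible:
            self._removed_at = time.monotonic()
            self.state = HemostasisState.WAIT_CONFIRM
        elif self.state is HemostasisState.WAIT_CONFIRM:
            if visible:  # device back in view: hemostasis continues
                self.state = HemostasisState.TOOL_IN_USE
            elif time.monotonic() - self._removed_at >= self.confirm_seconds:
                self.state = HemostasisState.SUCCEEDED
        return self.state
```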
Also in this embodiment, any one or more of Modifications 1 to 8 described in Embodiment 1 can be applied, and similar effects are obtained by applying the configurations of Modifications 1 to 8. Furthermore, the configuration of this embodiment is applicable to the information processing devices 10 of Embodiments 1 to 4 described above, and similar effects are obtained when it is applied to them. When the configuration is applied to the information processing devices 10 of Embodiments 2 to 4, the control unit 11 executes the processes of steps S161 to S163 between steps S18 and S19 in each of the processes shown in FIGS. 24, 27, and 29. The modifications described as appropriate in the above embodiments and modifications can also be applied to this embodiment.
The embodiments disclosed herein are illustrative in all respects and should not be considered restrictive. The scope of the present invention is indicated not by the meaning described above but by the claims, and is intended to include all modifications within the meaning and scope equivalent to the claims.
10  Information processing device
11  Control unit
12  Storage unit
13  Operation unit
14  Communication unit
20  Endoscope
21  Endoscope control device
30  Treatment tool
31  Treatment tool control device
40  Display device
12M Learning model

Claims (22)

  1.  A program for causing a computer to execute a process comprising:
     acquiring a plurality of photographed images having a plurality of color components, the images being captured of a treatment site in time series;
     acquiring, for each of the acquired photographed images, a bleeding area image indicating a bleeding area in the photographed image, based on differences between the plurality of color components;
     calculating a difference in bleeding area between the bleeding area images acquired for photographed images that are chronologically consecutive; and
     determining, based on the difference in bleeding area, the presence or absence of bleeding from the treatment site.
  2.  The program according to claim 1, causing the computer to execute a process of:
     determining that there is bleeding from the treatment site when the difference in bleeding area between the bleeding area images becomes equal to or greater than a predetermined range based on a moving average of the differences in bleeding area.
  3.  A program for causing a computer to execute a process comprising:
     acquiring a plurality of photographed images of a treatment site captured in time series;
     inputting each of the acquired photographed images into a learning model trained to output, when a photographed image of a treatment site is input, information on a bleeding area in the photographed image, and acquiring information on the bleeding area in each photographed image;
     calculating a difference in bleeding area between the photographed images based on the information on the bleeding area acquired for each of photographed images that are chronologically consecutive; and
     determining, based on the difference in bleeding area, the presence or absence of bleeding from the treatment site.
  4.  A program for causing a computer to execute a process comprising:
     acquiring a plurality of photographed images of a treatment site captured in time series;
     inputting each of the acquired photographed images into a learning model trained to output, when a photographed image of a treatment site is input, information on a bleeding area in the photographed image, and acquiring information on the bleeding area in each photographed image;
     extracting an area of increase of the bleeding area between the photographed images based on the information on the bleeding area acquired for each of photographed images that are chronologically consecutive;
     counting, for each cell obtained by dividing the photographed image into cells each having a predetermined number of pixels, the number of pixels included in the area of increase; and
     determining the presence or absence of bleeding from the treatment site based on the number of pixels included in the area of increase counted for each cell.
  5.  The program according to claim 4, causing the computer to execute a process of:
     specifying, as a bleeding area, a cell for which the number of pixels included in the area of increase counted for each cell of the photographed image is equal to or greater than a predetermined ratio of the predetermined number of pixels included in each cell.
  6.  The program according to any one of claims 1 to 5, causing the computer to execute processes of:
     acquiring an amount of change in photographing position at the time the photographed image was captured; and
     performing image correction on the photographed image based on the amount of change in photographing position.
  7.  The program according to any one of claims 1 to 6, causing the computer to execute processes of:
     detecting an amount of movement of a feature point in the photographed images based on photographed images that are chronologically consecutive; and
     performing image correction on the photographed image based on the amount of movement of the feature point.
  8.  The program according to any one of claims 1 to 7, causing the computer to execute processes of:
     detecting whether the magnification was changed at the time the photographed image was captured; and
     invalidating the result of the determination of the presence or absence of bleeding based on photographed images captured before and after the magnification was changed.
  9.  The program according to any one of claims 1 to 8, causing the computer to execute processes of:
     detecting a center of gravity of the bleeding area in the photographed image;
     identifying a locus of the center of gravity of the bleeding area in photographed images that are chronologically consecutive; and
     excluding the bleeding area from the targets of the determination of the presence or absence of bleeding when the locus of the center of gravity of the bleeding area is included in a peripheral region of the photographed image.
  10.  The program according to any one of claims 1 to 9, causing the computer to execute processes of:
     determining whether mist has occurred based on the difference in bleeding area between the photographed images; and
     excluding the photographed image from the targets of the determination of the presence or absence of bleeding when it is determined that mist has occurred.
  11.  The program according to any one of claims 1 to 10, causing the computer to execute processes of:
     identifying a region of a treatment tool in the photographed image;
     detecting a movement region of the treatment tool in the photographed images based on photographed images that are chronologically consecutive; and
     excluding, from the bleeding area in the photographed image, pixels included in the movement region of the treatment tool.
  12.  The program according to any one of claims 1 to 11, causing the computer to execute processes of:
     detecting an amount of movement of each pixel in the photographed images based on photographed images that are chronologically consecutive; and
     excluding, from the bleeding area in the photographed image, pixels whose amount of movement is equal to or greater than a predetermined value.
  13.  The program according to any one of claims 1 to 12, causing the computer to execute processes of:
     extracting a low-frequency region in the photographed image; and
     excluding, from the bleeding area in the photographed image, pixels included in the low-frequency region.
  14.  The program according to any one of claims 1 to 13, causing the computer to execute processes of:
     determining whether the bleeding area is a non-bleeding area based on the number, size, or shape of bleeding areas in the photographed image; and
     excluding the bleeding area determined to be a non-bleeding area from the targets of the determination of the presence or absence of bleeding.
  15.  The program according to any one of claims 1 to 14, causing the computer to execute processes of:
     detecting the presence or absence of a hemostatic device in the photographed image;
     determining, when it is determined that there is bleeding from the treatment site, whether removal of the hemostatic device has been detected after the hemostatic device was detected in the photographed image; and
     determining the presence or absence of bleeding from the treatment site when a predetermined time has elapsed after the removal of the hemostatic device was detected.
  16.  An information processing method in which a computer executes a process comprising:
     acquiring a plurality of photographed images having a plurality of color components, the images being captured of a treatment site in time series;
     acquiring, for each of the acquired photographed images, a bleeding area image indicating a bleeding area in the photographed image, based on differences between the plurality of color components;
     calculating a difference in bleeding area between the bleeding area images acquired for photographed images that are chronologically consecutive; and
     determining, based on the difference in bleeding area, the presence or absence of bleeding from the treatment site.
  17.  An information processing method in which a computer executes a process comprising:
     acquiring a plurality of photographed images of a treatment site captured in time series;
     inputting each of the acquired photographed images into a learning model trained to output, when a photographed image of a treatment site is input, information on a bleeding area in the photographed image, and acquiring information on the bleeding area in each photographed image;
     calculating a difference in bleeding area between the photographed images based on the information on the bleeding area acquired for each of photographed images that are chronologically consecutive; and
     determining, based on the difference in bleeding area, the presence or absence of bleeding from the treatment site.
  18.  An information processing method in which a computer executes a process comprising:
     acquiring a plurality of photographed images of a treatment site captured in time series;
     inputting each of the acquired photographed images into a learning model trained to output, when a photographed image of a treatment site is input, information on a bleeding area in the photographed image, and acquiring information on the bleeding area in each photographed image;
     extracting an area of increase of the bleeding area between the photographed images based on the information on the bleeding area acquired for each of photographed images that are chronologically consecutive;
     counting, for each cell obtained by dividing the photographed image into cells each having a predetermined number of pixels, the number of pixels included in the area of increase; and
     determining the presence or absence of bleeding from the treatment site based on the number of pixels included in the area of increase counted for each cell.
  19.  An information processing device comprising:
     a photographed image acquisition unit that acquires a plurality of photographed images having a plurality of color components, the images being captured of a treatment site in time series;
     a bleeding area acquisition unit that acquires, for each of the acquired photographed images, a bleeding area image indicating a bleeding area in the photographed image, based on differences between the plurality of color components;
     a calculation unit that calculates a difference in bleeding area between the bleeding area images acquired for photographed images that are chronologically consecutive; and
     a determination unit that determines, based on the difference in bleeding area, the presence or absence of bleeding from the treatment site.
  20.  An information processing device comprising:
     a photographed image acquisition unit that acquires a plurality of photographed images of a treatment site captured in time series;
     a bleeding area acquisition unit that inputs each of the acquired photographed images into a learning model trained to output, when a photographed image of a treatment site is input, information on a bleeding area in the photographed image, and acquires information on the bleeding area in each photographed image;
     a calculation unit that calculates a difference in bleeding area between the photographed images based on the information on the bleeding area acquired for each of photographed images that are chronologically consecutive; and
     a determination unit that determines, based on the difference in bleeding area, the presence or absence of bleeding from the treatment site.
  21.  An information processing device comprising:
     a photographed image acquisition unit that acquires a plurality of photographed images of a treatment site captured in time series;
     a bleeding area acquisition unit that inputs each of the acquired photographed images into a learning model trained to output, when a photographed image of a treatment site is input, information on a bleeding area in the photographed image, and acquires information on the bleeding area in each photographed image;
     an extraction unit that extracts an area of increase of the bleeding area between the photographed images based on the information on the bleeding area acquired for each of photographed images that are chronologically consecutive;
     a pixel counting unit that counts, for each cell obtained by dividing the photographed image into cells each having a predetermined number of pixels, the number of pixels included in the area of increase; and
     a determination unit that determines the presence or absence of bleeding from the treatment site based on the number of pixels included in the area of increase counted for each cell.
  22.  An information processing method in which a computer executes a process comprising:
     acquiring a plurality of photographed images having a plurality of color components, the images being captured of a treatment site in time series;
     selecting at least one determination process from among:
     a first determination process of acquiring, for each of the acquired photographed images, a bleeding area image indicating a bleeding area in the photographed image based on differences between the plurality of color components, calculating a difference in bleeding area between the bleeding area images acquired for photographed images that are chronologically consecutive, and determining the presence or absence of bleeding from the treatment site based on the difference in bleeding area,
     a second determination process of inputting each of the acquired photographed images into a learning model trained to output, when a photographed image of a treatment site is input, information on a bleeding area in the photographed image, acquiring information on the bleeding area in each photographed image, calculating a difference in bleeding area between the photographed images based on the information on the bleeding area acquired for each of photographed images that are chronologically consecutive, and determining the presence or absence of bleeding from the treatment site based on the difference in bleeding area, and
     a third determination process of inputting each of the acquired photographed images into a learning model trained to output, when a photographed image of a treatment site is input, information on a bleeding area in the photographed image, acquiring information on the bleeding area in each photographed image, extracting an area of increase of the bleeding area between the photographed images based on the information on the bleeding area acquired for each of photographed images that are chronologically consecutive, counting, for each cell obtained by dividing the photographed image into cells each having a predetermined number of pixels, the number of pixels included in the area of increase, and determining the presence or absence of bleeding from the treatment site based on the number of pixels included in the area of increase counted for each cell;
     executing the selected determination process; and
     identifying the presence or absence of bleeding from the treatment site based on a determination result of the executed determination process.
PCT/JP2022/025668 2022-06-28 2022-06-28 Program, information processing method, and information processing device WO2024004013A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/025668 WO2024004013A1 (en) 2022-06-28 2022-06-28 Program, information processing method, and information processing device

Publications (1)

Publication Number Publication Date
WO2024004013A1 (en)

Family

ID=89382190

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/025668 WO2024004013A1 (en) 2022-06-28 2022-06-28 Program, information processing method, and information processing device

Country Status (1)

Country Link
WO (1) WO2024004013A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015529489A (en) * 2012-07-25 2015-10-08 インテュイティブ サージカル オペレーションズ, インコーポレイテッド Efficient and interactive bleeding detection in surgical systems
WO2018235178A1 (en) * 2017-06-21 2018-12-27 オリンパス株式会社 Image processing device, endoscope device, method for operating image processing device, and image processing program
JP2021112220A (en) * 2018-04-17 2021-08-05 ソニーグループ株式会社 Image processing system, image processing device, image processing method and program

Similar Documents

Publication Publication Date Title
JP6931121B2 (en) Surgical recognition system
CN112220562A (en) Method and system for enhancing surgical tool control during surgery using computer vision
JP6419080B2 (en) Surgical procedure management system and method
JP2021100555A (en) Medical image processing device, endoscope system, diagnosis support method and program
US8811676B2 (en) Image processing apparatus, image processing method, and computer readable recording device
US11944262B2 (en) Endoscope processor, information processing device, and endoscope system
JP2022548237A (en) Interactive Endoscopy for Intraoperative Virtual Annotation in VATS and Minimally Invasive Surgery
WO2023095492A1 (en) Surgery assisting system, surgery assisting method, and surgery assisting program
JP2022145658A (en) Artificial intelligence based large intestine endoscope video diagnostic supporting system and method
EP4309075A1 (en) Prediction of structures in surgical data using machine learning
US11564560B2 (en) Image processing apparatus, operating method of image processing apparatus, and computer-readable recording medium
US20220304555A1 (en) Systems and methods for use of stereoscopy and color change magnification to enable machine learning for minimally invasive robotic surgery
JP7162744B2 (en) Endoscope processor, endoscope system, information processing device, program and information processing method
JP7376677B2 (en) Image processing system, endoscope system and method of operating the endoscope system
WO2024004013A1 (en) Program, information processing method, and information processing device
AU2022256978A1 (en) Systems and methods for ai-assisted medical image annotation
JP7116849B2 (en) Endoscope processor, endoscope system, information processing device, program and information processing method
KR20200016694A (en) Image quality assessment and automatic extraction system of endoscope in minimally invasive surgery, and Image quality assessment and automatic extraction method of endoscope using the same
US20230000319A1 (en) Method and apparatus for biometric tissue imaging
JP7148193B1 (en) Surgery support system, surgery support method, and surgery support program
WO2024122027A1 (en) Image processing device, surgical support system, image processing method, and program
WO2024053697A1 (en) Surgery assistance program, surgery assistance device, and surgery assistance method
US20220225860A1 (en) Medical imaging system, medical imaging processing method, and medical information processing apparatus
WO2023144570A1 (en) Detecting and distinguishing critical structures in surgical procedures using machine learning
Jabbar Automated Stand-alone Surgical Safety Evaluation for Laparoscopic Cholecystectomy (LC) using Convolutional Neural Network and Constrained Local Models (CNN-CLM)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22949291

Country of ref document: EP

Kind code of ref document: A1