WO2023199923A1 - Surgical image processing platform and computer program - Google Patents

Surgical image processing platform and computer program

Info

Publication number
WO2023199923A1
Authority
WO
WIPO (PCT)
Prior art keywords
platform
inference
surgical
display
surgical image
Prior art date
Application number
PCT/JP2023/014776
Other languages
French (fr)
Japanese (ja)
Inventor
直 小林
勇太 熊頭
成昊 銭谷
栄二 阿武
Original Assignee
アナウト株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by アナウト株式会社
Priority to JP2023551148A (published as JPWO2023199923A1)
Publication of WO2023199923A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • the present invention relates to a surgical image processing platform and computer program for processing surgical images taken of a surgery performed by a surgeon.
  • Patent Document 1 proposes a surgical support system that includes an acquisition unit that acquires a first surgical image, an analysis unit that generates risk analysis information for the first surgical image by applying the first surgical image acquired by the acquisition unit to a trained model generated using learning data including a second surgical image, which is a surgical image different from the first surgical image, and information regarding the risk of complications due to the surgery, and an output unit that outputs an image in which surgical support information based on the risk analysis information generated by the analysis unit is superimposed on the surgical image.
  • the learned model used in the surgical support system may be generated using learning data accumulated for each user.
  • However, the content of the information that a user needs from the surgical support information generated by the learned model differs depending on the user.
  • Furthermore, the display mode in which the surgical support information is superimposed on the surgical image also differs depending on the user in terms of visibility and how well it supports the surgery. For this reason, users of surgical support systems want to freely select and combine the trained model, the content of the surgical support information, and the display mode in which the surgical support information is superimposed on the surgical image.
  • However, the learned model, the content of the surgical support information, and the display mode in which the surgical support information is superimposed on the surgical image are set by the provider of the surgical support system, and cannot be freely selected by the user of the surgical support system.
  • The present invention has been made in view of the above points, and its purpose is to provide a surgical image processing platform and a computer program in which the models to be implemented in the layers that execute analysis, drawing, and display processing of surgical images can be freely set.
  • A surgical image processing platform that processes surgical images of a surgery performed by a surgeon, comprising: an inference means in which an inference model for analyzing the surgical image is set; a calculation means in which a drawing mode is set that generates a drawn image in which the analysis result of the inference model is reflected in the surgical image; and a display setting means in which a display mode is set for displaying the drawn image on a display means in a predetermined manner; a surgical image processing platform in which the inference model, the drawing mode, and the display mode are each set individually.
  • the surgical image processing platform includes an inference means, an arithmetic means, and a display setting means, and processes a surgical image taken of a surgery performed by a surgeon.
  • an inference model for analyzing surgical images is set.
  • the calculation means is set with a drawing mode that generates a drawn image in which the analysis result by the inference model is reflected in the surgical image.
  • the display setting means sets a display mode for displaying the drawn image on the display means in a predetermined manner.
  • the inference model, drawing mode, and display mode are individually set.
  • In this way, the surgical image processing platform includes an inference means capable of setting an inference model, an arithmetic means capable of setting a drawing mode, and a display setting means capable of setting a display mode, and the inference model, the drawing mode, and the display mode can each be set individually.
  • According to this configuration, a user of the surgical image processing platform can, for example, individually set a unique inference model, drawing mode, and display mode, analyze surgical images in a unique manner, generate a drawn image in a unique manner based on the analysis results, and display this drawn image in a unique manner. Therefore, in the processing of surgical images, it is possible to provide a surgical image processing platform in which the models to be implemented in the layers that execute analysis, drawing, and display processing can be freely set.
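  • As a minimal sketch (not part of the publication), the three independently settable layers could be represented as follows; the class names, the toy model, and the drawing and display functions are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

# Hypothetical sketch: each layer holds an independently settable component,
# mirroring the inference means / calculation means / display setting means.

@dataclass
class InferencePlatform:
    model: Callable[[np.ndarray], np.ndarray] = None  # inference model (individually settable)

    def run(self, surgical_image: np.ndarray) -> np.ndarray:
        return self.model(surgical_image)

@dataclass
class CalculationPlatform:
    drawing_mode: Callable[[np.ndarray, np.ndarray], np.ndarray] = None  # drawing mode (settable)

    def run(self, surgical_image: np.ndarray, analysis: np.ndarray) -> np.ndarray:
        return self.drawing_mode(surgical_image, analysis)

@dataclass
class DisplayPlatform:
    display_mode: Callable[[np.ndarray], None] = None  # display mode (settable)

    def run(self, drawn_image: np.ndarray) -> None:
        self.display_mode(drawn_image)

# Each layer is configured individually; swapping one setting does not affect the others.
inference = InferencePlatform(model=lambda img: (img.mean(axis=-1) > 128).astype(np.float32))
calculation = CalculationPlatform(
    drawing_mode=lambda img, conf: np.where(conf[..., None] > 0.5, [0, 255, 0], img).astype(np.uint8))
display = DisplayPlatform(display_mode=lambda drawn: print("drawn image", drawn.shape))

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in surgical image
display.run(calculation.run(frame, inference.run(frame)))
```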
  • A plurality of the inference models can be set in the inference means, a plurality of the drawing modes can be set in the calculation means, and a plurality of the display modes can be set in the display setting means; the surgical image processing platform according to (1).
  • a plurality of inference models can be set in the inference means, a plurality of drawing modes can be set in the calculation means, and a plurality of display modes can be set in the display setting means.
  • In the inference means, the calculation means, and the display setting means, an input port for inputting data and an output port for outputting data are each set individually; the surgical image processing platform according to (1) or (2).
  • According to this, the input port for inputting data and the output port for outputting data can be individually set, so that in each layer of surgical image processing (inference means, calculation means, display setting means) the data input source and the data output destination can be set individually. This increases the freedom of input sources and output destinations for each layer.
  • According to invention (4), it is possible to set a plurality of input ports and a plurality of output ports in each of the inference means, the calculation means, and the display setting means.
  • This allows each layer (inference means, calculation means, display setting means) in the processing of surgical images to receive various kinds of data and to output data in various directions (for example, to another device or another layer).
  • The input port of the inference means is connectable to an external device and to the output port of the display setting means, and data output from the external device and data output from the display setting means can be input to it; the input port of the calculation means is connectable to the output port of the inference means and/or the display setting means, and data output from the inference means and/or the display setting means can be input to it; the input port of the display setting means is connectable to the output port of the calculation means, and data output from the calculation means can be input to it; the surgical image processing platform according to (3) or (4).
  • According to invention (5), data from the display setting means can be input to the inference means in addition to data from the external device, and data from the display setting means can be input to the calculation means in addition to data from the inference means. This makes it possible to feed data back from a means located downstream in the processing (for example, the display setting means) to a means located upstream (for example, the inference means or the calculation means). As a result, data input from an external device can, for example, be processed repeatedly with mutually different inference models and drawing modes to obtain a plurality of types of results.
  • (6) The surgical image processing platform further comprising preprocessing means for converting the image quality of the surgical image, wherein the preprocessing means converts the image quality of a surgical image taken during surgery, and the inference means analyzes the surgical image whose image quality has been converted.
  • According to this, the image quality of the surgical image is converted to an image quality that improves the analysis accuracy of the inference means, and by analyzing the surgical image with the converted image quality, a decrease in the accuracy of the analysis results can be prevented.
  • the preprocessing means can output a converter that converts the image quality of the surgical image.
  • When the external device that captured the surgical images learned by the inference model differs from the external device used to capture the surgical images to be analyzed, the image quality of the surgical images captured by these external devices also differs. In such a case, there is a risk that the accuracy of the analysis results will decrease.
  • In contrast, the preprocessing means converts the image quality of the surgical image to be analyzed using a conversion formula according to the image quality of the images learned by the inference model, so that a decrease in the accuracy of the analysis results can be prevented.
  • The present invention also provides a computer program for processing surgical images of a surgery performed by a surgeon, the program causing a computer to function as: an inference means in which an inference model for analyzing the surgical image is set; a calculation means in which a drawing mode is set that generates a drawn image in which the analysis result of the inference model is reflected in the surgical image; and a display setting means in which a display mode is set for displaying the drawn image on a display means in a predetermined manner, wherein the inference model, the drawing mode, and the display mode are each individually set.
  • According to the present invention, it is possible to provide a surgical image processing platform and a computer program in which the models to be implemented in the layers that execute the analysis, drawing, and display processing of surgical images can be freely set.
  • FIG. 1 is a diagram showing the functional configuration of a surgical support system to which a surgical image processing platform according to an embodiment of the present invention is applied.
  • FIG. 2 is an example of a display mode produced by the surgical image processing platform according to the embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an overview of the surgical image processing platform according to the embodiment of the present invention.
  • FIG. 4 is a diagram illustrating data specifications in the surgical image processing platform according to the embodiment of the present invention.
  • FIG. 5 is a diagram illustrating the flow of data in the surgical image processing platform according to the embodiment of the present invention.
  • FIGS. 6 and 7 are diagrams illustrating settings in the inference platform of the surgical image processing platform according to the embodiment of the present invention.
  • FIGS. 8 and 9 are diagrams illustrating settings on the calculation platform of the surgical image processing platform according to the embodiment of the present invention.
  • FIG. 10 is a diagram illustrating settings on the display platform of the surgical image processing platform according to the embodiment of the present invention.
  • FIG. 11 is a diagram illustrating the data flow in a surgical image processing platform according to an application example of the embodiment of the present invention.
  • FIG. 12 is a diagram illustrating settings in the preprocessing platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
  • FIG. 13 is a diagram illustrating settings in the inference platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
  • FIG. 14 is a diagram illustrating inference model device information output by the inference platform to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
  • FIG. 15 is a diagram illustrating connected device information output by the display platform to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
  • FIG. 1 is a diagram showing the functional configuration of a surgical support system to which a surgical image processing platform according to an embodiment of the present invention is applied.
  • The surgical image processing platform 1 (inference platform 10, calculation platform 20, display platform 30) according to an embodiment of the present invention is applied to a surgical support system that supports a surgeon by processing and displaying surgical images, which are images of a surgery performed by the surgeon at a medical institution (for example, an institution such as a hospital where the surgeon performs the surgery).
  • As shown in FIG. 1, the surgical support system 100 acquires a surgical image, which is an image of a surgery performed by a surgeon at a medical institution, from an external device (Camera & Imager, etc.) via a connection unit (Camera Driver integrated IF, etc.) using an acquisition unit (Camera Capture Module, etc.) that acquires surgical images.
  • Surgical images include images of the body of the patient undergoing surgery and of the instruments operated by the surgeon and assistants (for example, forceps, electric scissors, electric scalpels, and energy devices such as ultrasonic coagulation and cutting devices).
  • In addition to the acquisition unit and the surgical image processing platform 1 that processes surgical images, the surgical support system 100 includes a system control unit (Ubuntu OS/Kernel, various system libraries, etc.) that controls the entire surgical support system 100, various modules that process data from various sensors and various devices, and connection units for various connected devices (Camera Driver integrated IF, Sensor integrated IF, etc.).
  • the functional configuration of the surgical support system 100 is just an example, and one functional block (database and functional processing unit) may be divided, or multiple functional blocks may be combined into one functional block.
  • Each functional processing unit is realized by a CPU (Central Processing Unit) as a first control unit and a GPU (Graphics Processing Unit) as a second control unit built into the device or terminal reading out and executing a computer program (for example, core software or an application that causes the CPU to execute the various processes described above) stored in a storage device (storage unit) such as a ROM (Read Only Memory), flash memory, SSD (Solid State Drive), or hard disk.
  • each functional processing unit may be configured with an FPGA (Field-Programmable Gate Array).
  • Each functional processing unit reads and writes necessary data such as tables from a database (DB) stored in a storage device or in a storage area in memory.
  • The database (DB) in the embodiment of the present invention may be a commercial database, but it may also simply mean a collection of tables and files; the internal structure of the database itself does not matter.
  • The CPU as the first control unit mainly realizes the functions of the system control unit, and the GPU as the second control unit mainly realizes the functions of the surgical image processing platform 1 (inference platform 10, calculation platform 20, and display platform 30).
  • the surgical image processing platform 1 processes surgical images acquired by an acquisition unit (Camera Capture Module, etc.). Further, the surgical image processing platform 1 classifies data for each external device (Camera & Imager, etc.) and records it in the DB according to the internal clock. This allows the user to check data for a single item or multiple items in chronological order.
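  • A minimal sketch of recording data classified per external device against an internal clock is given below, assuming a simple SQLite table; the table layout and function names are illustrative assumptions rather than anything specified in the publication.

```python
import sqlite3, time

# Hypothetical sketch of recording data classified per external device against an
# internal clock, so that one or more items can later be reviewed in time order.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (ts REAL, device TEXT, payload BLOB)")

def record(device: str, payload: bytes) -> None:
    con.execute("INSERT INTO records VALUES (?, ?, ?)", (time.monotonic(), device, payload))

record("Camera&Imager", b"frame-0001")
record("DeviceA", b"position-0001")

# Chronological review of a single item or several items together.
for ts, device, _ in con.execute(
        "SELECT * FROM records WHERE device IN (?, ?) ORDER BY ts",
        ("Camera&Imager", "DeviceA")):
    print(f"{ts:.6f}  {device}")
```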
  • The inference platform 10, which is an example of inference means, inputs physical information indicating the state of the body in a surgical image and/or instrument information indicating the state of the instrument being operated by the surgeon into an inference model, and analyzes it with AI (Artificial Intelligence) to infer the anatomical structure, the structure of the anatomical structure, the trajectory of the instrument, the state of the instrument relative to the anatomical structure, and the like.
  • An inference model is set in the inference platform 10 by a user who uses the surgical support system 100.
  • The calculation platform 20, which is an example of calculation means, converts the analysis results of the inference platform 10 (anatomical structure, structure of the anatomical structure, trajectory of the instrument, state of the instrument with respect to the anatomical structure, etc.) into a predetermined drawing mode and generates a drawn image in which they are reflected in the surgical image.
  • The drawing mode is a mode in which the analysis results of the inference platform 10 are displayed superimposed on the surgical image (for example, a part analyzed as a specific organ or body fluid is displayed in a different color for each type of organ or body fluid).
  • The drawing mode also includes other modes that support the surgeon performing the surgery (for example, information indicating the elapsed time of the surgery, information indicating the ideal trajectory of the instrument, or information indicating the timing of resection of the organ to be resected).
  • Such a drawing mode is realized by an arithmetic processing unit that is an algorithm or the like that generates such a mode. That is, the user who uses the surgical support system 100 sets the drawing mode by selecting the arithmetic processing unit.
  • the display platform (also referred to as "UI platform") 30, which is an example of display setting means, displays the drawn image generated by the calculation platform 20 on a display means (console/LCD (Liquid Crystal Display), etc.) in a predetermined display mode.
  • the display mode is a method of displaying a drawn image (for example, an original surgical image and a drawn image generated on the calculation platform 20 are displayed side by side).
  • Such a display mode is realized by a display processing unit such as a UI (User Interface), which is a user operation screen. That is, the user of the surgical support system 100 sets the display mode by selecting the display processing unit.
  • FIG. 2 is an example of a display mode by the surgical image processing platform according to the embodiment of the present invention.
  • the inference platform 10 analyzes the surgical image acquired by the acquisition unit (Camera Capture Module, etc.) (in the example shown in FIG. 2, analyzes the portion that becomes neural tissue).
  • the calculation platform 20 converts this analysis result into a predetermined drawing mode (in the example shown in FIG. 2, a mode in which the portion analyzed as neural tissue is colored), and generates a drawn image that is reflected in the surgical image.
  • The display platform 30 then displays this drawn image in a predetermined display mode (in the example shown in FIG. 2, the original surgical image and the drawn image generated on the calculation platform 20 are displayed side by side) on a display means (console/LCD (Liquid Crystal Display), etc.).
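  • The FIG. 2 example could be sketched as follows, assuming the analysis result arrives as a per-pixel confidence map: the drawing mode colors pixels above a threshold, and the display mode places the original and drawn images side by side. The threshold, color, and opacity values are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the FIG. 2 example: a confidence map (e.g. for neural tissue) is
# turned into a colored overlay (drawing mode) and shown next to the original image
# (display mode).

def draw_overlay(image: np.ndarray, confidence: np.ndarray,
                 color=(255, 255, 0), threshold=0.5, opacity=0.6) -> np.ndarray:
    """Color pixels whose confidence exceeds the threshold (drawing mode)."""
    drawn = image.astype(np.float32).copy()
    mask = confidence > threshold
    drawn[mask] = (1 - opacity) * drawn[mask] + opacity * np.asarray(color, np.float32)
    return drawn.astype(np.uint8)

def side_by_side(original: np.ndarray, drawn: np.ndarray) -> np.ndarray:
    """Place the original surgical image and the drawn image next to each other (display mode)."""
    return np.hstack([original, drawn])

image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)   # stand-in surgical image
confidence = np.random.rand(480, 640)                               # stand-in analysis result
panel = side_by_side(image, draw_overlay(image, confidence))
print(panel.shape)  # (480, 1280, 3)
```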
  • FIG. 3 is a diagram illustrating an overview of a surgical image processing platform according to an embodiment of the present invention.
  • In the surgical image processing platform 1 (inference platform 10, calculation platform 20, display platform 30), the inference model, the drawing mode (arithmetic processing unit), and the display mode (display processing unit) are each set individually by the user.
  • the diagram on the left shows an example of settings for company X, which is a user
  • the diagram on the right shows an example of settings for company Y, which is a user.
  • In the example on the left, Model A, which is an in-house model, and Model F, which is a model exclusive to company X, are set as inference models by company X, the user.
  • Further, Algo A, which is an in-house drawing algorithm, and Algo D, which is another company's open drawing algorithm, are set by company X as drawing modes (arithmetic processing units), and GUI D, which is another company's open GUI, is set by company X as the display mode (display processing unit).
  • In the inference platform 10, the models set by company X are configured so that data from external devices is input to them. Furthermore, in the calculation platform 20, Algo A is set to receive data (analysis results) from Model A, and Algo D is set to receive data (analysis results) from Model F. Furthermore, in the display platform 30, GUI D is set so that data (drawn images) from Algo A and Algo D is input to it, and data from GUI D is set to be output to a display means (LCD (Liquid Crystal Display)) and a storage means.
  • In the example on the right, the inference models set by company Y, which is a user, are Model B, which is an in-house model, the **Doctor model, which is an open model, and other companies' models.
  • Further, Algo F, which is an algorithm exclusive to company Y, is set by company Y as the drawing mode (arithmetic processing unit), and GUI F, which is a GUI exclusive to company Y, is set by company Y as the display mode (display processing unit).
  • In the inference platform 10, company Y sets Model B so that data from Device A (device position information, etc.) is input to it, and sets the **Doctor model and the other companies' models so that data from Endoscope A and Endoscope B (surgical images, etc.) is input to them. Further, in the calculation platform 20, Algo F is set to receive data (analysis results) from Model B, the **Doctor model, and the other companies' models, and data from Algo F is set to be output (fed back) to Device A. Further, in the display platform 30, data (drawn images) from Algo F is set to be input to GUI F, and data from GUI F is set to be output to a display means (console) and a storage means (storage).
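  • Company Y's wiring in FIG. 3 could be written down declaratively as in the sketch below; the dictionary structure and the consistency check are illustrative assumptions, not a configuration format defined by the publication.

```python
# Hypothetical, declarative description of company Y's settings in FIG. 3.
# Each entry names a settable component and the sources its inputs are connected to.
company_y_settings = {
    "inference_platform": {
        "Model B":        {"inputs": ["Device A"]},                    # device position information, etc.
        "**Doctor model": {"inputs": ["Endoscope A", "Endoscope B"]},  # surgical images, etc.
        "Other model":    {"inputs": ["Endoscope A", "Endoscope B"]},
    },
    "calculation_platform": {
        "Algo F": {
            "inputs":  ["Model B", "**Doctor model", "Other model"],
            "outputs": ["Device A"],                                   # fed back to the device
        },
    },
    "display_platform": {
        "GUI F": {
            "inputs":  ["Algo F"],
            "outputs": ["console", "storage"],
        },
    },
}

# A simple consistency check: every referenced input must be an external source
# or a component defined in an upstream layer.
defined = {"Device A", "Endoscope A", "Endoscope B"}
for layer in ("inference_platform", "calculation_platform", "display_platform"):
    for name, spec in company_y_settings[layer].items():
        missing = [src for src in spec["inputs"] if src not in defined]
        assert not missing, f"{name} refers to undefined sources: {missing}"
        defined.add(name)
print("configuration is consistent")
```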
  • In this way, a plurality of inference models can be set in the inference platform 10, a plurality of drawing modes (arithmetic processing units) can be set in the calculation platform 20, and a plurality of display modes (display processing units) can be set in the display platform 30.
  • In the inference platform 10, data from a plurality of external devices may be input to each of the plurality of inference models, or data from a plurality of external devices may be input to one inference model.
  • In the calculation platform 20, data (analysis results) from a plurality of inference models may be input to each of a plurality of drawing modes (arithmetic processing units), or data from a plurality of inference models may be input to one drawing mode (arithmetic processing unit).
  • Data generated by the drawing mode (arithmetic processing unit) is mainly output to the display platform 30, but is not limited to the display platform 30 and may also be output to an external device.
  • In the display platform 30, data (drawn images) from a plurality of drawing modes (arithmetic processing units) may be input to each of a plurality of display modes (display processing units), or data (drawn images) from a plurality of drawing modes (arithmetic processing units) may be input to one display mode (display processing unit).
  • Data from the display mode is mainly output to a display means (console/LCD (Liquid Crystal Display), etc.), but is not limited to the display means and may also be output to a storage device, or to the inference platform 10 or the calculation platform 20.
  • FIG. 4 is a diagram illustrating data specifications in the surgical image processing platform according to the embodiment of the present invention.
  • As shown in FIG. 4, the surgical image processing platform 1 defines, for each layer (inference platform 10, calculation platform 20, display platform 30), the input data format that can be input, the output data format that can be output, and the implementation file that can be set (the inference model for the inference platform 10, the arithmetic processing unit for the calculation platform 20, and the display processing unit for the display platform 30).
  • In the inference platform 10, the input data format is preprocessed input image data developed on the GPU, and the size and dtype of the data are defined.
  • the output data format representing the analysis result is confidence data developed on the GPU, and the size and dtype of the data are defined.
  • a general format (model converted to ONNX) is defined for the implementation file.
  • In the calculation platform 20, the input data format is defined to be the same as the output data format of the inference platform 10. This makes it possible to input data output from the inference platform 10 to the calculation platform 20.
  • the output data format representing the drawn image is a display image developed on the GPU, and the size and dtype of the data are defined.
  • a computation file written in a designated language is defined as the implementation file.
  • In the display platform 30, the input data format is defined to be the same as the output data format of the calculation platform 20. This makes it possible to input data output from the calculation platform 20 to the display platform 30.
  • the output data format indicating the display mode is a data group including display of the designated library, event processing, and the like.
  • A UI file written using the designated library is defined as the implementation file.
  • The content that can be set (inference models, arithmetic processing units, display processing units) is not limited to content prepared in advance by the provider of the surgical support system 100; content unique to the user, content provided by a third party, or content provided as open content can also be set on the surgical image processing platform 1.
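  • The per-layer data specifications of FIG. 4 could be expressed as in the following sketch, which declares a size and dtype for each format and checks an array against it; the concrete sizes and names are illustrative assumptions (an actual implementation would additionally validate the ONNX implementation file of the inference platform).

```python
from dataclasses import dataclass
from typing import Tuple
import numpy as np

# Hypothetical expression of the FIG. 4 data specifications: each layer declares
# the size and dtype of the data it accepts and produces.

@dataclass(frozen=True)
class DataFormat:
    size: Tuple[int, ...]   # e.g. height, width, channels
    dtype: np.dtype

    def validate(self, array: np.ndarray) -> None:
        if tuple(array.shape) != self.size or array.dtype != self.dtype:
            raise ValueError(
                f"expected size={self.size} dtype={self.dtype}, "
                f"got size={tuple(array.shape)} dtype={array.dtype}")

# Illustrative values only; the publication fixes the idea (size + dtype), not these numbers.
INFERENCE_INPUT  = DataFormat(size=(480, 640, 3), dtype=np.dtype(np.float32))  # preprocessed image
INFERENCE_OUTPUT = DataFormat(size=(480, 640),    dtype=np.dtype(np.float32))  # confidence data
CALC_INPUT       = INFERENCE_OUTPUT        # defined to equal the inference platform's output format
CALC_OUTPUT      = DataFormat(size=(480, 640, 3), dtype=np.dtype(np.uint8))    # display image

confidence = np.zeros((480, 640), dtype=np.float32)
CALC_INPUT.validate(confidence)     # data leaving the inference layer can enter the calculation layer
print("formats compatible")
```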
  • FIG. 5 is a diagram illustrating the flow of data in the surgical image processing platform according to the embodiment of the present invention.
  • In the inference platform 10, the calculation platform 20, and the display platform 30, input ports for inputting data and output ports for outputting data are individually set by the user. Furthermore, a plurality of input ports and a plurality of output ports can each be set by the user in the inference platform 10, the calculation platform 20, and the display platform 30.
  • The input ports (PortII1, PortII2, etc. shown in FIG. 5) set in the inference platform 10 are connected by the user to the connection units (Device FI1, CameraIFI1, etc.) connected to external devices (Camera & Imager, etc.), and data (surgical images (physical information, instrument information)) from the external devices is input via the connection units.
  • This input data is supplied to the inference model.
  • The output ports (PortIO1, PortIO2, etc. shown in FIG. 5) set in the inference platform 10 are connected by the user to the input ports (PortDI1, PortDI2, etc. shown in FIG. 5) of the calculation platform 20, the input ports of the display platform 30, storage means, or the IFs of external devices, and output data (analysis results) from the inference model is output to these.
  • The input ports (PortDI1, PortDI2, etc. shown in FIG. 5) set in the calculation platform 20 are connected by the user to the output ports (PortIO1, PortIO2, etc. shown in FIG. 5) of the inference platform 10 or the output ports of the display platform 30, and output data (analysis results) from the inference platform 10 and output data (display mode) from the display platform 30 are input.
  • This input data is supplied to the arithmetic processing section.
  • The output ports (PortDO1, PortDO2, etc. shown in FIG. 5) set in the calculation platform 20 are connected by the user to the input ports of the display platform 30 (PortGI1 shown in FIG. 5), storage means, or the IFs of external devices, and output data (drawn images) from the arithmetic processing unit is output to these.
  • The input port (PortGI1 shown in FIG. 5) set in the display platform 30 is connected by the user to the output port (PortDO1 shown in FIG. 5) of the calculation platform 20, and output data (drawn images) from the calculation platform 20 is input.
  • This input data is supplied to the display processing section.
  • The output ports set in the display platform 30 are connected by the user to a display means (LCD/console), the input port of the inference platform 10 (PortII4 shown in FIG. 5), the input port of the calculation platform 20 (PortDI4 shown in FIG. 5), storage means, or the IFs of external devices, and output data (display mode) from the display processing unit is output to these.
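  • The port model of FIG. 5 could be sketched as below: an output port can be connected to any number of input ports, including ports of an upstream layer, which is what makes the feedback paths described above possible. All port names and handlers are illustrative assumptions.

```python
from collections import defaultdict
from typing import Any, Callable, Dict, List

# Hypothetical port model: an output port pushes data to every input port it is
# connected to, and an input port hands the data to the component behind it.

class OutputPort:
    def __init__(self, name: str):
        self.name, self._targets = name, []

    def connect(self, target: "InputPort") -> None:
        self._targets.append(target)

    def emit(self, data: Any) -> None:
        for target in self._targets:
            target.receive(data)

class InputPort:
    def __init__(self, name: str, handler: Callable[[Any], None]):
        self.name, self._handler = name, handler

    def receive(self, data: Any) -> None:
        self._handler(data)

# Wiring corresponding to part of FIG. 5: inference -> calculation, plus a feedback
# connection from the display layer back to the inference layer.
log: Dict[str, List[Any]] = defaultdict(list)

port_io1 = OutputPort("PortIO1")                     # inference platform output
port_di1 = InputPort("PortDI1", lambda d: log["calculation"].append(d))
port_go1 = OutputPort("PortGO1")                     # display platform output
port_ii4 = InputPort("PortII4", lambda d: log["inference"].append(d))

port_io1.connect(port_di1)
port_go1.connect(port_ii4)                           # downstream-to-upstream feedback

port_io1.emit({"analysis": "confidence map"})
port_go1.emit({"display_mode": "side by side"})
print(dict(log))
```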
  • FIGS. 6 and 7 are diagrams illustrating settings in the inference platform of the surgical image processing platform according to the embodiment of the present invention.
  • a user performs a setting operation when setting contents such as an inference model, other calculation elements, input ports, and output ports on the inference platform 10.
  • the setting operation is performed, for example, by operating an add button (in the example shown in FIG. 6, the part displayed as +) displayed on a setting screen displayed on the display unit by the display platform 30 functioning as a setting unit.
  • the inference platform 10 sets input ports (Port II1, etc. in the example shown in FIG. 6) and output ports (PortIO1, etc. in the example shown in FIG. 6) one by one.
  • the display platform 30 functioning as a setting means displays a setting selection screen from which a plurality of types of content can be selected on the display means.
  • The setting selection screen of the inference platform 10 displays, as content that the user can select, multiple types of inference models (in the example shown in FIG. 6, basic models, open models, closed models, etc.), multiple types of calculation methods for the analysis results of the inference models (in the example shown in FIG. 6, ADD, MATMUL, ABS, etc.), multiple types of constants (in the example shown in FIG. 6, SCALAR, etc.), and conditional branches (in the example shown in FIG. 6, SELECTOR, etc.).
  • contents that can be selected by the user may be grouped by supply source, by analysis target (for example, anatomical structure, etc.), or by surgery content.
  • a plurality of inference models are grouped by supply source (basic model, open model, closed model).
  • In the example shown in FIG. 7, multiple inference models are grouped into multiple layers on the setting selection screen. Specifically, the top layer is grouped by analysis target (connective tissue, nerves, etc.), and within each group the models are further grouped by the details of the surgery (thoracoscopic lung resection, robot-assisted endoscopic surgery (stomach/large intestine region), etc.).
  • The user can set content (inference models, calculation methods, conditional branches, etc.) in the inference platform 10 by selecting the content on the setting selection screen and dragging the selected content into the inference platform 10 on the setting screen.
  • When the user performs an operation on the setting screen to connect the content set in the inference platform 10, an input port, and an output port (for example, in the example shown in FIG. 6, an operation of connecting a connection source (for example, PortII1) to a connection destination), the connection source and the connection destination are connected, and data flows from the connection source to the connection destination.
  • FIGS. 8 and 9 are diagrams illustrating settings on the calculation platform of the surgical image processing platform according to the embodiment of the present invention.
  • the user performs a setting operation when setting contents such as the calculation processing unit, other calculation elements, input ports, and output ports.
  • the setting operation is performed, for example, by operating an add button (in the example shown in FIG. 8, the part displayed as +) displayed on a setting screen displayed on the display means by the display platform 30 functioning as a setting means.
  • the calculation platform 20 sets input ports (PortDI1, etc. in the example shown in FIG. 8) and output ports (PortDO1, etc. in the example shown in FIG. 8) one by one.
  • the display platform 30 functioning as a setting means displays a setting selection screen from which a plurality of types of content can be selected on the display means.
  • The setting selection screen of the calculation platform 20 displays, as content that the user can select, multiple types of arithmetic processing units (in the example shown in FIG. 8, basic models, open models, closed models, etc.) and various types of processing for the processing results of the arithmetic processing units, for example, multiple types of filters (in the example shown in FIG. 8, Low pass, High pass, Bandpass, etc.), multiple types of analysis methods (in the example shown in FIG. 8, Peak, Histogram, FFT, etc.), multiple types of calculation methods (in the example shown in FIG. 8, ADD, MATMUL, ABS, etc.), and multiple types of constants (in the example shown in FIG. 8, SCALAR, etc.).
  • The content that can be selected by the user may be grouped by supply source, by analysis target (for example, anatomical structure), by surgical content, or by drawing expression. In the example shown in FIG. 8, the arithmetic processing units are grouped by supply source (basic model, open model, closed model).
  • the content of the drawing expression realized by the arithmetic processing unit may be adjustable on the setting selection screen.
  • Multiple arithmetic processing units can be selected for each drawing expression (drawing color, confidence threshold, opacity, drawing method, blinking display, etc.), and for each drawing expression item the drawing expression can be adjusted (for example, by inputting numerical values or selecting colors).
  • The user can set content (arithmetic processing units, calculation methods, conditional branches, etc.) in the calculation platform 20 by selecting the content on the setting selection screen and dragging the selected content into the calculation platform 20 on the setting screen. When the user performs an operation on the setting screen to connect the content set in the calculation platform 20, input ports, and output ports (for example, in the example shown in FIG. 8, an operation of connecting connection sources (for example, PortDI1 to PortDI4) to a connection destination (for example, Model F)), the connection sources and the connection destination are connected, and data flows from the connection sources to the connection destination.
  • FIG. 10 is a diagram illustrating settings on the display platform of the surgical image processing platform according to the embodiment of the present invention.
  • the user performs a setting operation when setting content such as the display processing unit, other calculation elements, input ports, and output ports.
  • the setting operation is performed, for example, by operating an add button (in the example shown in FIG. 10, the part displayed as +) displayed on a setting screen displayed on the display unit by the display platform 30 functioning as a setting unit.
  • the display platform 30 sets input ports (PortGI1, etc. in the example shown in FIG. 10) and output ports (PortGO1, etc. in the example shown in FIG. 10) one by one.
  • the display platform 30 functioning as a setting means displays a setting selection screen from which a plurality of types of content can be selected on the display means.
  • The setting selection screen of the display platform 30 displays, as content that the user can select, multiple types of display processing units (in the example shown in FIG. 8, basic models, open models, closed models, etc.), multiple types of calculation methods for the processing results of the display processing units (in the example shown in FIG. 8, ADD, MATMUL, ABS, etc.), and multiple types of conditional branches (in the example shown in FIG. 8, SELECTOR, etc.).
  • The content that can be selected by the user may be grouped by supply source, by analysis target (for example, anatomical structure), by surgery content, or by display mode.
  • a plurality of display processing units are grouped by supply source (basic model, open model, closed model).
  • The user can set content (display processing units, calculation methods, conditional branches, etc.) in the display platform 30 by selecting the content on the setting selection screen and dragging the selected content into the display platform 30 on the setting screen. When the user performs an operation on the setting screen to connect the content set in the display platform 30, an input port, and an output port (for example, an operation of connecting a connection source (for example, PortGI1) to a connection destination (for example, GUI F, which is a graphical user interface)), the connection source and the connection destination are connected, and data flows from the connection source to the connection destination.
  • The surgical image processing platform 1 may output the test results of the constructed platform to a predetermined organization as a template for standard application. Furthermore, when something other than the inference models, arithmetic processing units, or display processing units provided in advance in the surgical image processing platform 1 is introduced, the surgical image processing platform 1 may be provided with a function for confirming its performance.
  • As described above, a user of the surgical image processing platform 1 can, for example, individually set a unique inference model, drawing mode, and display mode, analyze surgical images in a unique manner, generate a drawn image in a unique manner based on the analysis results, and display this drawn image in a unique manner. Therefore, in the processing of surgical images, it is possible to provide a surgical image processing platform in which the models to be implemented in the layers that execute the analysis, drawing, and display processing can be freely set.
  • In the surgical image processing platform 1, a plurality of inference models can be set in the inference platform 10, a plurality of drawing modes can be set in the calculation platform 20, and a plurality of display modes can be set in the display platform 30. This makes it possible to implement various functions (functions realized by each model) in each layer (inference platform 10, calculation platform 20, display platform 30) in the processing of surgical images.
  • In the surgical image processing platform 1, the input ports for inputting data and the output ports for outputting data can be individually set. This makes it possible to individually set the data input source and the data output destination in each layer of surgical image processing, which increases the freedom of input sources and output destinations for each layer.
  • In the surgical image processing platform 1, a plurality of input ports and a plurality of output ports can be set in each of the inference platform 10, the calculation platform 20, and the display platform 30. This allows various data to be input to each layer (inference platform 10, calculation platform 20, display platform 30) in surgical image processing, and allows data to be output in various directions (for example, to another device or another layer).
  • In the surgical image processing platform 1, data from the display platform 30 can be input to the inference platform 10 in addition to data from an external device, and data from the display platform 30 can be input to the calculation platform 20 in addition to data from the inference platform 10. This makes it possible to feed data back from a platform located downstream in the processing (for example, the display platform 30) to a platform located upstream (for example, the inference platform 10 or the calculation platform 20). As a result, data input from an external device can, for example, be processed repeatedly with mutually different inference models and drawing modes to obtain a plurality of types of results.
  • FIG. 11 is a diagram illustrating a data flow in a surgical image processing platform according to an application example of an embodiment of the present invention.
  • the surgical image processing platform 1A includes a preprocessing platform 40, which is an example of preprocessing means.
  • As in the configuration described above, the inference platform 10 inputs physical information indicating the state of the body in a surgical image and/or instrument information indicating the state of the instrument being operated by the surgeon into an inference model, and analyzes it with AI to infer the anatomical structure, the structure of the anatomical structure, the trajectory of the instrument, the state of the instrument relative to the anatomical structure, and the like.
  • The types and models of external devices used in medical institutions (for example, endoscope systems and endoscopes) vary depending on the medical institution. If the type of external device is different, the image quality of the obtained surgical image will also be different.
  • the inference model used by the inference platform 10 for analysis is learned from surgical images captured by a predetermined external device.
  • However, the external device that captured the surgical images learned by the inference model may differ from the external device used by the medical institution that uses the surgical image processing platform 1. In such cases, the image quality of the surgical images captured by these external devices also differs. If there is a large difference between the image quality of the surgical images learned by the inference model and the image quality of the surgical images acquired when the surgical image processing platform 1 is used, the accuracy of the analysis results of the inference platform 10 may decrease.
  • The preprocessing platform 40 suppresses the discrepancy between the image quality of the surgical images input to the surgical image processing platform 1 and the image quality of the surgical images learned by the inference model, and prevents the accuracy of the analysis results of the inference platform 10 from decreasing.
  • the preprocessing platform 40 which is an example of preprocessing means, converts the image quality of the surgical image using a conversion formula according to the image quality of the image learned by the inference model of the inference platform 10.
  • Specifically, the preprocessing platform 40 includes a camera image quality converter. According to the external device that captured the surgical image acquired by the acquisition unit (Camera Capture Module (see FIG. 1), etc.), the camera image quality converter converts the image quality of the surgical image to an image quality that approximates the image quality of the surgical images learned by the inference model of the inference platform 10, generates a preprocessed surgical image with the converted image quality, and provides it to the inference platform 10.
  • a conversion formula is set in the preprocessing platform 40 by a user using the surgical support system 100 or automatically as described below.
  • the "conversion formula” is not limited to one that converts the image quality of the surgical image to an image quality that approximates the image quality of the surgical image learned by the inference model of the inference platform 10. Any method can be used as the "conversion formula" as long as it converts the image quality of the surgical image to an image quality that improves the analysis accuracy of the inference platform 10. For example, it is possible to use any method that converts the image quality of the surgical image to an image quality that improves the analysis accuracy of the inference platform 10. It may also be a method of image conversion.
  • In the preprocessing platform 40 as well, input ports (PortPPI1, PortPPI2, etc. shown in FIG. 11) for inputting data and output ports (PortPPO1, PortPPO2 shown in FIG. 11) for outputting data are individually set by the user. Furthermore, like the inference platform 10, the calculation platform 20, and the display platform 30, the preprocessing platform 40 can be set with a plurality of input ports and a plurality of output ports.
  • The input ports (PortPPI1, PortPPI2, etc. shown in FIG. 11) set in the preprocessing platform 40 are connected by the user to the connection units (Device FI1, Camera IFI1, etc.) connected to external devices, and data (surgical images (physical information, instrument information)) from an external device (for example, an endoscope system/endoscope) is input via the connection units.
  • Further, input ports set in the preprocessing platform 40 are connected by the user to an output port of the inference platform 10 (PortIO3 shown in FIG. 11) and an output port of the display platform 30 (PortGO4 shown in FIG. 11), and output data (analysis results) from the inference platform 10 and output data (display mode) from the display platform 30 are input.
  • This input data is supplied to a camera image quality converter, and is converted into an image quality that approximates the image quality of the surgical image learned by the inference model of the inference platform 10.
  • The output ports (PortPPO1, PortPPO2 shown in FIG. 11) set in the preprocessing platform 40 are connected by the user to the input ports (PortII1, PortII2 shown in FIG. 11) of the inference platform 10, and output data (preprocessed surgical images) is output to these.
  • the input ports set in the inference platform 10 are connected by the user to the output ports (PortPPO1, PortPPO2 shown in FIG. 11) of the preprocessing platform 40, and the output data (preprocessed surgical image) is input.
  • This input data is supplied to the inference model.
  • The output ports (PortIO1, PortIO2, etc. shown in FIG. 11) set in the inference platform 10 are connected by the user to the input ports (PortDI1, PortDI2, etc. shown in FIG. 11) of the calculation platform 20 or the input port of the preprocessing platform 40 (PortPPI4 shown in FIG. 11), and output data (analysis results) from the inference model is output to these.
  • The input ports (PortDI1, PortDI2, etc. shown in FIG. 11) set in the calculation platform 20 are connected by the user to the output ports (PortIO1, PortIO2, etc. shown in FIG. 11) of the inference platform 10 or the output ports of the display platform 30, and output data (analysis results) from the inference platform 10 and output data (display mode) from the display platform 30 are input.
  • This input data is supplied to the arithmetic processing section.
  • The output port (PortDO1 shown in FIG. 11) set in the calculation platform 20 is connected by the user to the input port (PortGI1 shown in FIG. 11) of the display platform 30, and output data (drawn images) from the arithmetic processing unit is output to it.
  • The input port (PortGI1 shown in FIG. 11) set in the display platform 30 is connected by the user to the output port (PortDO1 shown in FIG. 11) of the calculation platform 20, and output data (drawn images) from the calculation platform 20 is input.
  • This input data is supplied to the display processing section.
  • The output ports (PortGO1, PortGO2, PortGO3, etc. shown in FIG. 11) set in the display platform 30 are connected by the user to a display means (LCD/console), the input port of the inference platform 10 (PortII3 shown in FIG. 11), the input port of the calculation platform 20 (PortDI3 shown in FIG. 11), and the input port of the preprocessing platform 40 (PortPPI5, etc. shown in FIG. 11), and output data (display mode) from the display processing unit is output to these.
  • FIG. 12 is a diagram illustrating settings in the preprocessing platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
  • the user performs a setting operation when setting contents such as a camera image quality converter, other calculation elements, input ports, and output ports on the preprocessing platform 40.
  • the setting operation is performed, for example, by operating an add button (in the example shown in FIG. 12, the part displayed as +) displayed on a setting screen displayed on the display unit by the display platform 30 functioning as a setting unit.
  • the preprocessing platform 40 sets input ports (PortPPI1, etc. in the example shown in FIG. 12) and output ports (PortPPO1, etc. in the example shown in FIG. 12) one by one.
  • the display platform 30 functioning as a setting means displays a setting selection screen from which a plurality of types of content can be selected on the display means.
  • The setting selection screen of the preprocessing platform 40 displays, as content that the user can select, multiple types of conversion elements included in the camera image quality converter, multiple types of preprocessing for the conversion results of the camera image quality converter (in the example shown in FIG. 12, Normalize, Standardize, Grayscale, Binalize, etc.), and conditional branches (in the example shown in FIG. 12, SELECTOR, etc.).
  • the camera image quality converter is a database that stores a series of conversion formulas for converting the conversion source to the image quality (spatial frequency, brightness, color tone, etc.) equivalent to the conversion destination.
  • The preprocessing platform 40 may acquire (read) the camera image quality converter from the storage means of the surgical support system 100 (for example, the Data Base shown in FIG. 1), or may acquire (download) it from a server of the provider of the surgical image processing platform 1A, another user using the surgical image processing platform 1A, or the like. Further, the preprocessing platform 40 may output (store) the camera image quality converter to the storage means of the surgical support system 100 (for example, the Data Base shown in FIG. 1).
  • In the example shown in FIG. 12, the camera image quality converter includes an "endoscope system (S) + endoscope (ES) conversion element" and an "endoscope system (S) setting value conversion element" as conversion elements.
  • In the "endoscope system (S) + endoscope (ES) conversion element", multiple types of conversion destinations (combinations of an endoscope system and an endoscope that are connected to the surgical image processing platform 1A and acquire surgical images) are each associated with conversion formulas for multiple types of conversion sources (combinations of the endoscope system and the endoscope that captured the surgical images learned by the inference model).
  • For example, the conversion-destination combination of endoscope system (S:A) and endoscope (ES:A) is associated with Conversion A as a conversion formula that converts the image quality (spatial frequency, brightness, color tone, etc.) of a surgical image obtained by that combination so that it approximates the image quality of a surgical image obtained by the conversion-source combination of endoscope system (S:A) and endoscope (ES:B).
  • In the "endoscope system (S) setting value conversion element", multiple types of conversion destinations (default setting values of the endoscope system that is connected to the surgical image processing platform 1A and acquires surgical images) are each associated with multiple types of conversion sources (setting values actually used in the endoscope system that is connected to the surgical image processing platform 1A and acquires surgical images). For example, for each of multiple types of setting values ("brightness", "color tone", "color mode", and "contrast"), information indicating the value of each of the multiple types of conversion destinations is associated.
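  • The two conversion elements described above could be represented as lookup tables, as in the sketch below; the keys, default values, and helper functions are illustrative assumptions based on the FIG. 12 example.

```python
# Hypothetical lookup tables for the two conversion elements of FIG. 12.

# "Endoscope system (S) + endoscope (ES)" element: maps a (destination, source)
# pair of device combinations to a named conversion formula.
DEVICE_CONVERSIONS = {
    (("S:A", "ES:A"), ("S:A", "ES:B")): "Conversion A",
    (("S:A", "ES:A"), ("S:B", "ES:C")): "Conversion C",
}

# "Endoscope system (S) setting value" element: for each setting item, the default
# (destination) value that a used (source) value should be brought back to.
SETTING_DEFAULTS = {
    "brightness": "Def",
    "color tone": "Def",
    "color mode": "Def",
    "contrast":   "Def",
}

def select_device_conversion(destination, source):
    return DEVICE_CONVERSIONS.get((destination, source))

def setting_adjustments(used_settings):
    # Report every setting whose used (source) value differs from the default (destination).
    return {k: (v, SETTING_DEFAULTS[k]) for k, v in used_settings.items()
            if SETTING_DEFAULTS.get(k) not in (None, v)}

print(select_device_conversion(("S:A", "ES:A"), ("S:A", "ES:B")))   # Conversion A
print(setting_adjustments({"brightness": "Def", "color tone": "R-/B+", "contrast": "High"}))
```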
  • The user can set content (conversion formulas, calculation methods, conditional branches, etc.) in the preprocessing platform 40 by selecting the content on the setting selection screen and dragging the selected content into the preprocessing platform 40 on the setting screen.
  • the preprocessing platform 40 may automatically set the conversion formula.
  • When the user performs an operation on the setting screen to connect the content set in the preprocessing platform 40, input ports, and output ports (for example, in the example shown in FIG. 12, an operation of connecting multiple connection sources (for example, PortPPI1 to PortPPI5) to multiple connection destinations (for example, "Conversion C" and "Conversion A")), the connection sources and the connection destinations are connected, and data flows from the connection sources to the connection destinations.
  • FIG. 13 is a diagram illustrating settings in the inference platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
  • In the example shown in FIG. 13, Model A, which is a basic model, the **hospital model, which is an open model, and Model F, which is a closed model, are grouped by information (S:A+ES:A) indicating the type of endoscope system and the type of endoscope that captured the surgical images learned by these models (inference models). That is, the inference platform 10 stores each inference model in association with the combination of the type of endoscope system (S) and the type of endoscope (ES) that captured the surgical images learned by that inference model.
  • in the display platform 30, information indicating the type of endoscope system (S) connected to the surgical image processing platform 1A, the type of endoscope (ES), and the setting values of the endoscope system (S) may be settable as setting items for each model (GUI A to GUI F) of the display processing section on the setting selection screen (see FIG. 10).
  • the preprocessing platform 40 may acquire the information indicating the type of endoscope system (S), the information indicating the type of endoscope (ES), and the setting values of the endoscope system (S) that have been set in this way.
  • the preprocessing platform 40 may acquire, as the conversion destination information, information indicating the combination of the types of the endoscope system (S) and the endoscope (ES) from the serial signal of the endoscope system input from the input port (Port PP1 in the example shown in FIG. 11). Further, the preprocessing platform 40 may acquire, as the conversion source information, the setting values of the endoscope system (S) input from the input port (Port PP1 in the example shown in FIG. 11).
  • the preprocessing platform 40 acquires, as conversion source information, information indicating the combination of the types of the endoscope system (S) and the endoscope (ES) learned by the inference model set in the inference platform 10, which is input from the input port (Port PP4 in the example shown in FIG. 11).
  • here, "S" denotes the endoscope system and "ES" denotes the endoscope.
  • FIG. 14 is a diagram illustrating inference model device information that the inference platform outputs to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
  • the inference platform 10 outputs, to the preprocessing platform 40, inference model device information indicating the combination of the type of endoscope system (S) and the type of endoscope (ES) that captured the surgical images learned by the inference model selected by the user.
  • for example, if the inference model selected by the user is model B, the inference platform 10 outputs, to the preprocessing platform 40 as the inference model device information, information (S:A+ES:B) indicating the type of endoscope system and the type of endoscope that captured the surgical images learned by model B.
  • FIG. 15 is a diagram illustrating connected device information that the display platform outputs to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
  • the display platform 30 outputs, to the preprocessing platform 40, connected device information indicating the type of endoscope system (S) that is connected to the surgical image processing platform 1A and captures surgical images, the type of endoscope (ES), and the setting values of the endoscope system (S) ("brightness", "color tone", "color mode", "contrast"). In the example shown in FIG. 15, the display platform 30 outputs, to the preprocessing platform 40, information (S:A) indicating the type of endoscope system (S) that captures surgical images, information (ES:A) indicating the type of endoscope (ES), and the endoscope system (S) setting values ("Brightness" (Def (default)), "Tone" (R-/B+), "Color Mode" (Def), "Contrast" (High)).
  • the preprocessing platform 40 acquires, as the information indicating the conversion source, the information (S:A+ES:B) contained in the inference model device information shown in FIG. 14.
  • the preprocessing platform 40 also acquires, as the information indicating the conversion destination, the information (S:A+ES:A) in the connected device information shown in FIG. 15 that indicates the type of endoscope system (S) and the type of endoscope (ES) connected to the surgical image processing platform 1A and capturing surgical images.
  • the preprocessing platform 40 then selects, from the conversion elements, the conversion formula "Conversion A" that is associated with the conversion source (S:A+ES:B) and the conversion destination (S:A+ES:A) (a rough sketch of this selection logic follows this list).
  • the preprocessing platform 40 also selects the conversion formula that converts the conversion-source setting values ("brightness" (Def (default)), "color tone" (R-/B+), "color mode" (Def), "contrast" (High)) into the conversion destination (the default setting values of the endoscope system (S:A) that is connected to the surgical image processing platform 1A and acquires surgical images).
  • in this way, the preprocessing platform 40 converts the image quality of the surgical image taken of the surgery, and the inference platform 10 analyzes the surgical image whose image quality has been converted. Instead of analyzing the input surgical image as it is, the image quality of the surgical image is converted to, for example, an image quality that improves the analysis accuracy of the inference platform 10, and the converted image is analyzed.
  • with the surgical image processing platform 1A, a converter used by one user can be provided to, for example, the provider of the surgical image processing platform 1A or to another user. This makes it possible to improve or reuse the converter that converts the image quality of surgical images, which improves usability.
  • further, the preprocessing platform 40 converts the image quality of the surgical image of the surgery to be analyzed using a conversion formula according to the image quality of the images learned by the inference model, so that the accuracy of the analysis results can be prevented from decreasing.
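The lookup behavior described in the bullets above (conversion elements keyed by device combinations and by setting values, and the selection of "Conversion A" for the source (S:A+ES:B) and destination (S:A+ES:A)) can be pictured as simple table lookups. The following Python sketch is illustrative only; the names (DEVICE_CONVERSION_ELEMENT, select_conversion_formula, and so on) and the table contents are assumptions and do not reflect the platform's actual implementation.

```python
# Illustrative sketch of the camera image quality converter's lookup logic.
# All names and table contents are hypothetical examples, not the actual
# implementation of the preprocessing platform 40.

# "Endoscope system (S) + endoscope (ES)" conversion element:
# (conversion destination, conversion source) -> conversion formula name.
DEVICE_CONVERSION_ELEMENT = {
    # destination: devices connected to platform 1A; source: devices whose
    # images the selected inference model learned.
    (("S:A", "ES:A"), ("S:A", "ES:B")): "Conversion A",
    (("S:A", "ES:A"), ("S:B", "ES:C")): "Conversion C",
}

# Endoscope system (S) setting value conversion element:
# setting values in use (conversion source) -> default values (destination).
SETTING_CONVERSION_ELEMENT = {
    "brightness": {"Def": "Def"},
    "color tone": {"R-/B+": "Def"},
    "color mode": {"Def": "Def"},
    "contrast":   {"High": "Def"},
}

def select_conversion_formula(inference_model_devices, connected_devices):
    """Pick the conversion formula associated with the conversion source
    (devices learned by the selected inference model) and the conversion
    destination (devices currently connected to the platform)."""
    return DEVICE_CONVERSION_ELEMENT.get((connected_devices, inference_model_devices))

def convert_setting_values(current_settings):
    """Map the setting values in use (conversion source) back to the
    endoscope system's default values (conversion destination)."""
    return {name: SETTING_CONVERSION_ELEMENT[name].get(value, value)
            for name, value in current_settings.items()}

# Example mirroring FIG. 14 / FIG. 15: model B learned images from S:A + ES:B,
# while the connected devices are S:A + ES:A, so "Conversion A" is selected.
print(select_conversion_formula(("S:A", "ES:B"), ("S:A", "ES:A")))  # Conversion A
print(convert_setting_values({"color tone": "R-/B+", "contrast": "High"}))
```

Under these assumptions, the call in the last two lines reproduces the example from the bullets: "Conversion A" is chosen for the device combination, and the in-use setting values are mapped back to the defaults.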

Abstract

The present invention makes it possible to freely set a model to be implemented in a layer for performing analysis, rendering, and display processes when processing surgical images. A surgical image processing platform 1 processes a surgical image of a surgery performed by a surgeon, and comprises: an inference platform 10 that sets an inference model for analyzing the surgical image; a computation platform 20 that sets a rendering mode for generating a rendered image in which the analysis result from the inference model is reflected in the surgical image; and a display platform 30 that sets a display mode for displaying the rendered image in a prescribed mode on a display means. The inference model, the rendering mode, and the display mode are set individually.

Description

Surgical image processing platform and computer program
The present invention relates to a surgical image processing platform and a computer program for processing surgical images taken of a surgery performed by a surgeon.
BACKGROUND ART Conventionally, there is a known technique for supporting surgery by acquiring a surgical image of a surgery performed by a surgeon, analyzing the surgical image, and applying the analysis results to the surgical image.
For example, Patent Document 1 proposes a surgical support system that includes an analysis unit that generates risk analysis information of a first surgical image by applying the first surgical image acquired by an acquisition unit to a trained model generated using learning data including a second surgical image, which is a surgical image different from the first surgical image, and information regarding the risk of complications due to surgery, and an output unit that outputs surgical support information based on the risk analysis information generated by the analysis unit in a manner superimposed on the surgical image.
JP 2021-29258 A
Incidentally, the trained model used in a surgical support system may be generated using learning data accumulated for each user. In addition, the content of the surgical support information generated by the trained model that each user needs differs from user to user. Furthermore, the display mode in which the surgical support information is superimposed on the surgical image also differs from user to user in terms of visibility and of which display mode best supports the surgery. For this reason, users of surgical support systems wish to freely select and combine trained models, the content of surgical support information, and the display mode in which the surgical support information is superimposed on the surgical image.
However, in the surgical support system of Patent Document 1, the trained model, the content of the surgical support information, and the display mode for superimposing the surgical support information on the surgical image are set by the provider of the surgical support system and cannot be freely selected by the user of that surgical support system.
The present invention has been made in view of the above points, and an object thereof is to provide a surgical image processing platform and a computer program with which a model to be implemented in each of the layers that execute analysis, drawing, and display processing can be freely set in the processing of surgical images.
(1) A surgical image processing platform that processes a surgical image taken of a surgery performed by a surgeon, comprising:
an inference means in which an inference model for analyzing the surgical image is set;
a calculation means in which a drawing mode for generating a drawn image in which an analysis result of the inference model is reflected in the surgical image is set; and
a display setting means in which a display mode for displaying the drawn image on a display means in a predetermined manner is set,
wherein the inference model, the drawing mode, and the display mode are each set individually.
In the invention of (1), the surgical image processing platform includes the inference means, the calculation means, and the display setting means, and processes a surgical image taken of a surgery performed by a surgeon.
In the inference means, an inference model for analyzing the surgical image is set.
In the calculation means, a drawing mode for generating a drawn image in which the analysis result of the inference model is reflected in the surgical image is set.
In the display setting means, a display mode for displaying the drawn image on the display means in a predetermined manner is set.
In the surgical image processing platform, the inference model, the drawing mode, and the display mode are each set individually.
According to the invention of (1), there are provided an inference means in which an inference model can be set, a calculation means in which a drawing mode can be set, and a display setting means in which a display mode can be set, and the inference model, the drawing mode, and the display mode can each be set individually.
As a result, a user of the surgical image processing platform can, for example, individually set their own inference model, drawing mode, and display mode, thereby analyzing surgical images in their own manner, generating drawn images in their own manner based on the analysis results, and displaying those drawn images in their own manner.
Therefore, it is possible to provide a surgical image processing platform with which a model to be implemented in each of the layers that execute analysis, drawing, and display processing can be freely set in the processing of surgical images.
(2) The surgical image processing platform according to (1), wherein a plurality of the inference models can be set in the inference means, a plurality of the drawing modes can be set in the calculation means, and a plurality of the display modes can be set in the display setting means.
According to the invention of (2), a plurality of inference models can be set in the inference means, a plurality of drawing modes can be set in the calculation means, and a plurality of display modes can be set in the display setting means. This makes it possible to mount a variety of functions (functions realized by each model) in each layer (the inference means, the calculation means, and the display setting means) of the processing of surgical images.
(3) The surgical image processing platform according to (1) or (2), wherein an input port to which data is input and an output port from which data is output are individually set in each of the inference means, the calculation means, and the display setting means.
According to the invention of (3), the input port to which data is input and the output port from which data is output can be individually set in each of the inference means, the calculation means, and the display setting means, so that the input source and the output destination of data can be set individually in each layer (the inference means, the calculation means, and the display setting means) of the processing of surgical images. This increases the degree of freedom of the input sources and output destinations of each layer.
(4) The surgical image processing platform according to (3), wherein a plurality of the input ports and a plurality of the output ports can be set in each of the inference means, the calculation means, and the display setting means.
According to the invention of (4), a plurality of input ports and a plurality of output ports can be set in each of the inference means, the calculation means, and the display setting means. This allows each layer (the inference means, the calculation means, and the display setting means) of the processing of surgical images to receive a variety of data and to output data in a variety of directions (for example, to another device or to another layer).
(5) The surgical image processing platform according to (3) or (4), wherein
the input port of the inference means is connectable to an external device and to the output port of the display setting means, and can receive data output from the external device and data output from the display setting means,
the input port of the calculation means is connectable to the output port of the inference means and/or the display setting means, and can receive data output from the inference means and/or the display setting means, and
the input port of the display setting means is connectable to the output port of the calculation means, and can receive data output from the calculation means.
According to the invention of (5), the inference means can receive data from the display setting means in addition to data from the external device, and the calculation means can receive data from the display setting means in addition to data from the inference means. Data can therefore be fed back from a means located downstream in the processing (for example, the display setting means with respect to the inference means or the calculation means) to a means located upstream in the processing (for example, the inference means or the calculation means with respect to the display setting means). This makes it possible, for example, to repeatedly use data input from an external device and process it with mutually different inference models and drawing modes, thereby obtaining multiple kinds of results.
(6) The surgical image processing platform according to (1), further comprising a preprocessing means that converts the image quality of the surgical image, wherein an inference model for analyzing the surgical image whose image quality has been converted is set in the inference means.
Here, there are various models of external devices (for example, endoscope systems, endoscopes, and the like) that photograph a surgery and acquire surgical images, and the image quality of the acquired surgical image differs depending on the model. Depending on the image quality of the surgical image, the accuracy of the analysis results may decrease.
According to the invention of (6), the preprocessing means converts the image quality of the surgical image taken of the surgery, and the inference means analyzes the surgical image whose image quality has been converted.
As a result, instead of analyzing the surgical image input to the surgical image processing platform as it is, the image quality of the surgical image is converted to, for example, an image quality that improves the analysis accuracy of the inference means, and the surgical image whose image quality has been converted is analyzed, which prevents the accuracy of the analysis results from decreasing.
(7) The surgical image processing platform according to (6), wherein the preprocessing means is capable of outputting a converter that converts the image quality of the surgical image.
According to the invention of (7), the preprocessing means can output a converter that converts the image quality of the surgical image.
With such a configuration, a converter of the surgical image processing platform used by one user can be provided to, for example, the provider of the surgical image processing platform or to another user.
This makes it possible to improve or reuse the converter that converts the image quality of surgical images, which improves usability.
(8) The surgical image processing platform according to (6) or (7), wherein the preprocessing means converts the image quality of the surgical image using a conversion formula according to the image quality of the images learned by the inference model.
Here, if the external device that captured the surgical images learned by the inference model and the external device that acquires the surgical image to be analyzed by the inference model differ from each other, the image qualities of the surgical images captured by these external devices also differ from each other. In such a case, the accuracy of the analysis results may decrease.
According to the invention of (8), the preprocessing means converts the image quality of the surgical image taken of the surgery to be analyzed using a conversion formula according to the image quality of the images learned by the inference model, so that the accuracy of the analysis results can be prevented from decreasing.
(9) A computer program that causes a surgical image processing platform that processes a surgical image taken of a surgery performed by a surgeon to function as:
an inference means in which an inference model for analyzing the surgical image is set;
a calculation means in which a drawing mode for generating a drawn image in which an analysis result of the inference model is reflected in the surgical image is set; and
a display setting means in which a display mode for displaying the drawn image on a display means in a predetermined manner is set,
wherein the inference model, the drawing mode, and the display mode are each set individually.
According to the invention of (9), the same operational effects as those of the invention of (1) are achieved.
According to the present invention, it is possible to provide a surgical image processing platform and a computer program with which a model to be implemented in each of the layers that execute analysis, drawing, and display processing can be freely set in the processing of surgical images.
FIG. 1 is a diagram showing the functional configuration of a surgical support system to which a surgical image processing platform according to an embodiment of the present invention is applied.
FIG. 2 is an example of a display mode produced by the surgical image processing platform according to the embodiment of the present invention.
FIG. 3 is a diagram illustrating an overview of the surgical image processing platform according to the embodiment of the present invention.
FIG. 4 is a diagram illustrating data specifications in the surgical image processing platform according to the embodiment of the present invention.
FIG. 5 is a diagram illustrating the flow of data in the surgical image processing platform according to the embodiment of the present invention.
FIG. 6 is a diagram illustrating settings in the inference platform of the surgical image processing platform according to the embodiment of the present invention.
FIG. 7 is a diagram illustrating settings in the inference platform of the surgical image processing platform according to the embodiment of the present invention.
FIG. 8 is a diagram illustrating settings in the calculation platform of the surgical image processing platform according to the embodiment of the present invention.
FIG. 9 is a diagram illustrating settings in the calculation platform of the surgical image processing platform according to the embodiment of the present invention.
FIG. 10 is a diagram illustrating settings in the display platform of the surgical image processing platform according to the embodiment of the present invention.
FIG. 11 is a diagram illustrating the flow of data in a surgical image processing platform according to an application example of the embodiment of the present invention.
FIG. 12 is a diagram illustrating settings in the preprocessing platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
FIG. 13 is a diagram illustrating settings in the inference platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
FIG. 14 is a diagram illustrating inference model device information that the inference platform outputs to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
FIG. 15 is a diagram illustrating connected device information that the display platform outputs to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
Hereinafter, modes for carrying out the present invention (hereinafter, embodiments) will be described in detail with reference to the accompanying drawings. In the following drawings, the same elements are denoted by the same numbers or reference signs throughout the description of the embodiments.
(Surgical support system)
FIG. 1 is a diagram showing the functional configuration of a surgical support system to which a surgical image processing platform according to an embodiment of the present invention is applied.
The surgical image processing platform 1 (inference platform 10, calculation platform 20, display platform 30) according to the embodiment of the present invention is applied to a surgical support system that supports a surgeon by processing and displaying surgical images, which are images capturing the state of a surgery performed by the surgeon at a medical institution (for example, an institution such as a hospital where surgery is performed by surgeons).
Specifically, in an acquisition unit (Camera Capture Module or the like) that acquires surgical images, the surgical support system 100 acquires a surgical image, which is an image capturing the state of a surgery performed by a surgeon at a medical institution, from an external device (Camera & Imager or the like) via a connection unit (Camera Driver integrated IF or the like). The surgical image captures the body of the patient undergoing surgery and the instruments operated by the surgeon, assistants, and the like (for example, forceps, electric scissors, electric scalpels, and energy devices such as ultrasonic coagulation and cutting devices).
In addition to the acquisition unit and the surgical image processing platform 1 that processes surgical images, the surgical support system 100 includes a system control unit (Ubuntu OS/Kernel, various system libraries, and the like) that controls the entire surgical support system 100, various modules that process data from various connected devices (various sensors, various devices, and the like), and connection units (Camera Driver integrated IF, Sensor integrated IF, and the like) for the various connected devices.
The functional configuration of the surgical support system 100 is merely an example; a single functional block (a database and a functional processing unit) may be divided, or a plurality of functional blocks may be combined into a single functional block. Each functional processing unit is realized by a computer program: a CPU (Central Processing Unit) as a first control unit or a GPU (Graphics Processing Unit) as a second control unit built into the device or terminal reads out a computer program (for example, core software or an application that causes the CPU to execute the various processes described above) stored in a storage device (storage unit) such as a ROM (Read Only Memory), a flash memory, an SSD (Solid State Drive), or a hard disk, and the program is executed by the CPU or the GPU. Each functional processing unit may also be configured with an FPGA (Field-Programmable Gate Array). That is, each functional processing unit is realized by the computer program reading and writing necessary data, such as tables, from a database (DB; Data Base) stored in the storage device or from a storage area in memory and, in some cases, by controlling related hardware (input/output devices, display devices, and communication interface devices). The database (DB) in the embodiment of the present invention may be a commercial database, but it may also simply be a collection of tables and files, and the internal structure of the database itself does not matter.
In the surgical support system 100, the CPU as the first control unit mainly realizes the functions of the system control unit, and the GPU as the second control unit mainly realizes the functions of the surgical image processing platform 1 (inference platform 10, calculation platform 20, display platform 30).
(Surgical image processing platform)
The surgical image processing platform 1 (inference platform 10, calculation platform 20, display platform 30) processes the surgical images acquired by the acquisition unit (Camera Capture Module or the like). The surgical image processing platform 1 also classifies data for each external device (Camera & Imager or the like) and records it in the DB according to the internal clock. This allows the user to review data for a single item or for multiple items in chronological order.
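As a rough illustration of classifying data per external device and recording it against the internal clock so that it can be reviewed in chronological order, the sketch below uses an in-memory list as a stand-in for the DB; the record layout and function names are assumptions, not the platform's actual schema.

```python
import time
from collections import defaultdict

# Hypothetical in-memory stand-in for the DB used by the platform.
records = defaultdict(list)  # device name -> list of (timestamp, data)

def record(device: str, data) -> None:
    # Classify by external device and stamp with the internal clock.
    records[device].append((time.time(), data))

def timeline(*devices):
    # Merge one or more devices' records in chronological order so the user
    # can review single or multiple items as a time series.
    merged = [entry + (dev,) for dev in devices for entry in records[dev]]
    return sorted(merged, key=lambda e: e[0])
```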
The inference platform 10, which is an example of an inference means, inputs, from the surgical image, body information indicating the state of the body and/or instrument information indicating the state of the instruments operated by the surgeon into an inference model, analyzes it by AI (Artificial Intelligence), and infers the anatomical structures, the configuration of the anatomical structures, the trajectories of the instruments, the state of the instruments relative to the anatomical structures, and the like. The inference model of the inference platform 10 is set by the user of the surgical support system 100.
The calculation platform 20, which is an example of a calculation means, turns the analysis results of the inference platform 10 (anatomical structures, the configuration of the anatomical structures, the trajectories of the instruments, the state of the instruments relative to the anatomical structures, and the like) into a predetermined drawing mode and generates a drawn image in which they are reflected in the surgical image. In the present embodiment, the drawing mode is a mode in which the analysis results of the inference platform 10 are displayed superimposed on the surgical image (for example, a mode in which a portion analyzed as a specific organ or body fluid is given a different color for each type of organ or body fluid). The drawing mode also includes other information that supports the operating surgeon (for example, the elapsed time of the surgery, information indicating the ideal trajectory of an instrument, and information indicating the timing of resection of the organ to be resected). Such a drawing mode is realized by an arithmetic processing unit, which is an algorithm or the like that generates such a mode. That is, the user of the surgical support system 100 sets the drawing mode by selecting an arithmetic processing unit.
The display platform (also referred to as the "UI platform") 30, which is an example of a display setting means, displays the drawn image generated by the calculation platform 20 on a display means (a console, an LCD (Liquid Crystal Display), or the like) in a predetermined display mode. In the present embodiment, the display mode is the manner in which the drawn image is displayed (for example, displaying the original surgical image and the drawn image generated by the calculation platform 20 side by side). Such a display mode is realized by a display processing unit such as a UI (User Interface), which is the user's operation screen. That is, the user of the surgical support system 100 sets the display mode by selecting a display processing unit.
FIG. 2 is an example of a display mode produced by the surgical image processing platform according to the embodiment of the present invention.
In the surgical image processing platform 1, the inference platform 10 analyzes the surgical image acquired by the acquisition unit (Camera Capture Module or the like) (in the example shown in FIG. 2, it analyzes the portion corresponding to neural tissue). Next, the calculation platform 20 turns this analysis result into a predetermined drawing mode (in the example shown in FIG. 2, a mode in which the portion analyzed as neural tissue is colored) and generates a drawn image in which it is reflected in the surgical image. The display platform 30 then displays this drawn image on the display means (a console, an LCD (Liquid Crystal Display), or the like) in a predetermined display mode (in the example shown in FIG. 2, a mode in which the original surgical image and the drawn image generated by the calculation platform 20 are displayed side by side).
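The three-stage flow of FIG. 2 (analysis on the inference platform, drawing on the calculation platform, display on the display platform) can be sketched as a simple composition of functions. The names below are placeholders for whatever content the user has set in each platform, not the platform's actual API.

```python
def process_frame(surgical_image, inference_model, draw, display):
    """Minimal sketch of the FIG. 2 flow: analyze, draw, then display.
    `inference_model`, `draw`, and `display` stand for the content the user
    has set in the inference, calculation, and display platforms."""
    analysis = inference_model(surgical_image)      # e.g. a neural-tissue mask
    drawn = draw(surgical_image, analysis)          # e.g. color the tissue
    display(original=surgical_image, drawn=drawn)   # e.g. side-by-side view
```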
FIG. 3 is a diagram illustrating an overview of the surgical image processing platform according to the embodiment of the present invention.
In the surgical image processing platform 1 (inference platform 10, calculation platform 20, display platform 30), the inference model, the drawing mode (arithmetic processing unit), and the display mode (display processing unit) are each set individually by the user.
In the example shown in FIG. 3, the diagram on the left shows an example of the settings of Company X, a user, and the diagram on the right shows an example of the settings of Company Y, another user.
In the inference platform 10, Company X, a user, has set, as inference models, its in-house model "model A" and "model F", a model exclusive to Company X.
In the calculation platform 20, Company X has set, as drawing modes (arithmetic processing units), "Algo A", its in-house drawing algorithm, and "Algo D", another company's open drawing algorithm.
In the display platform 30, Company X has set, as a display mode (display processing unit), "GUI D", another company's open GUI (Graphical User Interface).
In the inference platform 10, Company X has set model A so that data from endoscope A (surgical images and the like) is input to it, and has set model F so that data from sensor A (device position information and the like) is input to it.
In the calculation platform 20, Algo A is set so that data (analysis results) from model A is input to it, and Algo D is set so that data (analysis results) from model F is input to it.
In the display platform 30, GUI D is set so that data (drawn images) from Algo A and Algo D are input to it. Also in the display platform 30, data from GUI D is set to be output to a display means (LCD (Liquid Crystal Display)) and a storage means (storage).
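Company X's settings amount to a small wiring description: which content is loaded into each layer and which outputs feed which inputs. A hypothetical rendering of those settings as a Python dictionary, purely for illustration, might look like the following; the structure is an assumption, not a configuration format defined by the platform.

```python
# Hypothetical description of Company X's FIG. 3 settings; this structure is
# an assumption for illustration, not a file format defined by the platform.
company_x_settings = {
    "inference": {
        "model A": {"input": "endoscope A"},   # in-house model
        "model F": {"input": "sensor A"},      # model exclusive to Company X
    },
    "calculation": {
        "Algo A": {"input": "model A"},        # in-house drawing algorithm
        "Algo D": {"input": "model F"},        # another company's open algorithm
    },
    "display": {
        "GUI D": {
            "inputs": ["Algo A", "Algo D"],    # another company's open GUI
            "outputs": ["LCD", "storage"],
        },
    },
}
```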
Meanwhile, in the inference platform 10, Company Y, another user, has set, as inference models, its in-house model "model B", the open "** doctor model", and another company's model.
In the calculation platform 20, Company Y has set, as a drawing mode (arithmetic processing unit), "Algo F", an algorithm exclusive to Company Y.
In the display platform 30, Company Y has set, as a display mode (display processing unit), "GUI F", a GUI exclusive to Company Y.
In the inference platform 10, Company Y has set model B so that data from Device A (device position information and the like) is input to it, and has set the ** doctor model and the other company's model so that data from endoscope A and endoscope B (surgical images and the like) is input to them.
In the calculation platform 20, Algo F is set so that data (analysis results) from model B, the ** doctor model, and the other company's model is input to it. Also in the calculation platform 20, data from Algo F is set to be output (fed back) to Device A.
In the display platform 30, GUI F is set so that data (drawn images) from Algo F is input to it. Also in the display platform 30, data from GUI F is set to be output to a display means (console) and a storage means (storage).
In this way, in the surgical image processing platform, a plurality of inference models can be set in the inference platform 10, a plurality of drawing modes (arithmetic processing units) can be set in the calculation platform 20, and a plurality of display modes (display processing units) can be set in the display platform 30.
In the inference platform 10, data from a plurality of external devices (for example, an endoscope camera, a sensor provided on a device, and the like) may be input to each of a plurality of inference models, or data from a plurality of external devices may be input to a single inference model.
In the calculation platform 20, data (analysis results) from a plurality of inference models may be input to each of a plurality of drawing modes (arithmetic processing units), or data from a plurality of inference models may be input to a single drawing mode (arithmetic processing unit). In the calculation platform 20, the data produced by a drawing mode (arithmetic processing unit) is mainly output to the display platform 30, but it may also be output to an external device, not only to the display platform 30.
In the display platform 30, data (drawn images) from a plurality of drawing modes (arithmetic processing units) may be input to each of a plurality of display modes (display processing units), or data from a plurality of drawing modes (arithmetic processing units) may be input to a single display mode (display processing unit). In the display platform 30, the data produced by a display mode (display processing unit) is mainly output to a display means (a console, an LCD (Liquid Crystal Display), or the like), but it may also be output to a storage device, or to the inference platform 10 or the calculation platform 20, not only to the display means.
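These many-to-one and one-to-many connections (several inference models feeding a single drawing mode, and a single result fanned out to several output destinations) reduce to iterating over connected sources and destinations. A minimal sketch with invented names:

```python
def run_drawing_process(surgical_image, connected_models, algo, destinations):
    """One drawing mode (algo) may receive the results of several inference
    models, and its drawn image may be sent to several destinations
    (display platform, external device, etc.). All names are illustrative."""
    analyses = [model(surgical_image) for model in connected_models]  # fan-in
    drawn = algo(surgical_image, analyses)
    for dest in destinations:                                         # fan-out
        dest(drawn)
    return drawn
```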
FIG. 4 is a diagram illustrating data specifications in the surgical image processing platform according to the embodiment of the present invention.
In the surgical image processing platform 1, for each layer (inference platform 10, calculation platform 20, display platform 30), the input data format that can be input, the output data format that is output, and the implementation files that can be set (inference models for the inference platform 10, arithmetic processing units for the calculation platform 20, display processing units for the display platform 30) are defined.
In the example shown in FIG. 4, in the inference platform 10, the input data format is preprocessed input image data expanded on the GPU, with its Size and dtype defined. The output data format, which represents the analysis results, is confidence data expanded on the GPU, with its Size and dtype defined. The implementation file is defined as a general-purpose format (a model converted to ONNX).
In the calculation platform 20, the input data format is defined to be the same as the output data format of the inference platform 10. This makes it possible to input data output from the inference platform 10 to the calculation platform 20. The output data format, which represents the drawn image, is a display image expanded on the GPU, with its Size and dtype defined. The implementation file is defined as a computation file written in a designated language.
In the display platform 30, the input data format is defined to be the same as the output data format of the calculation platform 20. This makes it possible to input data output from the calculation platform 20 to the display platform 30. The output data format, which represents the display mode, is a data group including display by a designated library, event processing, and the like. The implementation file is defined as a UI file written with the designated library.
In this way, by defining, for each of the inference platform 10, the calculation platform 20, and the display platform 30, the input data format that can be input, the output data format that is output, and the implementation files that can be set, any content that conforms to these definitions can be set on the surgical image processing platform 1: not only the content prepared in advance by the provider of the surgical support system 100 (inference models, arithmetic processing units, display processing units), but also the user's own content, content provided by third parties, and content provided as open content.
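The per-layer rules of FIG. 4 (an input data format, an output data format, and an implementation file for each layer, with adjacent layers sharing a format) can be expressed as a small specification object plus a compatibility check. The following sketch is a simplified assumption; the actual Size and dtype values defined in FIG. 4 are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class DataFormat:
    """Hypothetical stand-in for the per-layer format rules of FIG. 4:
    data resident on the GPU with a defined size and dtype."""
    size: tuple
    dtype: str
    on_gpu: bool = True

@dataclass
class LayerSpec:
    name: str
    input_format: DataFormat
    output_format: DataFormat
    implementation: str  # e.g. ONNX model, computation file, UI file

def compatible(upstream: LayerSpec, downstream: LayerSpec) -> bool:
    # The calculation platform's input format equals the inference platform's
    # output format (and likewise down the chain), so outputs can be passed on.
    return upstream.output_format == downstream.input_format
```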
FIG. 5 is a diagram illustrating the flow of data in the surgical image processing platform according to the embodiment of the present invention.
In the inference platform 10, the calculation platform 20, and the display platform 30, the input ports to which data is input and the output ports from which data is output are each set individually by the user. A plurality of input ports and a plurality of output ports can be set by the user in each of the inference platform 10, the calculation platform 20, and the display platform 30.
The input ports set in the inference platform 10 (PortII1, PortII2, and the like shown in FIG. 5) are connected by the user to connection units (Device FI1, CameraIFI1, and the like) connected to external devices (Camera & Imager and the like), and data from the external devices (surgical images (body information, instrument information)) is input via the connection units. The input data is supplied to the inference models.
The output ports set in the inference platform 10 (PortIO1, PortIO2, and the like shown in FIG. 5) are connected by the user to input ports of the calculation platform 20 (PortDI1, PortDI2, and the like shown in FIG. 5), input ports of the display platform 30, or the IFs of storage means or external devices, and output data (analysis results) from the inference models is output to them.
The input ports set in the calculation platform 20 (PortDI1, PortDI2, and the like shown in FIG. 5) are connected by the user to output ports of the inference platform 10 (PortIO1, PortIO2, and the like shown in FIG. 5) or output ports of the display platform 30 (PortGO3 and the like shown in FIG. 5), and output data (analysis results) from the inference platform 10 and output data (display mode) from the display platform 30 are input to them. The input data is supplied to the arithmetic processing units.
The output ports set in the calculation platform 20 (PortDO1, PortDO2, and the like shown in FIG. 5) are connected by the user to an input port of the display platform 30 (PortGI1 shown in FIG. 5) or to the IFs of storage means or external devices (DeviceIFO1 shown in FIG. 5), and output data (drawn images) from the arithmetic processing units is output to them.
The input port set in the display platform 30 (PortGI1 shown in FIG. 5) is connected by the user to an output port of the calculation platform 20 (PortDO1 shown in FIG. 5), and output data (drawn images) from the calculation platform 20 is input to it. The input data is supplied to the display processing units.
The output ports set in the display platform 30 (PortGO1, PortGO2, PortGO3, and the like shown in FIG. 5) are connected by the user to the display means (LCD/console), an input port of the inference platform 10 (PortII4 shown in FIG. 5), an input port of the calculation platform 20 (PortDI4 shown in FIG. 5), or the IFs of storage means or external devices, and output data (display mode) from the display processing units is output to them.
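The port wiring of FIG. 5 (each platform exposing named input and output ports that the user connects individually, with data flowing from a connection source to its connection destinations) can be modeled as a registry of directed links. The class and port names in the sketch below are assumptions for illustration.

```python
class Port:
    """Named endpoint on a platform (e.g. PortIO1 on the inference platform)."""
    def __init__(self, name):
        self.name = name
        self.destinations = []   # connected input ports

    def connect(self, other: "Port") -> None:
        # e.g. connect PortIO1 (inference output) to PortDI1 (calculation input)
        self.destinations.append(other)

    def send(self, data) -> None:
        # Data flows from the connection source to every connection destination.
        for dest in self.destinations:
            dest.receive(data)

    def receive(self, data) -> None:
        print(f"{self.name} received data")

# Example wiring loosely following FIG. 5: inference output -> calculation
# input, and a display output fed back to a calculation input.
port_io1, port_di1, port_go3, port_di4 = (Port(n) for n in
    ("PortIO1", "PortDI1", "PortGO3", "PortDI4"))
port_io1.connect(port_di1)
port_go3.connect(port_di4)
```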
Next, the setting of content (inference models, arithmetic processing units, display processing units, and the like) by the user on the surgical image processing platform 1 will be described.
FIG. 6 and FIG. 7 are diagrams illustrating settings in the inference platform of the surgical image processing platform according to the embodiment of the present invention.
When setting content such as inference models, other calculation elements, input ports, and output ports in the inference platform 10, the user performs a setting operation. The setting operation is, for example, operating an add button (the portion displayed as "+" in the example shown in FIG. 6) on a setting screen shown on the display means by the display platform 30, which functions as a setting means. In response, the inference platform 10 sets input ports (PortII1 and the like in the example shown in FIG. 6) and output ports (PortIO1 and the like in the example shown in FIG. 6) one at a time. When the add button is operated by the user, the display platform 30, functioning as the setting means, displays on the display means a setting selection screen from which a plurality of types of content can be selected.
On the setting selection screen of the inference platform 10, a plurality of types of inference models (basic models, open models, closed models, and the like in the example shown in FIG. 6), a plurality of types of calculation methods applied to the analysis results of the inference models (ADD, MATMUL, ABS, and the like in the example shown in FIG. 6), a plurality of types of constants (SCALAR and the like in the example shown in FIG. 6), and conditional branches (SELECTOR and the like in the example shown in FIG. 6) are displayed as content that the user can select.
On the setting selection screen, the content that the user can select may be grouped by supplier, by analysis target (for example, anatomical structure), or by the content of the surgery.
In the example shown in FIG. 6, on the setting selection screen, a plurality of inference models are grouped by supplier (basic models, open models, closed models).
In the example shown in FIG. 7, on the setting selection screen, a plurality of inference models are grouped into multiple levels. Specifically, in the example shown in FIG. 7, the top level is grouped by analysis target (connective tissue, nerves, and the like), and within each group the models are further grouped by the content of the surgery (thoracoscopic lung resection, the stomach/large-intestine region of robot-assisted endoscopic surgery, and the like).
The user can set content (inference models, calculation methods, conditional branches, and the like) in the inference platform 10 by, for example, selecting the content on the setting selection screen and dragging the selected content into the inference platform 10 on the setting screen.
 Next, on the setting screen, the user performs an operation that connects the content set in the inference platform 10 and its input and output ports (for example, in the example shown in FIG. 6, dragging a connection source (for example, PortII1) onto a connection destination (for example, Model B)). The connection source and the connection destination are thereby connected, and data flows from the connection source to the connection destination.
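 The configuration flow just described (add ports one at a time, drag content from the selection screen, then drag a connection source onto a connection destination so that data flows between them) can be pictured as building a small dataflow graph. The Python sketch below is only an assumed illustration of how such a graph might be represented; the class and function names and the dummy Model B behavior are hypothetical and are not taken from the patent disclosure, while the port labels simply reuse those of FIG. 6 for readability.

```python
# Minimal sketch (hypothetical names) of user-configured contents, ports,
# and connections on the inference platform, treated as a dataflow graph.

class Node:
    """A content item placed on the platform: an inference model, an operation
    (ADD, MATMUL, ABS), a constant (SCALAR), or a branch (SELECTOR)."""
    def __init__(self, name, func):
        self.name = name
        self.func = func          # callable applied to the connected inputs
        self.inputs = []          # upstream elements connected by the user

class Port:
    """An input port (e.g. PortII1) or an output port (e.g. PortIO1)."""
    def __init__(self, name):
        self.name = name
        self.value = None         # externally supplied data, if any
        self.inputs = []

class InferencePlatform:
    def __init__(self):
        self.ports = {}
        self.nodes = {}

    def add_port(self, name):                 # the "+" button adds one port at a time
        self.ports[name] = Port(name)
        return self.ports[name]

    def add_content(self, name, func):        # drag a content item from the selection screen
        self.nodes[name] = Node(name, func)
        return self.nodes[name]

    def connect(self, source, destination):   # drag the connection source onto the destination
        destination.inputs.append(source)

    def evaluate(self, element):
        """Pull data along the connections from the sources to this element."""
        if isinstance(element, Port) and not element.inputs:
            return element.value
        values = [self.evaluate(src) for src in element.inputs]
        return element.func(*values) if isinstance(element, Node) else values[0]

if __name__ == "__main__":
    p = InferencePlatform()
    port_ii1 = p.add_port("PortII1")
    port_io1 = p.add_port("PortIO1")
    model_b = p.add_content("Model B", lambda image: {"mask": f"segmented({image})"})
    p.connect(port_ii1, model_b)              # PortII1 -> Model B
    p.connect(model_b, port_io1)              # Model B -> PortIO1
    port_ii1.value = "surgical_frame_001"
    print(p.evaluate(port_io1))               # the analysis result flows out of PortIO1
```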
 FIGS. 8 and 9 are diagrams illustrating settings in the calculation platform of the surgical image processing platform according to the embodiment of the present invention.
 When setting content such as an arithmetic processing unit, other computation elements, and input and output ports on the calculation platform 20, the user performs a setting operation. The setting operation is performed, for example, by operating an add button (the part displayed as "+" in the example shown in FIG. 8) on a setting screen displayed on the display means by the display platform 30 functioning as a setting means. In response, the calculation platform 20 sets input ports (PortDI1 and the like in the example shown in FIG. 8) and output ports (PortDO1 and the like in the example shown in FIG. 8) one at a time. When the user operates the add button, the display platform 30 functioning as a setting means also displays, on the display means, a setting selection screen from which a plurality of types of content can be selected.
 The setting selection screen for the calculation platform 20 displays, as content that the user can select, a plurality of types of arithmetic processing units (basic models, open models, closed models, and the like in the example shown in FIG. 8) and various kinds of processing applied to the processing results of the arithmetic processing units (for example, a plurality of types of filters (Low pass, High pass, Bandpass, and the like in the example shown in FIG. 8), a plurality of types of analysis methods (Peak, Histogram, FFT, and the like in the example shown in FIG. 8), a plurality of types of calculation methods (ADD, MATMUL, ABS, and the like in the example shown in FIG. 8), and constants (SCALAR and the like in the example shown in FIG. 8)).
 On the setting selection screen, the content that the user can select may be grouped by supply source, by analysis target (for example, anatomical structure), by type of surgery, or by drawing expression.
 In the example shown in FIG. 8, a plurality of arithmetic processing units are grouped on the setting selection screen by supply source (basic model, open model, closed model).
 The content of the drawing expression realized by an arithmetic processing unit may also be adjustable on the setting selection screen.
 In the example shown in FIG. 9, a plurality of arithmetic processing units can be selected on the setting selection screen for each drawing expression (drawing color, confidence threshold, opacity, drawing method, blinking display, and the like), and each drawing expression item can be adjusted (for example, by entering a numerical value or selecting a color).
 The user can set content (arithmetic processing units, calculation methods, conditional branches, and the like) in the calculation platform 20 by, for example, selecting the content on the setting selection screen and dragging the selected content into the calculation platform 20 area on the setting screen.
 Next, on the setting screen, the user performs an operation that connects the content set in the calculation platform 20 and its input and output ports (for example, in the example shown in FIG. 8, dragging connection sources (for example, PortDI1 to PortDI4) onto a connection destination (for example, Model F)). The connection sources and the connection destination are thereby connected, and data flows from the connection sources to the connection destination.
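 The adjustable drawing expression items mentioned above (drawing color, confidence threshold, opacity, blinking display) lend themselves to a simple parameter-plus-renderer picture. The sketch below is a hedged illustration of that idea only, not the platform's actual implementation; the DrawingExpression fields, the render_overlay function, and the tiny dummy frame and confidence map are all assumed names and data.

```python
# Hypothetical sketch: an arithmetic processing unit that turns an inference
# confidence map into a drawn overlay using user-set drawing expression values.

from dataclasses import dataclass

@dataclass
class DrawingExpression:
    color: tuple = (0, 255, 0)   # drawing color (RGB)
    threshold: float = 0.5       # confidence threshold
    opacity: float = 0.6         # 0.0 (invisible) .. 1.0 (opaque)
    blink: bool = False          # blinking display on/off

def render_overlay(frame, confidence_map, expr):
    """Blend expr.color into pixels whose confidence meets the threshold."""
    out = []
    for row_pixels, row_conf in zip(frame, confidence_map):
        out_row = []
        for pixel, conf in zip(row_pixels, row_conf):
            if conf >= expr.threshold:
                blended = tuple(
                    round((1 - expr.opacity) * p + expr.opacity * c)
                    for p, c in zip(pixel, expr.color)
                )
                out_row.append(blended)
            else:
                out_row.append(pixel)
        out.append(out_row)
    return out

if __name__ == "__main__":
    frame = [[(100, 100, 100), (100, 100, 100)]]       # 1x2 dummy surgical frame
    confidence = [[0.9, 0.2]]                          # per-pixel inference result
    expr = DrawingExpression(color=(255, 0, 0), threshold=0.5, opacity=0.5)
    print(render_overlay(frame, confidence, expr))     # only the first pixel is tinted
```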
 FIG. 10 is a diagram illustrating settings in the display platform of the surgical image processing platform according to the embodiment of the present invention.
 When setting content such as a display processing unit, other computation elements, and input and output ports on the display platform 30, the user performs a setting operation. The setting operation is performed, for example, by operating an add button (the part displayed as "+" in the example shown in FIG. 10) on a setting screen displayed on the display means by the display platform 30 functioning as a setting means. In response, the display platform 30 sets input ports (PortGI1 and the like in the example shown in FIG. 10) and output ports (PortGO1 and the like in the example shown in FIG. 10) one at a time. When the user operates the add button, the display platform 30 functioning as a setting means also displays, on the display means, a setting selection screen from which a plurality of types of content can be selected.
 The setting selection screen for the display platform 30 displays, as content that the user can select, a plurality of types of display processing units (basic models, open models, closed models, and the like in the example shown in FIG. 10), a plurality of types of calculation methods applied to the processing results of the display processing units (ADD, MATMUL, ABS, and the like in the example shown in FIG. 10), and a plurality of types of conditional branches (SELECTOR and the like in the example shown in FIG. 10).
 On the setting selection screen, the content that the user can select may be grouped by supply source, by analysis target (for example, anatomical structure), by type of surgery, or by display mode.
 In the example shown in FIG. 10, a plurality of display processing units are grouped on the setting selection screen by supply source (basic model, open model, closed model).
 The user can set content (display processing units, calculation methods, conditional branches, and the like) in the display platform 30 by, for example, selecting the content on the setting selection screen and dragging the selected content into the display platform 30 area on the setting screen.
 Next, on the setting screen, the user performs an operation that connects the content set in the display platform 30 and its input and output ports (for example, in the example shown in FIG. 10, dragging a connection source (for example, PortGI1) onto a connection destination (for example, GUI F)). The connection source and the connection destination are thereby connected, and data flows from the connection source to the connection destination.
 When the settings in the inference platform 10, the calculation platform 20, and the display platform 30 are complete, the user makes the connections between the platforms as shown in FIG. 5 (for example, connecting an output port of the inference platform 10 (for example, PortIO1) to an input port of the calculation platform 20 (for example, PortDI1)) and the connections between the surgical image processing platform and external devices (for example, connecting DeviceIFI1 to an input port of the inference platform 10 (for example, PortII1)).
 Through the above steps, a surgical support system unique to each user can be constructed.
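 As a rough, assumed illustration of this final wiring step, the sketch below chains three stand-in functions (one per platform) through the user-made port connections of the example above; the function bodies and the returned strings are placeholders and do not reflect the actual processing performed by each platform.

```python
# Hypothetical wiring of an external device to the inference platform and of
# the inference -> calculation -> display chain through their ports.

def inference_platform(surgical_image):        # PortII1 -> ... -> PortIO1
    return {"anatomy_mask": f"mask({surgical_image})"}

def calculation_platform(analysis_result):     # PortDI1 -> ... -> PortDO1
    return f"overlay({analysis_result['anatomy_mask']})"

def display_platform(drawn_image):             # PortGI1 -> ... -> PortGO1
    return f"show_on_monitor({drawn_image})"

# User-made connections as (connection source, connection destination) pairs.
connections = [
    ("DeviceIFI1", "PortII1"),   # external device -> inference platform
    ("PortIO1",    "PortDI1"),   # inference platform -> calculation platform
    ("PortDO1",    "PortGI1"),   # calculation platform -> display platform
]

def run(surgical_image):
    data = surgical_image
    for stage in (inference_platform, calculation_platform, display_platform):
        data = stage(data)
    return data

if __name__ == "__main__":
    print(connections)
    print(run("frame_from_endoscope"))
```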
 In addition to the functions described above, the surgical image processing platform 1 may output the test results of the constructed platform as a template for a standards application to a predetermined organization. Furthermore, when an inference model, arithmetic processing unit, or display processing unit other than those provided in advance in the surgical image processing platform 1 is introduced, the surgical image processing platform 1 may be provided with a function for verifying the performance of such components.
 According to such a surgical image processing platform 1, a user of the surgical image processing platform 1 can, for example, individually set an original inference model, drawing mode, and display mode, and can thereby analyze surgical images in the user's own manner, generate drawn images in the user's own manner based on the analysis results, and display the drawn images in the user's own manner.
 Therefore, in the processing of surgical images, it is possible to provide a surgical image processing platform in which the models implemented in the layers that perform analysis, drawing, and display can be set freely.
 Further, according to the surgical image processing platform 1, a plurality of inference models can be set in the inference platform 10, a plurality of drawing modes can be set in the calculation platform 20, and a plurality of display modes can be set in the display platform 30. This makes it possible to equip each layer in the processing of surgical images (the inference platform 10, the calculation platform 20, and the display platform 30) with a variety of functions (the functions realized by each model).
 Further, according to the surgical image processing platform 1, the input ports through which data is input and the output ports through which data is output can each be set individually in the inference platform 10, the calculation platform 20, and the display platform 30. This makes it possible to individually set the data input source and the data output destination for each layer in the processing of surgical images (the inference platform 10, the calculation platform 20, and the display platform 30), which increases the freedom of each layer's input sources and output destinations.
 Further, according to the surgical image processing platform 1, a plurality of input ports and a plurality of output ports can be set in each of the inference platform 10, the calculation platform 20, and the display platform 30. This makes it possible for each layer in the processing of surgical images (the inference platform 10, the calculation platform 20, and the display platform 30) to receive a variety of data and to output data in a variety of directions (for example, to another device or to another layer).
 Further, according to the surgical image processing platform 1, data from the display platform 30 can be input to the inference platform 10 in addition to data from an external device, and data from the display platform 30 can be input to the calculation platform 20 in addition to data from the inference platform 10. This makes it possible to feed data back from a platform located downstream in the processing (for example, the display platform 30 with respect to the inference platform 10 or the calculation platform 20) to a means located upstream in the processing (for example, the inference platform 10 or the calculation platform 20 with respect to the display platform 30). As a result, a plurality of types of results can be obtained, for example, by repeatedly using the data input from an external device and processing it with mutually different inference models and drawing modes.
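 A hedged sketch of the feedback path just described: an output port of the display platform carries, besides the on-screen output, data that the user has routed back to the inference side, so that the same externally supplied frame can be re-analyzed with a different inference model on each pass. The model names and the feedback payload below are illustrative assumptions only.

```python
# Hypothetical feedback loop: downstream (display) output is fed back upstream
# (inference) to obtain several results from the same external frame.

inference_models = {
    "Model A": lambda img: f"connective_tissue({img})",
    "Model B": lambda img: f"nerves({img})",
}

def display_platform(drawn, next_model):
    # Emit the on-screen output plus feedback data on a port the user has
    # connected back to the inference platform.
    return {"screen": f"display({drawn})", "feedback": {"model": next_model}}

def process_once(frame, model_name):
    analysis = inference_models[model_name](frame)
    drawn = f"overlay({analysis})"
    follow_up = "Model B" if model_name == "Model A" else None
    return display_platform(drawn, next_model=follow_up)

if __name__ == "__main__":
    frame = "frame_from_endoscope"
    results, feedback = [], {"model": "Model A"}
    while feedback and feedback.get("model"):
        out = process_once(frame, feedback["model"])
        results.append(out["screen"])
        feedback = out["feedback"]
    print(results)   # two results, one per inference model, from the same frame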
[Application Example]
 Next, an application example of the embodiment of the present invention will be described. In the following description, components similar to those of the embodiment are denoted by the same reference numerals, and their description is omitted or simplified.
 FIG. 11 is a diagram illustrating the data flow in a surgical image processing platform according to an application example of the embodiment of the present invention.
 In addition to the configuration of the embodiment (the inference platform 10, the calculation platform 20, and the display platform 30), the surgical image processing platform 1A according to the application example includes a preprocessing platform 40, which is an example of a preprocessing means.
 First, as described above, the inference platform 10 inputs into an inference model the body information indicating the state of the body and/or the instrument information indicating the state of an instrument operated by the surgeon, obtained from a surgical image acquired by an external device used at a medical institution (for example, an endoscope system/endoscope), analyzes the information by AI, and infers the anatomical structure, the configuration of the anatomical structure, the trajectory of the instrument, the state of the instrument with respect to the anatomical structure, and the like.
 External devices used at medical institutions (for example, endoscope systems/endoscopes) are provided by various manufacturers, and the types (models) in use differ from one medical institution to another. If the type of external device differs, the image quality of the acquired surgical images also differs.
 The inference model that the inference platform 10 uses for analysis has been trained on surgical images captured by a particular external device. For example, the external device that captured the surgical images on which the inference model was trained and the external device used at the medical institution that uses the surgical image processing platform 1 (the external device that supplies surgical images to the surgical image processing platform 1) may differ from each other. In such a case, the image quality of the surgical images captured by these external devices also differs. If the difference between the image quality of the surgical images on which the inference model was trained and the image quality of the surgical images acquired when the surgical image processing platform 1 is used is large, the accuracy of the analysis results of the inference platform 10 may decrease.
 The preprocessing platform 40 suppresses the divergence between the image quality of the surgical images input to the surgical image processing platform 1 and the image quality of the surgical images on which the inference model was trained, thereby preventing the accuracy of the analysis results of the inference platform 10 from decreasing.
 The preprocessing platform 40, which is an example of a preprocessing means, converts the image quality of the surgical image using a conversion formula corresponding to the image quality of the images on which the inference model of the inference platform 10 was trained. Specifically, the preprocessing platform 40 has a camera image quality converter. In the camera image quality converter, according to the external device that captured the surgical image acquired by the acquisition unit (the Camera Capture Module (see FIG. 1) or the like), the image quality of the surgical image is converted into an image quality approximating that of the surgical images on which the inference model of the inference platform 10 was trained, and a preprocessed surgical image with the converted image quality is generated and supplied to the inference platform 10. The conversion formula of the preprocessing platform 40 is set by the user of the surgical support system 100 or, as described later, automatically. Note that the "conversion formula" is not limited to one that converts the image quality of the surgical image into an image quality approximating that of the surgical images on which the inference model of the inference platform 10 was trained. Any method can be used as the "conversion formula" as long as it converts the image quality of the surgical image into an image quality that improves the analysis accuracy of the inference platform 10; for example, a method may be used that analyzes the frequencies of the image and converts the image with a frequency-based approach.
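 As an assumed illustration of this preprocessing step, the sketch below selects a per-device conversion function and applies it pixel by pixel so that the frame's image quality moves toward what the inference model was trained on; the conversion coefficients and device labels are invented for the example and are not taken from the patent.

```python
# Hypothetical preprocessing: pick a conversion formula for the capture device
# and apply it so the frame approximates the training-image quality.

def conversion_a(pixel):       # e.g. damp brightness slightly and shift color balance
    r, g, b = pixel
    return (int(r * 0.9), int(g * 0.95), int(b * 1.05))

def identity(pixel):
    return pixel

# Conversion formula chosen per connected capture device (assumed labels).
conversion_for_device = {
    "EndoscopeSystem A + Scope A": conversion_a,
    "EndoscopeSystem A + Scope B": identity,   # already matches the training data
}

def preprocess(frame, device_label):
    convert = conversion_for_device.get(device_label, identity)
    return [[convert(px) for px in row] for row in frame]

if __name__ == "__main__":
    frame = [[(200, 180, 160)]]
    print(preprocess(frame, "EndoscopeSystem A + Scope A"))   # -> [[(180, 171, 168)]]
```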
 Like the inference platform 10, the calculation platform 20, and the display platform 30, the preprocessing platform 40 has input ports through which data is input (PortPPI1, PortPPI2, and the like shown in FIG. 11) and output ports through which data is output (PortPPO1 and PortPPO2 shown in FIG. 11), each of which is set individually by the user. Also as with the inference platform 10, the calculation platform 20, and the display platform 30, the user can set a plurality of input ports and a plurality of output ports in the preprocessing platform 40.
 The data flow in the surgical image processing platform 1A will now be described with reference to FIG. 11.
 In the surgical image processing platform 1A, input ports set in the preprocessing platform 40 (PortPPI1, PortPPI2, and the like shown in FIG. 11) are connected by the user to connection units (DeviceIFI1, CameraIFI1, and the like) that are connected to external devices (Camera & Imager and the like), and data (surgical images (body information, instrument information)) from an external device (for example, an endoscope system/endoscope) is input via the connection units. Further, input ports set in the preprocessing platform 40 (PortPPI4, PortPPI5, and the like shown in FIG. 11) are connected to an output port of the inference platform 10 (PortIO3 shown in FIG. 11) and an output port of the display platform 30 (PortGO4 shown in FIG. 11), and output data from the inference platform 10 (analysis results) and output data from the display platform 30 (display mode) are input. The input data is supplied to the camera image quality converter and converted into an image quality approximating that of the surgical images on which the inference model of the inference platform 10 was trained.
 The output ports set in the preprocessing platform 40 (PortPPO1 and PortPPO2 shown in FIG. 11) are connected by the user to input ports of the inference platform 10 (PortII1 and PortII2 shown in FIG. 11) and output the output data (preprocessed surgical images) to the inference platform 10.
 The input ports set in the inference platform 10 (PortII1, PortII2, and the like shown in FIG. 11) are connected by the user to the output ports of the preprocessing platform 40 (PortPPO1 and PortPPO2 shown in FIG. 11), and the output data (preprocessed surgical images) is input to them. The input data is supplied to the inference model.
 The output ports set in the inference platform 10 (PortIO1, PortIO2, and the like shown in FIG. 11) are connected by the user to input ports of the calculation platform 20 (PortDI1, PortDI2, and the like shown in FIG. 11) and an input port of the preprocessing platform 40 (PortPPI4 and the like shown in FIG. 11), and output the output data from the inference model (analysis results) to them.
 The input ports set in the calculation platform 20 (PortDI1, PortDI2, and the like shown in FIG. 11) are connected by the user to the output ports of the inference platform 10 (PortIO1, PortIO2, and the like shown in FIG. 11) and an output port of the display platform 30 (PortGO3 shown in FIG. 11), and output data from the inference platform 10 (analysis results) and output data from the display platform 30 (display mode) are input to them. The input data is supplied to the arithmetic processing unit.
 The output port set in the calculation platform 20 (PortDO1 shown in FIG. 11) is connected by the user to an input port of the display platform 30 (PortGI1 shown in FIG. 11) and outputs the output data from the arithmetic processing unit (drawn images) to it.
 The input port set in the display platform 30 (PortGI1 shown in FIG. 11) is connected by the user to the output port of the calculation platform 20 (PortDO1 shown in FIG. 11), and the output data from the calculation platform 20 (drawn images) is input to it. The input data is supplied to the display processing unit.
 The output ports set in the display platform 30 (PortGO1, PortGO2, PortGO3, and the like shown in FIG. 11) are connected by the user to the display means (LCD/console), an input port of the inference platform 10 (PortII3 shown in FIG. 11), an input port of the calculation platform 20 (PortDI3 shown in FIG. 11), and an input port of the preprocessing platform 40 (PortPPI5 and the like shown in FIG. 11), and output the output data from the display processing unit (display mode) to them.
 Next, the setting of content (the camera image quality converter, inference models, and the like) by the user in the surgical image processing platform 1A will be explained.
 FIG. 12 is a diagram illustrating settings in the preprocessing platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
 When setting content such as the camera image quality converter, other computation elements, and input and output ports on the preprocessing platform 40, the user performs a setting operation. The setting operation is performed, for example, by operating an add button (the part displayed as "+" in the example shown in FIG. 12) on a setting screen displayed on the display means by the display platform 30 functioning as a setting means. In response, the preprocessing platform 40 sets input ports (PortPPI1 and the like in the example shown in FIG. 12) and output ports (PortPPO1 and the like in the example shown in FIG. 12) one at a time. When the user operates the add button, the display platform 30 functioning as a setting means also displays, on the display means, a setting selection screen from which a plurality of types of content can be selected.
 The setting selection screen for the preprocessing platform 40 displays, as content that the user can select, a plurality of types of conversion elements included in the camera image quality converter, a plurality of types of preprocessing applied to the conversion results of the camera image quality converter (Normalize, Standardize, Grayscale, Binalize, and the like in the example shown in FIG. 12), and conditional branches (SELECTOR and the like in the example shown in FIG. 12).
 The camera image quality converter is a database storing a set of conversion formulas for converting a conversion source into an image quality (spatial frequency, brightness, color tone, and the like) equivalent to that of a conversion destination. The preprocessing platform 40 may acquire (read) the camera image quality converter from the storage means of the surgical support system 100 (for example, the Data Base shown in FIG. 1) or may acquire (download) it from an external server or the like (for example, that of the provider of the surgical image processing platform 1A or of another user of the surgical image processing platform 1A). The preprocessing platform 40 may also output (store) the camera image quality converter to the storage means of the surgical support system 100 (for example, the Data Base shown in FIG. 1) or may be able to output (transmit) it to an external computer or the like (for example, that of the provider of the surgical image processing platform 1A or of another user of the surgical image processing platform 1A). In detail, the camera image quality converter includes, as conversion elements, an "endoscope system (S) + endoscope (ES) conversion element" and an "endoscope system (S) setting value conversion element".
 In the "endoscope system (S) + endoscope (ES) conversion element", each of a plurality of types of conversion destinations (combinations of an endoscope system and an endoscope that are connected to the surgical image processing platform 1A and acquire surgical images) is associated with a conversion formula for each of a plurality of types of conversion sources (combinations of an endoscope system and an endoscope that captured the surgical images on which an inference model was trained). In the example shown in FIG. 12, the combination of the conversion-destination endoscope system (S:A) and endoscope (ES:A) is associated with "Conversion A" as a conversion formula that approximates the image quality (spatial frequency, brightness, color tone, and the like) of surgical images obtained with that combination to the image quality of surgical images obtained with the combination of the conversion-source endoscope system (S:A) and endoscope (ES:B).
 In the "endoscope system (S) setting value conversion element", each of a plurality of types of conversion destinations (the default setting values of the endoscope system that is connected to the surgical image processing platform 1A and acquires surgical images) is associated with a plurality of types of conversion sources (the setting values used by the endoscope system that is connected to the surgical image processing platform 1A and acquires surgical images). In the example shown in FIG. 12, each of a plurality of types of setting values ("brightness", "color tone", "color mode", "contrast") is associated with information indicating the value for each of a plurality of types of conversion destinations.
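 A hypothetical data layout for the two conversion elements just described: one table keyed by the (conversion destination, conversion source) device combinations that names a conversion formula, and one table holding the endoscope system's default setting values. The dictionary keys loosely mirror the labels in FIG. 12, but the structure itself is an assumption made for illustration.

```python
# Assumed layout of the camera image quality converter database.

system_scope_conversions = {
    # (conversion destination: connected system+scope,
    #  conversion source: system+scope the inference model learned on) -> formula
    (("S:A", "ES:A"), ("S:A", "ES:B")): "Conversion A",
    (("S:A", "ES:A"), ("S:B", "ES:B")): "Conversion C",
}

setting_value_defaults = {
    "S:A": {"brightness": "Def", "tone": "Def", "color_mode": "Def", "contrast": "Def"},
}

def select_conversion(destination, source):
    """Look up the formula associated with the destination/source combination."""
    return system_scope_conversions.get((destination, source))

def settings_to_defaults(system, current_settings):
    """Pair each in-use setting value with the system's default value."""
    defaults = setting_value_defaults[system]
    return {name: (value, defaults[name]) for name, value in current_settings.items()}

if __name__ == "__main__":
    print(select_conversion(("S:A", "ES:A"), ("S:A", "ES:B")))   # -> Conversion A
    print(settings_to_defaults("S:A", {"brightness": "Def", "tone": "R-/B+"}))
```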
 The user can set content (conversion formulas, calculation methods, conditional branches, and the like) in the preprocessing platform 40 by, for example, selecting the content on the setting selection screen and dragging the selected content into the preprocessing platform 40 area on the setting screen. As will be described in detail later, the preprocessing platform 40 may also set the conversion formula automatically.
 Next, on the setting screen, the user performs an operation that connects the content set in the preprocessing platform 40 and its input and output ports (for example, in the example shown in FIG. 12, dragging a plurality of connection sources (for example, PortPPI1 to PortPPI5) onto a plurality of connection destinations (for example, "Conversion C" and "Conversion A")). The connection sources and the connection destinations are thereby connected, and data flows from the connection sources to the connection destinations.
 FIG. 13 is a diagram illustrating settings in the inference platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
 On the setting selection screen of the inference platform 10 according to the application example, in addition to the example shown in FIG. 6, the inference models are grouped by combination of the type of endoscope system (S) and the type of endoscope (ES) that captured the surgical images on which each inference model was trained. In the example shown in FIG. 13, for example, Model A of the basic models, the ** hospital model of the open models, and Model F of the closed models are grouped under the information (S:A+ES:A) indicating the type of endoscope system and the type of endoscope that captured the surgical images on which these models (inference models) were trained. That is, the inference platform 10 stores each inference model in association with the combination of the type of endoscope system (S) and the type of endoscope (ES) that captured the surgical images on which that inference model was trained.
 The display platform 30 according to the application example may allow, as setting items for each model (GUI A to F) in the display processing unit on the setting selection screen (see FIG. 10), the setting of information indicating the type of endoscope system (S) that is connected to the surgical image processing platform 1A and captures surgical images, information indicating the type of endoscope (ES), and the setting values of the endoscope system (S).
 In this case, the preprocessing platform 40 may acquire, from an input port (PortPPI5 in the example shown in FIG. 11), the information set on the display platform 30 indicating the type of endoscope system (S) used in the environment in which the surgical image processing platform 1A is used, the information indicating the type of endoscope (ES), and the setting values of the endoscope system (S). Note that the preprocessing platform 40 may acquire, as conversion-destination information, information indicating the combination of the types of the endoscope system (S) and the endoscope (ES) from a serial signal of the endoscope system input from an input port (PortPPI1 in the example shown in FIG. 11). The preprocessing platform 40 may also acquire, as conversion-source information, the setting values of the endoscope system (S) input from an input port (PortPPI1 in the example shown in FIG. 11).
 Next, an example in which the preprocessing platform 40 automatically sets the conversion formula will be described.
 The preprocessing platform 40 acquires, as conversion-source information, the information input from an input port (PortPPI4 in the example shown in FIG. 11) indicating the combination of the type of endoscope system (S) and the type of endoscope (ES) on whose images the inference model set in the inference platform 10 was trained.
 FIG. 14 is a diagram illustrating the inference model device information that the inference platform outputs to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
 The inference platform 10 outputs, to the preprocessing platform 40, inference model device information indicating the combination of the type of endoscope system (S) and the type of endoscope (ES) that captured the surgical images on which the inference model selected by the user was trained.
 In the example shown in FIG. 14, for example, when the inference model selected by the user on the inference platform 10 is Model B, the inference platform 10 outputs, to the preprocessing platform 40 as the inference model device information, the information (S:A+ES:B) indicating the type of endoscope system and the type of endoscope that captured the surgical images on which Model B was trained.
 FIG. 15 is a diagram illustrating the connected device information that the display platform outputs to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
 The display platform 30 outputs, to the preprocessing platform 40, connected device information indicating the type of endoscope system (S) that is connected to the surgical image processing platform 1A and captures surgical images, the type of endoscope (ES), and the setting values of the endoscope system (S) ("brightness", "color tone", "color mode", "contrast").
 In the example shown in FIG. 15, for example, the display platform 30 outputs, to the preprocessing platform 40, connected device information indicating the type of endoscope system (S) that captures the surgical images (S:A), the type of endoscope (ES) (ES:A), and the setting values of the endoscope system (S) ("brightness" (Def (default)), "color tone" (R-/B+), "color mode" (Def), "contrast" (high)).
 In the case described above, the preprocessing platform 40 acquires, as the information indicating the conversion source, the information (S:A+ES:B) in the inference model device information indicating the type of endoscope system (S) and the type of endoscope (ES) that captured the surgical images on which Model B was trained. The preprocessing platform 40 also acquires, as the information indicating the conversion destination, the information (S:A+ES:A) in the connected device information of the example shown in FIG. 15 indicating the type of endoscope system (S) and the type of endoscope (ES) that are connected to the surgical image processing platform 1A and capture the surgical images.
 In this case, in the example shown in FIG. 12, the preprocessing platform 40 selects, as the "endoscope system (S) + endoscope (ES) conversion element", the conversion formula "Conversion A" with which the conversion source (S:A+ES:B) and the conversion destination (S:A+ES:A) are associated.
 Further, in the example shown in FIG. 12, the preprocessing platform 40 selects, as the "endoscope system (S) setting value conversion element", a conversion formula that converts the conversion source ("brightness" (Def (default)), "color tone" (R-/B+), "color mode" (Def), "contrast" (high)) into the conversion destination (the default setting values of the endoscope system (S:A) that is connected to the surgical image processing platform 1A and acquires the surgical images).
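 The automatic selection described above can be sketched as a lookup that takes the inference model device information (from the inference platform) as the conversion source and the connected device information (from the display platform) as the conversion destination. The record fields and the function name below are assumptions; only the S:A/ES:A/ES:B labels and the "Conversion A" result follow the example of FIGS. 12, 14, and 15.

```python
# Hypothetical automatic selection of the conversion formula in the
# preprocessing platform from the two pieces of device information.

system_scope_conversions = {
    (("S:A", "ES:A"), ("S:A", "ES:B")): "Conversion A",
}

def auto_select_conversion(inference_model_device_info, connected_device_info):
    source = (inference_model_device_info["system"], inference_model_device_info["scope"])
    destination = (connected_device_info["system"], connected_device_info["scope"])
    return system_scope_conversions.get((destination, source))

if __name__ == "__main__":
    model_info = {"model": "Model B", "system": "S:A", "scope": "ES:B"}   # from the inference platform
    device_info = {"system": "S:A", "scope": "ES:A",                      # from the display platform
                   "settings": {"brightness": "Def", "tone": "R-/B+",
                                "color_mode": "Def", "contrast": "High"}}
    print(auto_select_conversion(model_info, device_info))                # -> Conversion A
```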
 According to the surgical image processing platform 1A of such an application example, the preprocessing platform 40 converts the image quality of the surgical images obtained by photographing the surgery, and the inference platform 10 analyzes the surgical images whose image quality has been converted. As a result, instead of analyzing the surgical images input to the surgical image processing platform 1A as they are, the image quality of the surgical images is converted into, for example, an image quality that improves the analysis accuracy of the inference platform 10, and the surgical images with the converted image quality are analyzed, which prevents the accuracy of the analysis results from decreasing.
 Further, according to the surgical image processing platform 1A, a converter used by one user can be provided, for example, to the provider of the surgical image processing platform 1A or to another user.
 This makes it possible to improve or reuse converters that have been used to convert the image quality of surgical images, which improves usability.
 Further, according to the surgical image processing platform 1A, the preprocessing platform 40 converts the image quality of the surgical images obtained by photographing the surgery to be analyzed using a conversion formula corresponding to the image quality of the images learned by the inference platform 10, which prevents the accuracy of the analysis results from decreasing.
 Although the present invention has been described above using the embodiment, it goes without saying that the technical scope of the present invention is not limited to the scope described in the above embodiment. It will be apparent to those skilled in the art that various changes or improvements can be made to the above embodiment, and it is clear from the claims that embodiments with such changes or improvements can also be included in the technical scope of the present invention. In the above embodiment, the present invention has been described as an invention of a product, namely a surgical image processing platform; however, the present invention can also be regarded as an invention of a program that causes the surgical image processing platform to function as the respective units.
1, 1A Surgical image processing platform
10 Inference platform
20 Calculation platform
30 Display platform
40 Preprocessing platform
100 Surgical support system

Claims (9)

  1. A surgical image processing platform that processes surgical images obtained by photographing a surgery performed by a surgeon, the platform comprising:
     an inference means in which an inference model for analyzing the surgical images is set;
     a calculation means in which a drawing mode for generating a drawn image in which an analysis result of the inference model is reflected in the surgical image is set; and
     a display setting means in which a display mode for displaying the drawn image on a display means in a predetermined manner is set,
     wherein the inference model, the drawing mode, and the display mode are each set individually.
  2. The surgical image processing platform according to claim 1, wherein
     a plurality of the inference models can be set in the inference means,
     a plurality of the drawing modes can be set in the calculation means, and
     a plurality of the display modes can be set in the display setting means.
  3. The surgical image processing platform according to claim 1, wherein, in each of the inference means, the calculation means, and the display setting means, an input port through which data is input and an output port through which data is output are set individually.
  4. The surgical image processing platform according to claim 3, wherein a plurality of the input ports and a plurality of the output ports can be set in each of the inference means, the calculation means, and the display setting means.
  5. The surgical image processing platform according to claim 3 or 4, wherein
     the input port of the inference means is connectable to an external device and to the output port of the display setting means, and can receive data output from the external device and data output from the display setting means,
     the input port of the calculation means is connectable to the output port of the inference means and/or the display setting means, and can receive data output from the inference means and/or the display setting means, and
     the input port of the display setting means is connectable to the output port of the calculation means, and can receive data output from the calculation means.
  6. The surgical image processing platform according to claim 1, further comprising a preprocessing means for converting an image quality of the surgical image,
     wherein an inference model for analyzing the surgical image whose image quality has been converted is set in the inference means.
  7. The surgical image processing platform according to claim 6, wherein the preprocessing means is capable of outputting a converter that converts the image quality of the surgical image.
  8. The surgical image processing platform according to claim 6 or 7, wherein the preprocessing means converts the image quality of the surgical image using a conversion formula corresponding to the image quality of images learned by the inference model.
  9. A program that causes a surgical image processing platform that processes surgical images obtained by photographing a surgery performed by a surgeon to function as:
     an inference means in which an inference model for analyzing the surgical images is set;
     a calculation means in which a drawing mode for generating a drawn image in which an analysis result of the inference model is reflected in the surgical image is set; and
     a display setting means in which a display mode for displaying the drawn image on a display means in a predetermined manner is set,
     wherein the inference model, the drawing mode, and the display mode are each set individually.
PCT/JP2023/014776 2022-04-11 2023-04-11 Surgical image processing platform and computer program WO2023199923A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023551148A JPWO2023199923A1 (en) 2022-04-11 2023-04-11

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-065302 2022-04-11
JP2022065302 2022-04-11

Publications (1)

Publication Number Publication Date
WO2023199923A1 true WO2023199923A1 (en) 2023-10-19

Family

ID=88329827

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/014776 WO2023199923A1 (en) 2022-04-11 2023-04-11 Surgical image processing platform and computer program

Country Status (2)

Country Link
JP (1) JPWO2023199923A1 (en)
WO (1) WO2023199923A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019115664A (en) * 2017-12-26 2019-07-18 バイオセンス・ウエブスター・(イスラエル)・リミテッドBiosense Webster (Israel), Ltd. Use of augmented reality to assist navigation during medical procedures
WO2019181432A1 (en) * 2018-03-20 2019-09-26 ソニー株式会社 Operation assistance system, information processing device, and program
WO2019239854A1 (en) * 2018-06-12 2019-12-19 富士フイルム株式会社 Endoscope image processing device, endoscope image processing method, and endoscope image processing program
WO2022025151A1 (en) * 2020-07-30 2022-02-03 アナウト株式会社 Computer program, method for generating learning model, surgery assistance device, and information processing method


Also Published As

Publication number Publication date
JPWO2023199923A1 (en) 2023-10-19


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2023551148

Country of ref document: JP

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23788341

Country of ref document: EP

Kind code of ref document: A1