WO2023199923A1 - Surgical image processing platform and computer program - Google Patents

Surgical image processing platform and computer program

Info

Publication number
WO2023199923A1
Authority
WO
WIPO (PCT)
Prior art keywords
platform
inference
surgical
display
surgical image
Prior art date
Application number
PCT/JP2023/014776
Other languages
English (en)
Japanese (ja)
Inventor
直 小林
勇太 熊頭
成昊 銭谷
栄二 阿武
Original Assignee
アナウト株式会社
Priority date
Filing date
Publication date
Application filed by アナウト株式会社
Priority to JP2023551148A
Publication of WO2023199923A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • the present invention relates to a surgical image processing platform and computer program for processing surgical images taken of a surgery performed by a surgeon.
  • Patent Document 1 proposes a surgical support system that includes an analysis unit that generates risk analysis information for a first surgical image by applying the first surgical image, acquired by an acquisition unit, to a trained model generated using learning data including a second surgical image (a surgical image different from the first surgical image) and information regarding the risk of complications due to the surgery, and an output unit that outputs an image in which surgical support information based on the risk analysis information generated by the analysis unit is superimposed on the surgical image.
  • the learned model used in the surgical support system may be generated using learning data accumulated for each user.
  • the content of the surgical support information generated by the learned model that is actually needed differs depending on the user.
  • the display mode in which the surgical support information is superimposed on the surgical image also differs depending on the user, both in terms of visibility and in terms of which display mode best supports the surgery. For this reason, users of surgical support systems want to freely select and combine trained models, the content of the surgical support information, and the display mode for superimposing the surgical support information on the surgical image.
  • however, the learned model, the content of the surgical support information, and the display mode for superimposing the surgical support information on the surgical image are set by the provider of the surgical support system, and cannot be freely selected by the user of the surgical support system.
  • the present invention has been made in view of the above points, and its purpose is to provide a surgical image processing platform and computer program in which the models to be implemented in the layers that execute the analysis, drawing, and display processing of surgical images can be freely set.
  • (1) A surgical image processing platform that processes surgical images taken of a surgery performed by a surgeon, comprising: an inference means in which an inference model for analyzing the surgical image is set; a calculation means in which a drawing mode for generating a drawn image that reflects the analysis result of the inference model in the surgical image is set; and a display setting means for setting a display mode for displaying the drawn image on a display means in a predetermined manner; wherein the inference model, the drawing mode, and the display mode are each set individually.
  • the surgical image processing platform includes an inference means, an arithmetic means, and a display setting means, and processes a surgical image taken of a surgery performed by a surgeon.
  • an inference model for analyzing surgical images is set.
  • the calculation means is set with a drawing mode that generates a drawn image in which the analysis result by the inference model is reflected in the surgical image.
  • the display setting means sets a display mode for displaying the drawn image on the display means in a predetermined manner.
  • the inference model, drawing mode, and display mode are individually set.
  • in other words, the platform includes an inference means in which an inference model can be set, an arithmetic means in which a drawing mode can be set, and a display setting means in which a display mode can be set.
  • the inference model, drawing mode, and display mode can each be set individually.
  • users of the surgical image processing platform can, for example, individually set their own inference model, drawing mode, and display mode, and thereby analyze surgical images in their own manner, generate a drawn image in their own manner based on the analysis results, and display this drawn image in their own manner. Therefore, in processing surgical images, it is possible to provide a surgical image processing platform in which the models to be implemented in the layers that perform analysis, drawing, and display processing can be freely set.
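  • To make this layered structure concrete, the following is a minimal, hypothetical sketch (not taken from the patent) of how an inference model, a drawing mode, and a display mode could each be set individually and then chained; all class and function names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Any

# Each layer holds one user-settable component; the components are independent
# of one another, mirroring the individually set inference model, drawing
# mode, and display mode described above.

@dataclass
class InferencePlatform:
    model: Callable[[Any], Any]              # inference model (e.g. a wrapper around an ONNX session)

@dataclass
class CalculationPlatform:
    drawing_mode: Callable[[Any, Any], Any]  # (surgical image, analysis result) -> drawn image

@dataclass
class DisplayPlatform:
    display_mode: Callable[[Any], None]      # drawn image -> shown or stored in the chosen way

def run_pipeline(frame, inf: InferencePlatform, calc: CalculationPlatform, disp: DisplayPlatform):
    """Analyze a surgical image, reflect the result in a drawn image, and display it."""
    analysis = inf.model(frame)
    drawn = calc.drawing_mode(frame, analysis)
    disp.display_mode(drawn)

# Example: each user plugs in their own components without touching the others.
if __name__ == "__main__":
    inf = InferencePlatform(model=lambda frame: {"nerve_mask": None})
    calc = CalculationPlatform(drawing_mode=lambda frame, result: frame)
    disp = DisplayPlatform(display_mode=lambda drawn: print("displayed", type(drawn).__name__))
    run_pipeline(frame="dummy-frame", inf=inf, calc=calc, disp=disp)
```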
  • (2) The surgical image processing platform according to (1), wherein a plurality of the inference models can be set in the inference means, a plurality of the drawing modes can be set in the calculation means, and a plurality of the display modes can be set in the display setting means.
  • a plurality of inference models can be set in the inference means, a plurality of drawing modes can be set in the calculation means, and a plurality of display modes can be set in the display setting means.
  • (3) The surgical image processing platform according to (1) or (2), wherein, in each of the inference means, the calculation means, and the display setting means, an input port for inputting data and an output port for outputting data are individually set.
  • According to this, the input port for inputting data and the output port for outputting data can be set individually, so in each layer of surgical image processing (inference means, calculation means, display setting means) it becomes possible to individually set the data input source and data output destination. This increases the freedom of input sources and output destinations for each layer.
  • According to the invention of (4), it is possible to set a plurality of input ports and a plurality of output ports in each of the inference means, the calculation means, and the display setting means.
  • This allows each layer in the processing of surgical images (inference means, calculation means, display setting means) to receive various kinds of data and to output data in various directions (for example, to another device or to another layer).
  • (5) The surgical image processing platform according to (3) or (4), wherein the input port of the inference means is connectable to an external device and to the output port of the display setting means, so that data output from the external device and data output from the display setting means can be input; the input port of the calculation means is connectable to the output ports of the inference means and/or the display setting means, so that data output from the inference means and/or the display setting means can be input; and the input port of the display setting means is connectable to the output port of the calculation means, so that data output from the calculation means can be input.
  • According to the invention of (5), it becomes possible to input data from the display setting means to the inference means in addition to data from the external device. It is also possible to input data from the display setting means to the calculation means in addition to data from the inference means. Data can therefore be fed back from a means located downstream in the processing (for example, the display setting means) to a means located upstream in the processing (for example, the inference means or the calculation means). This makes it possible, for example, to repeatedly process data input from an external device with mutually different inference models and drawing modes and thereby obtain a plurality of types of results.
  • (6) The surgical image processing platform further comprising preprocessing means for converting the image quality of the surgical image.
  • the preprocessing means converts the image quality of a surgical image taken during surgery, and the inference means analyzes the surgical image whose image quality has been converted.
  • According to this, the image quality of the surgical images is converted to an image quality that improves the analysis accuracy of the inference means, and by analyzing the surgical images whose image quality has been converted, it is possible to prevent the accuracy of the analysis results from decreasing.
  • the preprocessing means can output a converter that converts the image quality of the surgical image.
  • if the external device that captured the surgical images learned by the inference model differs from the external device actually used, the image quality of the surgical images captured by these external devices will also differ from each other. In such a case, there is a risk that the accuracy of the analysis results will decrease.
  • the preprocessing means converts the image quality of the surgical image taken of the surgery to be analyzed using a conversion formula according to the image quality of the images learned by the inference model, so that a decrease in the accuracy of the analysis results can be prevented.
  • A computer program for a surgical image processing platform that processes surgical images taken of a surgery performed by a surgeon, the program causing a computer to function as: an inference means in which an inference model for analyzing the surgical image is set; a calculation means in which a drawing mode for generating a drawn image that reflects the analysis result of the inference model in the surgical image is set; and a display setting means for setting a display mode for displaying the drawn image on a display means in a predetermined manner; wherein the inference model, the drawing mode, and the display mode are each individually set.
  • According to the present invention, it is possible to provide a surgical image processing platform and a computer program in which the models to be implemented in the layers that execute the analysis, drawing, and display processes of surgical images can be freely set.
  • FIG. 1 is a diagram showing the functional configuration of a surgical support system to which a surgical image processing platform according to an embodiment of the present invention is applied.
  • FIG. 2 is an example of a display mode produced by the surgical image processing platform according to the embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an overview of the surgical image processing platform according to the embodiment of the present invention.
  • FIG. 4 is a diagram illustrating data specifications in the surgical image processing platform according to the embodiment of the present invention.
  • FIG. 5 is a diagram illustrating the flow of data in the surgical image processing platform according to the embodiment of the present invention.
  • FIGS. 6 and 7 are diagrams illustrating settings in the inference platform of the surgical image processing platform according to the embodiment of the present invention.
  • FIGS. 8 and 9 are diagrams illustrating settings in the calculation platform of the surgical image processing platform according to the embodiment of the present invention.
  • FIG. 10 is a diagram illustrating settings in the display platform of the surgical image processing platform according to the embodiment of the present invention.
  • FIG. 11 is a diagram illustrating the flow of data in a surgical image processing platform according to an application example of the embodiment of the present invention.
  • FIG. 12 is a diagram illustrating settings in the preprocessing platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
  • FIG. 13 is a diagram illustrating settings in the inference platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
  • FIG. 14 is a diagram illustrating inference model device information output by the inference platform to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
  • FIG. 15 is a diagram illustrating connected device information output by the display platform to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
  • FIG. 1 is a diagram showing the functional configuration of a surgical support system to which a surgical image processing platform according to an embodiment of the present invention is applied.
  • A surgical image processing platform 1 (inference platform 10, calculation platform 20, display platform 30) according to an embodiment of the present invention is applied to a surgical support system that supports surgeons by processing and displaying surgical images, which are images taken of a surgery performed by a surgeon at a medical institution (for example, an institution such as a hospital where the surgeon performs the surgery).
  • the surgical support system 100 uses an acquisition unit (Camera Capture Module, etc.) to acquire a surgical image, which is an image of a surgery performed by a surgeon at a medical institution, from an external device (Camera & Imager, etc.) via a connection unit (Camera Driver integrated IF, etc.).
  • Surgical images are images in which the body of the patient undergoing surgery and the instruments operated by the surgeon and assistants (for example, forceps, electric scissors, electric scalpels, and energy devices such as ultrasonic coagulation and cutting devices) are photographed.
  • in addition to the acquisition unit and the surgical image processing platform 1 that processes surgical images, the surgical support system 100 includes a system control unit (Ubuntu OS/Kernel, various system libraries, etc.) that controls the entire surgical support system 100, various modules that process data from various sensors and devices, and connection units (camera driver integrated IF, sensor integrated IF, etc.) for various connected devices.
  • the functional configuration of the surgical support system 100 is just an example, and one functional block (database and functional processing unit) may be divided, or multiple functional blocks may be combined into one functional block.
  • Each functional processing unit is realized by a CPU (Central Processing Unit) as a first control unit and a GPU (Graphics Processing Unit) as a second control unit, built into the device or terminal, reading out and executing a computer program (for example, core software or an application that causes the CPU to execute the various processes described above) stored in a storage device (storage unit) such as a ROM (Read Only Memory), flash memory, SSD (Solid State Drive), or hard disk.
  • each functional processing unit may be configured with an FPGA (Field-Programmable Gate Array).
  • each functional processing unit reads and writes necessary data, such as tables, from a database (DB) stored in a storage device or from a storage area in memory.
  • the database (DB) in the embodiment of the present invention may be a commercial database, but it may also simply be a collection of tables and files; the internal structure of the database itself does not matter.
  • the CPU as the first control unit mainly realizes the functions of the system control section, and the GPU as the second control unit mainly realizes the functions of the surgical image processing platform 1 (inference platform 10, calculation platform 20, and display platform 30).
  • the surgical image processing platform 1 processes surgical images acquired by an acquisition unit (Camera Capture Module, etc.). Further, the surgical image processing platform 1 classifies data for each external device (Camera & Imager, etc.) and records it in the DB according to the internal clock. This allows the user to check data for a single item or multiple items in chronological order.
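  • As an illustration of the kind of per-device, clock-ordered recording described here, the sketch below stores records keyed by external device and timestamp so they can be read back chronologically for one or several devices; the schema and function names are assumptions, not part of the patent.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical record store: data is classified per external device and
# stamped with the internal clock so it can later be read back in
# chronological order for one device or several.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE device_data (device TEXT, recorded_at TEXT, payload BLOB)")

def record(device: str, payload: bytes) -> None:
    conn.execute(
        "INSERT INTO device_data VALUES (?, ?, ?)",
        (device, datetime.now(timezone.utc).isoformat(), payload),
    )

def chronological(devices: list[str]):
    """Return (device, timestamp) rows for the given devices in time order."""
    marks = ",".join("?" for _ in devices)
    return conn.execute(
        f"SELECT device, recorded_at FROM device_data "
        f"WHERE device IN ({marks}) ORDER BY recorded_at",
        devices,
    ).fetchall()

record("Camera & Imager", b"...frame bytes...")
record("Device A", b"...position bytes...")
print(chronological(["Camera & Imager", "Device A"]))
```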
  • the inference platform 10, which is an example of an inference means, inputs physical information indicating the state of the body in a surgical image and/or instrument information indicating the state of the instrument being operated by the surgeon into an inference model, and uses AI (Artificial Intelligence) analysis to infer the anatomical structure, the structure of the anatomical structure, the trajectory of the instrument, the state of the instrument relative to the anatomical structure, and so on.
  • An inference model is set in the inference platform 10 by a user who uses the surgical support system 100.
  • the calculation platform 20, which is an example of a calculation means, converts the analysis results of the inference platform 10 (anatomical structure, structure of the anatomical structure, trajectory of the instrument, state of the instrument with respect to the anatomical structure, etc.) into a predetermined drawing mode and generates a drawn image in which they are reflected in the surgical image.
  • the drawing mode is a mode in which the analysis results of the inference platform 10 are displayed superimposed on the surgical image (for example, a part analyzed as a specific organ or body fluid is displayed in a different color for each type of organ or body fluid).
  • the drawing mode also includes other information that supports the surgeon performing the surgery (for example, information indicating the elapsed time of the surgery, information indicating the ideal trajectory of the instrument, and information indicating the timing of resection of the organ to be resected).
  • Such a drawing mode is realized by an arithmetic processing unit that is an algorithm or the like that generates such a mode. That is, the user who uses the surgical support system 100 sets the drawing mode by selecting the arithmetic processing unit.
  • the display platform (also referred to as "UI platform") 30, which is an example of display setting means, displays the drawn image generated by the calculation platform 20 on a display means (console/LCD (Liquid Crystal Display), etc.) in a predetermined display mode.
  • the display mode is a method of displaying a drawn image (for example, an original surgical image and a drawn image generated on the calculation platform 20 are displayed side by side).
  • Such a display mode is realized by a display processing unit, such as a UI (User Interface) serving as a user operation screen. That is, the user using the surgical support system 100 sets the display mode by selecting the display processing unit.
  • FIG. 2 is an example of a display mode by the surgical image processing platform according to the embodiment of the present invention.
  • the inference platform 10 analyzes the surgical image acquired by the acquisition unit (Camera Capture Module, etc.) (in the example shown in FIG. 2, analyzes the portion that becomes neural tissue).
  • the calculation platform 20 converts this analysis result into a predetermined drawing mode (in the example shown in FIG. 2, a mode in which the portion analyzed as neural tissue is colored), and generates a drawn image that is reflected in the surgical image.
  • the display platform 30 displays this drawn image, in a predetermined display mode (in the example shown in FIG. 2, the original surgical image and the drawn image generated on the calculation platform 20 are displayed side by side), on a display means (console/LCD (Liquid Crystal Display), etc.).
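  • A minimal sketch of the FIG. 2 style flow, under the assumption that the analysis result arrives as a per-pixel confidence map: pixels above a threshold are tinted (drawing mode) and the original and drawn images are placed side by side (display mode). NumPy is used for illustration only; the data shapes and function names are assumptions.

```python
import numpy as np

def draw_overlay(frame: np.ndarray, confidence: np.ndarray,
                 color=(0, 255, 0), threshold=0.5, opacity=0.4) -> np.ndarray:
    """Drawing mode: color the pixels inferred as the target tissue and blend them onto the frame."""
    drawn = frame.astype(np.float32).copy()
    mask = confidence >= threshold                       # pixels analyzed as the target tissue
    tint = np.array(color, dtype=np.float32)
    drawn[mask] = (1.0 - opacity) * drawn[mask] + opacity * tint
    return drawn.astype(np.uint8)

def side_by_side(original: np.ndarray, drawn: np.ndarray) -> np.ndarray:
    """Display mode: original surgical image and drawn image next to each other."""
    return np.hstack([original, drawn])

# Dummy 4x4 "surgical image" and confidence map standing in for real data.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
confidence = np.random.rand(4, 4)
panel = side_by_side(frame, draw_overlay(frame, confidence))
print(panel.shape)   # (4, 8, 3): two 4x4 images placed side by side
```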
  • FIG. 3 is a diagram illustrating an overview of a surgical image processing platform according to an embodiment of the present invention.
  • In the surgical image processing platform 1 (inference platform 10, calculation platform 20, display platform 30), the inference model, the drawing mode (arithmetic processing unit), and the display mode (display processing unit) are each individually set by the user.
  • the diagram on the left shows an example of settings for company X, which is a user
  • the diagram on the right shows an example of settings for company Y, which is a user.
  • As inference models, company X, which is a user, sets Model A, which is a company model, and Model F, which is a model exclusive to company X.
  • Algo A, which is an in-house drawing algorithm, and Algo D, which is another company's open drawing algorithm, are set by company X as the drawing modes (arithmetic processing units).
  • GUI D, which is another company's open GUI, is set by company X as the display mode (display processing unit).
  • In the inference platform 10, company X sets Model A and Model F so that data (surgical images, etc.) from external devices is input. Furthermore, in the calculation platform 20, Algo A is set to receive data (analysis results) from Model A, and Algo D is set to receive data (analysis results) from Model F. Furthermore, in the display platform 30, GUI D is set so that data (drawn images) from Algo A and Algo D are input. In addition, in the display platform 30, data from GUI D is set to be output to a display means (LCD (Liquid Crystal Display)) and a storage means.
  • the inference models set by company Y which is the user, are the in-house model Model B, the open model **Doctor model, and the models of other companies.
  • Algo F, which is an algorithm exclusive to company Y, is set by company Y as the drawing mode (arithmetic processing unit), and GUI F, which is a GUI exclusive to company Y, is set by company Y as the display mode (display processing unit).
  • In the inference platform 10, company Y sets Model B to receive data from Device A (device position information, etc.), and sets the **Doctor model and the other companies' models to receive data from Endoscope A and Endoscope B (surgical images, etc.). Further, in the calculation platform 20, Algo F is set to receive data (analysis results) from Model B, the **Doctor model, and the other companies' models. In addition, in the calculation platform 20, data from Algo F is set to be output (fed back) to Device A. Further, in the display platform 30, data (drawn images) from Algo F is set to be input to GUI F. Furthermore, in the display platform 30, data from GUI F is set to be output to a display means (console) and a storage means (storage).
  • In this way, a plurality of inference models can be set in the inference platform 10, a plurality of drawing modes (arithmetic processing units) can be set in the calculation platform 20, and a plurality of display modes (display processing units) can be set in the display platform 30.
  • data from a plurality of external devices may be input to the plurality of inference models respectively, or data from a plurality of external devices may be input to one inference model.
  • data (analysis results) from a plurality of inference models may be input to a plurality of drawing modes (arithmetic processing units) respectively, or data from a plurality of inference models may be input to one drawing mode (arithmetic processing unit).
  • data generated by the drawing mode (arithmetic processing unit) is mainly output to the display platform 30, but it is not limited to the display platform 30 and may also be output to an external device.
  • data (drawn images) from a plurality of drawing modes (arithmetic processing units) may be input to a plurality of display modes (display processing units) respectively, or data from a plurality of drawing modes (arithmetic processing units) may be input to one display mode (display processing unit).
  • data from the display mode is mainly output to a display means (console/LCD (Liquid Crystal Display), etc.), but it is not limited to the display means and may also be output to a storage device, or to the inference platform 10 or the calculation platform 20.
  • FIG. 4 is a diagram illustrating data specifications in the surgical image processing platform according to the embodiment of the present invention.
  • In the surgical image processing platform 1, for each layer (inference platform 10, calculation platform 20, display platform 30), the input data formats that can be input, the output data formats that can be output, and the implementation files that can be set (the inference model of the inference platform 10, the arithmetic processing unit of the calculation platform 20, and the display processing unit of the display platform 30) are defined.
  • In the inference platform 10, the input data format is preprocessed input image data developed on the GPU, and the size and dtype of the data are defined.
  • the output data format representing the analysis result is confidence data developed on the GPU, and the size and dtype of the data are defined.
  • a general format (model converted to ONNX) is defined for the implementation file.
  • In the calculation platform 20, the input data format is defined as the same as the output data format of the inference platform 10. This makes it possible to input data output from the inference platform 10 to the calculation platform 20.
  • the output data format representing the drawn image is a display image developed on the GPU, and the size and dtype of the data are defined.
  • a computation file written in a designated language is defined as the implementation file.
  • In the display platform 30, the input data format is defined as the same as the output data format of the calculation platform 20. This makes it possible to input data output from the calculation platform 20 to the display platform 30.
  • the output data format indicating the display mode is a data group including display of the designated library, event processing, and the like.
  • the UI file already described in the specified library is defined as the implementation file.
  • the content that can be set (inference models, arithmetic processing units, display processing units) is not limited to content prepared in advance by the provider of the surgical support system 100; content unique to the user, content provided by a third party, or content provided as open content can also be set on the surgical image processing platform 1.
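  • Since the implementation file of the inference platform is defined as a model converted to ONNX and its input/output are GPU-resident arrays with a defined size and dtype, loading and running such a model might look like the sketch below; onnxruntime is used purely for illustration, and the file name, input size, and dtype are assumptions.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical implementation file: an inference model converted to ONNX.
# The file name is a placeholder for whatever the user registers on the platform.
session = ort.InferenceSession(
    "nerve_segmentation.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)  # defined size/dtype of the input data

# Preprocessed input image data; shape and dtype must match the defined input data format.
frame = np.zeros((1, 3, 512, 512), dtype=np.float32)       # assumed size for illustration

# The output data format is confidence data whose size/dtype is likewise defined.
# A single output is assumed here for simplicity.
(confidence,) = session.run(None, {input_meta.name: frame})
print(confidence.shape, confidence.dtype)
```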
  • FIG. 5 is a diagram illustrating the flow of data in the surgical image processing platform according to the embodiment of the present invention.
  • In the inference platform 10, the calculation platform 20, and the display platform 30, input ports for inputting data and output ports for outputting data are individually set by the user. Furthermore, the inference platform 10, the calculation platform 20, and the display platform 30 can each be configured by the user with a plurality of input ports and a plurality of output ports.
  • the input ports (PortII1, PortII2, etc. shown in FIG. 5) set in the inference platform 10 are connected by the user to the connection units (Device FI1, CameraIFI1, etc.) connected to external devices (Camera & Imager, etc.), and data (surgical images (physical information, instrument information)) from an external device is input through the connection units.
  • This input data is supplied to the inference model.
  • the output ports (PortIO1, PortIO2, etc. shown in FIG. 5) set in the inference platform 10 are connected by the user to the input ports (PortDI1, PortDI2, etc. shown in FIG. 5) of the calculation platform 20, to the input ports of the display platform 30, to the storage means, or to the IF of external devices, and output data (analysis results) from the inference model is output to these.
  • the input ports (PortDI1, PortDI2, etc. shown in FIG. 5) set in the calculation platform 20 are connected by the user to the output ports (PortIO1, PortIO2, etc. shown in FIG. 5) of the inference platform 10 or to the output ports of the display platform 30, and output data (analysis results) from the inference platform 10 and output data (display mode) from the display platform 30 are input.
  • This input data is supplied to the arithmetic processing section.
  • the output ports (PortDO1, PortDO2, etc. shown in FIG. 5) set in the calculation platform 20 are connected by the user to the input ports of the display platform 30 (PortGI1 shown in FIG. 5) and the like, and output data (drawn images) from the arithmetic processing unit is output to these.
  • the input port (PortGI1 shown in FIG. 5) set in the display platform 30 is connected by the user to the output port (PortDO1 shown in FIG. 5) of the calculation platform 20, and the output data (drawn image) from the calculation platform 20 is input.
  • This input data is supplied to the display processing section.
  • the output ports set in the display platform 30 are connected by the user to a display means (LCD/console), to an input port of the inference platform 10 (PortII4 shown in FIG. 5), to an input port of the calculation platform 20 (PortDI4 shown in FIG. 5), to the storage means, and to the IF of an external device, and output data (display mode) from the display processing unit is output to these.
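  • The port connections of FIG. 5 can be pictured as a directed graph whose edges the user sets individually. The sketch below is an assumed representation, not an API of the platform: it wires external devices into the inference layer, the inference layer into the calculation layer, the calculation layer into the display layer, and display outputs back upstream as feedback; the port labels follow FIG. 5 and are placeholders.

```python
# Each entry wires one output port to one input port.
connections = [
    ("CameraIF:I1",         "Inference:PortII1"),    # external device -> inference layer
    ("DeviceIF:I1",         "Inference:PortII2"),
    ("Inference:PortIO1",   "Calculation:PortDI1"),  # analysis results -> calculation layer
    ("Calculation:PortDO1", "Display:PortGI1"),      # drawn image -> display layer
    ("Display:PortGO1",     "LCD"),                  # display mode -> display means
    ("Display:PortGO2",     "Inference:PortII4"),    # feedback to an upstream layer
    ("Display:PortGO3",     "Calculation:PortDI4"),
]

def downstream_of(port: str) -> list[str]:
    """Return every input port that receives data from the given output port."""
    return [dst for src, dst in connections if src == port]

def route(src_port: str, payload) -> None:
    """Deliver a payload along every user-set connection leaving src_port."""
    for dst in downstream_of(src_port):
        print(f"{src_port} -> {dst}: {type(payload).__name__}")

route("Inference:PortIO1", {"confidence": "..."})
route("Display:PortGO2", {"display_mode": "..."})   # feedback path back into the inference layer
```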
  • FIGS. 6 and 7 are diagrams illustrating settings in the inference platform of the surgical image processing platform according to the embodiment of the present invention.
  • a user performs a setting operation when setting contents such as an inference model, other calculation elements, input ports, and output ports on the inference platform 10.
  • the setting operation is performed, for example, by operating an add button (in the example shown in FIG. 6, the part displayed as +) displayed on a setting screen displayed on the display unit by the display platform 30 functioning as a setting unit.
  • the inference platform 10 sets input ports (Port II1, etc. in the example shown in FIG. 6) and output ports (PortIO1, etc. in the example shown in FIG. 6) one by one.
  • the display platform 30 functioning as a setting means displays a setting selection screen from which a plurality of types of content can be selected on the display means.
  • the setting selection screen in the inference platform 10 displays, as content that the user can select, multiple types of inference models (in the example shown in FIG. 6, basic model, open model, closed model, etc.), multiple types of calculation methods for the analysis results of the inference models (ADD, MATMUL, ABS, etc. in the example shown in FIG. 6), multiple types of constants (SCALAR, etc. in the example shown in FIG. 6), and conditional branches (SELECTOR, etc. in the example shown in FIG. 6).
  • contents that can be selected by the user may be grouped by supply source, by analysis target (for example, anatomical structure, etc.), or by surgery content.
  • a plurality of inference models are grouped by supply source (basic model, open model, closed model).
  • In the example shown in FIG. 7, multiple inference models are grouped into multiple layers on the setting selection screen. Specifically, the top layer is grouped by analysis target (connective tissue, nerves, etc.), and within each group the models are further grouped by the details of the surgery (thoracoscopic lung resection, robot-assisted endoscopic surgery, stomach/large intestine region, etc.).
  • the user can set content (inference models, calculation methods, conditional branches, etc.) in the inference platform 10 by selecting the content on the setting selection screen and dragging the selected content into the inference platform 10 on the setting screen.
  • when the user performs an operation to connect the content set in the inference platform 10, an input port, and an output port (for example, in the example shown in FIG. 6, an operation connecting a connection source (for example, PortII1) to a connection destination on the setting screen), the connection source and connection destination are connected, and data flows from the connection source to the connection destination.
  • FIGS. 8 and 9 are diagrams illustrating settings on the calculation platform of the surgical image processing platform according to the embodiment of the present invention.
  • In the calculation platform 20, the user performs a setting operation when setting contents such as the arithmetic processing unit, other calculation elements, input ports, and output ports.
  • the setting operation is performed, for example, by operating an add button (in the example shown in FIG. 8, the part displayed as +) displayed on a setting screen displayed on the display means by the display platform 30 functioning as a setting means.
  • the calculation platform 20 sets input ports (PortDI1, etc. in the example shown in FIG. 8) and output ports (PortDO1, etc. in the example shown in FIG. 8) one by one.
  • the display platform 30 functioning as a setting means displays a setting selection screen from which a plurality of types of content can be selected on the display means.
  • the setting selection screen in the calculation platform 20 displays, as content that the user can select, multiple types of arithmetic processing units (in the example shown in FIG. 8, basic model, open model, closed model, etc.) and various types of processing for the processing results of the arithmetic processing units, for example multiple types of filters (in the example shown in FIG. 8, Low pass, High pass, Bandpass, etc.), multiple types of analysis methods (Peak, Histogram, FFT, etc. in the example shown in FIG. 8), multiple types of calculation methods (ADD, MATMUL, ABS, etc. in the example shown in FIG. 8), multiple types of constants (SCALAR, etc. in the example shown in FIG. 8), and the like.
  • the contents that can be selected by the user may be grouped by supply source, by analysis target (for example, anatomical structure, etc.), by surgical content, or by drawing expression. A plurality of arithmetic processing units are grouped by supply source (basic model, open model, closed model).
  • the content of the drawing expression realized by the arithmetic processing unit may be adjustable on the setting selection screen.
  • multiple arithmetic processing units can be selected for each drawing expression (drawing color, confidence threshold, opacity, drawing method, blinking display, etc.), and in each drawing expression item it is possible to adjust the drawing expression (for example, by inputting numerical values or selecting colors).
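  • As an illustration of adjustable drawing expressions (drawing color, confidence threshold, opacity, blinking display, and so on), the following hypothetical configuration object shows how such per-expression values could be held and applied; the field names and the 30 fps blink cadence are assumptions, not the patent's specification.

```python
from dataclasses import dataclass

@dataclass
class DrawingExpression:
    color: tuple = (255, 0, 0)         # drawing color selected by the user
    confidence_threshold: float = 0.5  # pixels below this confidence are not drawn
    opacity: float = 0.4               # blending ratio of the overlay
    blink: bool = False                # whether the overlay blinks on the display
    method: str = "fill"               # drawing method, e.g. "fill" or "outline"

def visible(expr: DrawingExpression, confidence: float, frame_index: int) -> bool:
    """Decide whether a pixel with the given confidence is drawn on this frame."""
    if confidence < expr.confidence_threshold:
        return False
    if expr.blink and (frame_index // 15) % 2 == 1:  # hide on alternating half-second blocks at 30 fps
        return False
    return True

nerve_expr = DrawingExpression(color=(0, 255, 255), confidence_threshold=0.7, blink=True)
print(visible(nerve_expr, confidence=0.8, frame_index=3))    # True: drawn on this frame
print(visible(nerve_expr, confidence=0.8, frame_index=20))   # False: hidden during the blink-off phase
```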
  • the user can set content (arithmetic processing units, calculation methods, conditional branches, etc.) in the calculation platform 20 by selecting the content on the setting selection screen and dragging the selected content into the calculation platform 20 on the setting screen. When the user performs an operation connecting a connection source (for example, PortDI1 to PortDI4) to a connection destination (for example, Model F) on the setting screen, the connection source and connection destination are connected, and data flows from the connection source to the connection destination.
  • FIG. 10 is a diagram illustrating settings on the display platform of the surgical image processing platform according to the embodiment of the present invention.
  • the user performs a setting operation when setting content such as the display processing unit, other calculation elements, input ports, and output ports.
  • the setting operation is performed, for example, by operating an add button (in the example shown in FIG. 10, the part displayed as +) displayed on a setting screen displayed on the display unit by the display platform 30 functioning as a setting unit.
  • the display platform 30 sets input ports (PortGI1, etc. in the example shown in FIG. 10) and output ports (PortGO1, etc. in the example shown in FIG. 10) one by one.
  • the display platform 30 functioning as a setting means displays a setting selection screen from which a plurality of types of content can be selected on the display means.
  • the setting selection screen in the display platform 30 displays, as content that the user can select, multiple types of display processing units (basic model, open model, closed model, etc.), multiple types of calculation methods for the processing results of the display processing units (ADD, MATMUL, ABS, etc.), and multiple types of conditional branches (SELECTOR, etc.).
  • contents that can be selected by the user may be grouped by supply source, by analysis target (for example, anatomical structure, etc.), by surgery content, or by display mode.
  • a plurality of display processing units are grouped by supply source (basic model, open model, closed model).
  • the user can set content (display processing units, calculation methods, conditional branches, etc.) in the display platform 30 by selecting the content on the setting selection screen and dragging the selected content into the display platform 30 on the setting screen. When the user performs an operation connecting a connection source (for example, PortGI1) to a connection destination (for example, GUI F) on the setting screen, the connection source and connection destination are connected, and data flows from the connection source to the connection destination.
  • the surgical image processing platform 1 may output test results of the constructed platform as a template for standard application to a predetermined organization. Furthermore, when introducing something other than the inference model, arithmetic processing unit, or display processing unit provided in advance in the surgical image processing platform 1, the surgical image processing platform 1 may be provided with a performance confirmation function for these.
  • In this way, a user of the surgical image processing platform 1 can, for example, individually set a unique inference model, drawing mode, and display mode, and thereby analyze surgical images in a unique manner, generate a drawn image in a unique manner based on the analysis results, and display this drawn image in a unique manner. Therefore, in processing surgical images, it is possible to provide a surgical image processing platform in which the models to be implemented in the layers that perform analysis, drawing, and display processing can be freely set.
  • In addition, a plurality of inference models can be set in the inference platform 10, a plurality of drawing modes can be set in the calculation platform 20, and a plurality of display modes can be set in the display platform 30. This makes it possible to implement various functions (functions realized by each model) in each layer (inference platform 10, calculation platform 20, display platform 30) in the processing of surgical images.
  • Furthermore, the input ports for inputting data and the output ports for outputting data can be set individually. As a result, in each layer of surgical image processing, it is possible to individually set the data input source and the data output destination, which increases the freedom of input sources and output destinations for each layer.
  • In the surgical image processing platform 1, it is possible to set a plurality of input ports and a plurality of output ports in each of the inference platform 10, the calculation platform 20, and the display platform 30. This allows various data to be input to each layer (inference platform 10, calculation platform 20, display platform 30) in surgical image processing, and allows the data to be output in various directions (for example, to another device or another layer).
  • In the surgical image processing platform 1, it is possible to input data from the display platform 30 into the inference platform 10 in addition to data from an external device, and to input data from the display platform 30 into the calculation platform 20 in addition to data from the inference platform 10. As a result, data can be fed back from a platform located downstream in the processing (for example, the display platform 30) to a platform located upstream in the processing (for example, the inference platform 10 or the calculation platform 20). This makes it possible, for example, to repeatedly process data input from an external device with mutually different inference models and drawing modes and thereby obtain a plurality of types of results.
  • FIG. 11 is a diagram illustrating a data flow in a surgical image processing platform according to an application example of an embodiment of the present invention.
  • the surgical image processing platform 1A includes a preprocessing platform 40, which is an example of preprocessing means.
  • As described above, the inference platform 10 inputs physical information indicating the state of the body and/or instrument information indicating the state of the instrument being operated by the surgeon into the inference model, and uses AI analysis to infer the anatomical structure, the structure of the anatomical structure, the trajectory of the instrument, the state of the instrument relative to the anatomical structure, and so on.
  • The types and models of external devices used in medical institutions (e.g. endoscope systems/endoscopes, etc.) vary depending on the medical institution. If the type of external device is different, the image quality of the obtained surgical image will also be different.
  • the inference model used by the inference platform 10 for analysis is learned from surgical images captured by a predetermined external device.
  • Therefore, the external device that captured the surgical images learned by the inference model may be different from the external device used by the medical institution that uses the surgical image processing platform 1. In such cases, the image quality of the surgical images captured by these external devices also differs. If there is a large difference between the image quality of the surgical images learned by the inference model and the image quality of the surgical images acquired when using the surgical image processing platform 1, the accuracy of the analysis results of the inference platform 10 may decrease.
  • the preprocessing platform 40 suppresses the discrepancy between the image quality of the surgical image input to the surgical image processing platform 1 and the image quality of the surgical images learned by the inference model, and thereby prevents the accuracy of the analysis results of the inference platform 10 from decreasing.
  • the preprocessing platform 40 which is an example of preprocessing means, converts the image quality of the surgical image using a conversion formula according to the image quality of the image learned by the inference model of the inference platform 10.
  • the preprocessing platform 40 includes a camera image quality converter. In the camera image quality converter, according to the external device that captured the surgical image acquired by the acquisition unit (Camera Capture Module (see FIG. 1), etc.), the image quality of the surgical image is converted to an image quality that approximates the image quality of the surgical images learned by the inference model of the inference platform 10, and a preprocessed surgical image with the converted image quality is generated and provided to the inference platform 10.
  • a conversion formula is set in the preprocessing platform 40 by a user using the surgical support system 100 or automatically as described below.
  • the "conversion formula” is not limited to one that converts the image quality of the surgical image to an image quality that approximates the image quality of the surgical image learned by the inference model of the inference platform 10. Any method can be used as the "conversion formula" as long as it converts the image quality of the surgical image to an image quality that improves the analysis accuracy of the inference platform 10. For example, it is possible to use any method that converts the image quality of the surgical image to an image quality that improves the analysis accuracy of the inference platform 10. It may also be a method of image conversion.
  • In the preprocessing platform 40, input ports (PortPPI1, PortPPI2, etc. shown in FIG. 11) into which data is input and output ports (PortPPO1, PortPPO2 shown in FIG. 11) that output data are individually set by the user. Furthermore, like the inference platform 10, the calculation platform 20, and the display platform 30, the preprocessing platform 40 can be configured with a plurality of input ports and a plurality of output ports.
  • the input ports set in the preprocessing platform 40 are connected by the user to the connection units (Device FI1, Camera IFI1, etc.), and data (surgical images (physical information, instrument information)) from an external device (for example, an endoscope system/endoscope, etc.) is input via the connection units.
  • in addition, input ports of the preprocessing platform 40 (PortPPI4, PortPPI5 shown in FIG. 11) are connected to the output port of the inference platform 10 (PortIO3 shown in FIG. 11) and the output port of the display platform 30 (PortGO4 shown in FIG. 11), and output data (analysis results) from the inference platform 10 and output data (display mode) from the display platform 30 are input.
  • This input data is supplied to a camera image quality converter, and is converted into an image quality that approximates the image quality of the surgical image learned by the inference model of the inference platform 10.
  • the output ports (PortPPO1, PortPPO2 shown in FIG. 11) set in the preprocessing platform 40 are connected by the user to the input ports (PortII1, PortII2 shown in FIG. 11) of the inference platform 10, and output data (preprocessed surgical images) is output to these.
  • the input ports set in the inference platform 10 are connected by the user to the output ports (PortPPO1, PortPPO2 shown in FIG. 11) of the preprocessing platform 40, and the output data (preprocessed surgical image) is input.
  • This input data is supplied to the inference model.
  • the output ports (PortIO1, PortIO2, etc. shown in FIG. 11) set in the inference platform 10 are connected by the user to the input ports (PortDI1, PortDI2, etc. shown in FIG. 11) of the calculation platform 20 or to an input port of the preprocessing platform 40 (PortPPI4 shown in FIG. 11), and output data (analysis results) from the inference model is output to these.
  • the input ports (PortDI1, PortDI2, etc. shown in FIG. 11) set in the calculation platform 20 are connected by the user to the output ports (PortIO1, PortIO2, etc. shown in FIG. 11) of the inference platform 10 or to the output ports of the display platform 30, and output data (analysis results) from the inference platform 10 and output data (display mode) from the display platform 30 are input.
  • This input data is supplied to the arithmetic processing section.
  • the output port (PortDO1 shown in FIG. 11) set in the calculation platform 20 is connected by the user to the input port (PortGI1 shown in FIG. 11) of the display platform 30, and output data (drawn images) is output to it.
  • the input port (PortGI1 shown in FIG. 11) set in the display platform 30 is connected by the user to the output port (PortDO1 shown in FIG. 11) of the calculation platform 20, and the output data (drawn image) from the calculation platform 20 is input.
  • This input data is supplied to the display processing section.
  • the output ports (PortGO1, PortGO2, PortGO3, etc. shown in FIG. 11) set in the display platform 30 are connected by the user to a display means (LCD/console), to an input port of the inference platform 10 (PortII3 shown in FIG. 11), to an input port of the calculation platform 20 (PortDI3 shown in FIG. 11), and to an input port of the preprocessing platform 40 (PortPPI5, etc. shown in FIG. 11), and output data (display mode) from the display processing unit is output to these.
  • FIG. 12 is a diagram illustrating settings in the preprocessing platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
  • the user performs a setting operation when setting contents such as a camera image quality converter, other calculation elements, input ports, and output ports on the preprocessing platform 40.
  • the setting operation is performed, for example, by operating an add button (in the example shown in FIG. 12, the part displayed as +) displayed on a setting screen displayed on the display unit by the display platform 30 functioning as a setting unit.
  • the preprocessing platform 40 sets input ports (PortPPI1, etc. in the example shown in FIG. 12) and output ports (PortPPO1, etc. in the example shown in FIG. 12) one by one.
  • the display platform 30 functioning as a setting means displays a setting selection screen from which a plurality of types of content can be selected on the display means.
  • the setting selection screen in the preprocessing platform 40 displays, as content that the user can select, multiple types of conversion elements included in the camera image quality converter, multiple types of preprocessing for the conversion results of the camera image quality converter (in the example shown in FIG. 12, Normalize, Standardize, Grayscale, Binalize, etc.), and conditional branches (SELECTOR, etc. in the example shown in FIG. 12).
  • the camera image quality converter is a database that stores a series of conversion formulas for converting the conversion source to the image quality (spatial frequency, brightness, color tone, etc.) equivalent to the conversion destination.
  • the preprocessing platform 40 may acquire (read) the camera image quality converter from the storage means of the surgical support system 100 (for example, the Data Base shown in FIG. 1), or may obtain (download) it from a server of the provider of the surgical image processing platform 1A, from another user using the surgical image processing platform 1A, or the like. Further, the preprocessing platform 40 may output (store) the camera image quality converter to the storage means of the surgical support system 100 (for example, the Data Base shown in FIG. 1).
  • the camera image quality converter includes, as conversion elements, an "endoscope system (S) + endoscope (ES) conversion element" and an "endoscope system (S) setting value conversion element".
  • In the "endoscope system (S) + endoscope (ES) conversion element", multiple types of conversion destinations (combinations of an endoscope system and an endoscope that are connected to the surgical image processing platform 1A and acquire surgical images) are each associated with conversion formulas for multiple types of conversion sources (combinations of the endoscope system and endoscope that captured the surgical images learned by the inference model).
  • For example, the combination of the conversion-destination endoscope system (S:A) and endoscope (ES:A) is associated with Conversion A as a conversion formula that approximates the image quality (spatial frequency, brightness, color tone, etc.) of a surgical image of that combination to the image quality of a surgical image obtained by the conversion-source combination of endoscope system (S:A) and endoscope (ES:B).
  • In the "endoscope system (S) setting value conversion element", multiple types of conversion destinations (default setting values of the endoscope system that is connected to the surgical image processing platform 1A and acquires surgical images) are associated with multiple types of conversion sources (setting values actually used in the endoscope system that is connected to the surgical image processing platform 1A and acquires surgical images). Specifically, for each of multiple types of setting values ("brightness", "color tone", "color mode", and "contrast"), information indicating the conversion-destination values is associated.
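  • The two conversion elements can be pictured as lookup tables, roughly as sketched below; the keys and entries mirror the examples in the text (S:A, ES:A/ES:B, Conversion A, the four setting values), but the data structure itself is an assumption made for illustration.

```python
# "Endoscope system (S) + endoscope (ES) conversion element":
# conversion formulas keyed by (conversion destination, conversion source),
# where each element is an endoscope system + endoscope combination.
device_conversions = {
    # (destination combo,        source combo)
    (("S:A", "ES:A"),            ("S:A", "ES:B")): "Conversion A",
    # further combinations would be registered here...
}

# "Endoscope system (S) setting value conversion element":
# for each setting of the connected endoscope system, the default
# (conversion destination) value it should be brought back to before inference.
setting_conversions = {
    "brightness": "Def",
    "color tone": "Def",
    "color mode": "Def",
    "contrast":   "Def",
}

def lookup_conversion(destination: tuple, source: tuple):
    """Return the registered conversion formula for a destination/source pair, if any."""
    return device_conversions.get((destination, source))

print(lookup_conversion(("S:A", "ES:A"), ("S:A", "ES:B")))   # -> "Conversion A"
```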
  • the user can set content (conversion formulas, calculation methods, conditional branches, etc.) in the preprocessing platform 40 by selecting the content on the setting selection screen and dragging the selected content into the preprocessing platform 40 on the setting screen.
  • the preprocessing platform 40 may automatically set the conversion formula.
  • when the user performs an operation to connect the content set in the preprocessing platform 40, an input port, and an output port (for example, in the example shown in FIG. 12, an operation connecting multiple connection sources (for example, PortPPI1 to PortPPI5) to multiple connection destinations (for example, "Conversion C" and "Conversion A")), the connection source and connection destination are connected, and data flows from the connection source to the connection destination.
  • FIG. 13 is a diagram illustrating settings in the inference platform of the surgical image processing platform according to the application example of the embodiment of the present invention.
  • In the example shown in FIG. 13, the basic model Model A, the open **hospital model, and the closed model Model F are grouped by information (S:A+ES:A) indicating the type of endoscope system and the type of endoscope that captured the surgical images learned by these models (inference models). That is, the inference platform 10 stores each inference model in association with the combination of the type of endoscope system (S) and the type of endoscope (ES) that captured the surgical images learned by that inference model.
In the display platform 30, information indicating the type of endoscope system (S) connected to the surgical image processing platform 1A, the type of endoscope (ES), and the setting values of the endoscope system (S) may be settable as setting items for each model (GUI A to F) in the display processing section of the setting selection screen (see FIG. 10). The preprocessing platform 40 may acquire the information indicating the type of endoscope system (S), the information indicating the type of endoscope (ES), and the setting values of the endoscope system (S) set in this way.
The preprocessing platform 40 may acquire, as the conversion destination information, information indicating the combination of the type of endoscope system (S) and the type of endoscope (ES) determined from the serial signal of the endoscope system input from the input port (Port PP1 in the example shown in FIG. 11). Further, the preprocessing platform 40 may acquire, as conversion source information, the setting values of the endoscope system (S) input from the input port (Port PP1 in the example shown in FIG. 11). The preprocessing platform 40 also acquires, as conversion source information, information indicating the combination of the type of endoscope system (S) and the type of endoscope (ES) learned by the inference model set in the inference platform 10, which is input from the input port (Port PP4 in the example shown in FIG. 11).
(S: endoscope system; ES: endoscope)
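The way these inputs could be gathered into conversion-source and conversion-destination information is sketched below. The function, argument names, and dictionary keys are assumptions; the port roles follow the description of Port PP1 and Port PP4 above.

```python
# Hypothetical sketch of how the preprocessing platform 40 could assemble conversion
# information from its input ports (roles as described above; names and structure assumed).

def gather_conversion_info(serial_signal_combo, current_settings, model_device_combo):
    """Collect conversion-source / conversion-destination information.

    serial_signal_combo -- (S, ES) combination decoded from the endoscope system's serial
                           signal on Port PP1 (conversion destination)
    current_settings    -- setting values of the connected endoscope system on Port PP1
                           (conversion source for the setting value element)
    model_device_combo  -- (S, ES) combination learned by the inference model set in the
                           inference platform 10, received on Port PP4 (conversion source)
    """
    return {
        "destination_devices": serial_signal_combo,
        "source_devices": model_device_combo,
        "source_settings": current_settings,
    }

info = gather_conversion_info(
    ("S:A", "ES:A"),
    {"brightness": "Def", "color tone": "R-/B+", "color mode": "Def", "contrast": "High"},
    ("S:A", "ES:B"),
)
print(info["source_devices"])  # -> ('S:A', 'ES:B')
```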
FIG. 14 is a diagram illustrating the inference model device information that the inference platform outputs to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
The inference platform 10 outputs, to the preprocessing platform 40, inference model device information indicating the combination of the type of endoscope system (S) and the type of endoscope (ES) that captured the surgical images learned by the inference model selected by the user. For example, if the inference model selected by the user is model B, the inference platform 10 outputs information (S:A+ES:B) indicating the type of endoscope system and the type of endoscope that captured the surgical images learned by model B to the preprocessing platform 40 as inference model device information.
FIG. 15 is a diagram illustrating the connected device information that the display platform outputs to the preprocessing platform in the surgical image processing platform according to the application example of the embodiment of the present invention.
The display platform 30 outputs, to the preprocessing platform 40, connected device information indicating the type of endoscope system (S) that is connected to the surgical image processing platform 1A and captures surgical images, the type of endoscope (ES), and the setting values of the endoscope system (S) ("brightness", "color tone", "color mode", "contrast"). In the example shown in FIG. 15, the display platform 30 outputs, to the preprocessing platform 40, information (S:A) indicating the type of endoscope system (S) that captures surgical images, information (ES:A) indicating the type of endoscope (ES), and the setting values of the endoscope system (S) ("brightness" (Def (default)), "color tone" (R-/B+), "color mode" (Def), "contrast" (High)).
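For illustration, the connected device information of the FIG. 15 example could be represented as the dictionary below; the layout and key names are assumptions, while the values come from the description above.

```python
# Hypothetical representation of the connected device information that the display
# platform 30 outputs to the preprocessing platform 40 (values from the FIG. 15 example).

connected_device_info = {
    "endoscope_system": "S:A",
    "endoscope": "ES:A",
    "settings": {
        "brightness": "Def",      # default
        "color tone": "R-/B+",
        "color mode": "Def",
        "contrast": "High",
    },
}
```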
The preprocessing platform 40 acquires the information (S:A+ES:B) included in the inference model device information shown in FIG. 14 as the information indicating the conversion source.
The preprocessing platform 40 also acquires, from the connected device information shown in FIG. 15, the information (S:A+ES:A) indicating the type of endoscope system (S) and the type of endoscope (ES) that are connected to the surgical image processing platform 1A and capture surgical images, as the information indicating the conversion destination. The preprocessing platform 40 then selects, from the endoscope system (S) + endoscope (ES) conversion element, the conversion formula "Conversion A" associated with the conversion source (S:A+ES:B) and the conversion destination (S:A+ES:A). The preprocessing platform 40 also selects a conversion formula that converts the conversion-source setting values ("brightness" (Def (default)), "color tone" (R-/B+), "color mode" (Def), "contrast" (High)) into the conversion-destination setting values, namely the default setting values of the endoscope system (S:A) that is connected to the surgical image processing platform 1A and acquires surgical images.
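Continuing the earlier sketches (which define ENDOSCOPE_CONVERSION_ELEMENT and the `info` dictionary), the selection step could look like the following. The function and the assumption that every default value is "Def" are illustrative; the resulting choice of "Conversion A" matches the example above.

```python
# Hypothetical sketch of selecting the conversions (continues the earlier sketches;
# ENDOSCOPE_CONVERSION_ELEMENT and `info` are defined there).

DEFAULT_SETTINGS = {"brightness": "Def", "color tone": "Def", "color mode": "Def", "contrast": "Def"}

def select_conversions(info):
    """Pick the device conversion formula and the per-setting conversions back to defaults."""
    device_formula = ENDOSCOPE_CONVERSION_ELEMENT.get(
        (info["destination_devices"], info["source_devices"])
    )
    setting_conversions = {
        name: (value, DEFAULT_SETTINGS[name])          # convert used value -> default value
        for name, value in info["source_settings"].items()
        if value != DEFAULT_SETTINGS[name]
    }
    return device_formula, setting_conversions

formula, setting_conversions = select_conversions(info)
print(formula)              # -> Conversion A
print(setting_conversions)  # -> {'color tone': ('R-/B+', 'Def'), 'contrast': ('High', 'Def')}
```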
In this way, the preprocessing platform 40 converts the image quality of a surgical image obtained by photographing a surgery, and the inference platform 10 analyzes the surgical image whose image quality has been converted. The image quality of the surgical image can therefore be converted into, for example, an image quality that improves the analysis accuracy of the inference platform 10, and the converted image can then be analyzed. Moreover, with the surgical image processing platform 1A, a converter used by one user can be provided to, for example, the provider of the surgical image processing platform 1A or to another user, which makes it possible to improve and reuse converters that convert the image quality of surgical images and thus improves usability. In addition, because the preprocessing platform 40 uses the conversion formula to convert the image quality of the surgical image of the surgery to be analyzed in accordance with the image quality of the images learned by the inference platform 10, a decrease in the accuracy of the analysis results can be prevented.
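As a final illustration, an image-quality conversion of the kind a formula such as "Conversion A" might perform could be as simple as the NumPy sketch below. The patent does not specify the contents of any conversion formula, so the operations and parameter values here are purely assumptions.

```python
# Purely illustrative image-quality conversion: adjust contrast, brightness, and colour tone
# of an RGB frame so it better approximates the image quality the inference model learned.
# The operations and parameter values are assumptions, not the patent's conversion formula.

import numpy as np

def apply_conversion(frame, gain=1.1, offset=-5.0, red_scale=0.97, blue_scale=1.03):
    """Apply simple contrast/brightness/tone adjustments to a (H, W, 3) uint8 RGB frame."""
    out = frame.astype(np.float32)
    out = out * gain + offset          # contrast (gain) and brightness (offset)
    out[..., 0] *= red_scale           # colour tone: scale the red channel
    out[..., 2] *= blue_scale          # colour tone: scale the blue channel
    return np.clip(out, 0, 255).astype(np.uint8)

dummy_frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
converted = apply_conversion(dummy_frame)
print(converted.shape, converted.dtype)  # (480, 640, 3) uint8
```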

Abstract

The present invention makes it possible to freely set a model to be implemented in a layer that performs analysis, rendering, and display processes when processing surgical images. A surgical image processing platform (1) processes a surgical image of a surgery performed by a surgeon and comprises: an inference platform (10) that sets an inference model for analyzing the surgical image; a calculation platform (20) that sets a rendering mode for generating a rendered image in which the analysis result from the inference model is reflected in the surgical image; and a display platform (30) that sets a display mode for displaying the rendered image in a prescribed mode on a display means. The inference model, the rendering mode, and the display mode are set individually.
PCT/JP2023/014776 2022-04-11 2023-04-11 Plateforme de traitement des images chirurgicales et programme informatique WO2023199923A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023551148A JPWO2023199923A1 (fr) 2022-04-11 2023-04-11

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022065302 2022-04-11
JP2022-065302 2022-04-11

Publications (1)

Publication Number Publication Date
WO2023199923A1 true WO2023199923A1 (fr) 2023-10-19

Family

ID=88329827

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/014776 WO2023199923A1 (fr) 2022-04-11 2023-04-11 Plateforme de traitement des images chirurgicales et programme informatique

Country Status (2)

Country Link
JP (1) JPWO2023199923A1 (fr)
WO (1) WO2023199923A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019115664A * 2017-12-26 2019-07-18 Biosense Webster (Israel), Ltd. Use of augmented reality to assist navigation during medical procedures
WO2019181432A1 * 2018-03-20 2019-09-26 Sony Corporation Operation assistance system, information processing device, and program
WO2019239854A1 * 2018-06-12 2019-12-19 FUJIFILM Corporation Endoscope image processing device, endoscope image processing method, and endoscope image processing program
WO2022025151A1 * 2020-07-30 2022-02-03 Anaut Inc. Computer program, learning model generation method, surgery assistance device, and information processing method

Also Published As

Publication number Publication date
JPWO2023199923A1 (fr) 2023-10-19

Similar Documents

Publication Publication Date Title
US20220334787A1 (en) Customization of overlaid data and configuration
JP7308936B2 Indicator system
JP4717427B2 Method for operating a magnetic resonance tomography apparatus and control device
US7890156B2 (en) Medical image display method and apparatus
WO2019141106A1 Method and apparatus for augmented-reality intelligent assistance for dental beautification based on a C/S architecture
US10140888B2 (en) Training and testing system for advanced image processing
JP2013521971A System and method for performing computerized simulation of medical procedures
JP6876090B2 Method for operating a medical system and medical system for performing surgery
JP2010504110A System and method for managing the history of patient and wound therapy treatment
CN114096210A Modifying data from a surgical robotic system
US20220370135A1 (en) Dynamic Adaptation System for Surgical Simulation
CN111770735B Surgical simulation information generation method and program
KR20210008220A Method for displaying multiple bone densities for establishing an implant procedure plan, and image processing device therefor
JP7194889B2 Computer program, learning model generation method, surgery assistance device, and information processing method
WO2023199923A1 (fr) Plateforme de traitement des images chirurgicales et programme informatique
EP3733050A1 Image processing method, image processing program, image processing device, image display device, and image display method
US11730491B2 (en) Endoscopic image analysis and control component of an endoscopic system
US20210393358A1 (en) Enhanced haptic feedback system
CN115311317A Laparoscopic image segmentation method and system based on a Scaleformer-type algorithm
WO2022243963A1 Dynamic adaptation system for surgical simulation
JP2009539490A Generation of imaging filters based on image analysis
JP7352645B2 Learning support system and learning support method
US20220008145A1 (en) Virtual pointer for real-time endoscopic video using gesture and voice commands
EP4094668A1 Endoscopy service support device, endoscopy service support system, and method for operating an endoscopy service support device
Horn Tools and techniques in disease management: programmes for improving and measuring outcomes

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2023551148

Country of ref document: JP

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23788341

Country of ref document: EP

Kind code of ref document: A1