WO2020159276A1 - Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical image - Google Patents

Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical image

Info

Publication number: WO2020159276A1
Authority: WIPO (PCT)
Prior art keywords: surgical, analysis, recognition, layer, model
Application number: PCT/KR2020/001475
Other languages: French (fr), Korean (ko)
Inventor: 이연주
Original Assignee: 주식회사 아이버티
Application filed by 주식회사 아이버티
Priority claimed from KR1020200011504A (published as KR20200096155A)
Publication of WO2020159276A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/40 ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture

Definitions

  • The present invention relates to a method of analyzing and recognizing surgical images.
  • Open surgery refers to surgery in which the medical staff directly sees and touches the area to be treated, while minimally invasive surgery is also called keyhole surgery.
  • Laparoscopic surgery and robotic surgery are typical examples of minimally invasive surgery. In laparoscopic surgery, small holes are made at the required sites without opening the abdomen, surgical instruments and a laparoscope equipped with a special camera are inserted into the body, and microsurgery is performed using a laser or special instruments while viewing a video monitor.
  • Robotic surgery performs minimally invasive surgery using a surgical robot.
  • Radiation therapy refers to treatment performed with radiation or laser light from outside the body.
  • An endoscopic procedure is performed by inserting an endoscope into the digestive tract or a similar organ and passing instruments through a working channel provided in the endoscope.
  • Deep learning has recently been widely used for the analysis of surgical images. Deep learning is a set of machine learning algorithms that attempt high-level abstraction (distilling key content or structure from large volumes of data or from complex data) through combinations of several nonlinear transformations. Broadly speaking, deep learning is a field of machine learning that teaches computers something like a human way of thinking.
  • The present invention provides a surgical analysis apparatus, a surgical image analysis and recognition system, a method, and a program capable of quickly obtaining accurate recognition results by recognizing each of a plurality of surgical elements in a surgical image through surgical element recognition models arranged in parallel.
  • The present invention can obtain, in full or selectively, the various analysis results required for surgical analysis through a plurality of surgical analysis models, each of which computes its own result from the surgical element recognition results calculated by the surgical element recognition models.
  • To this end, the present invention provides a surgical analysis apparatus, a surgical image analysis and recognition system, a method, and a program.
  • The present invention also includes a plurality of surgical solution models that readily compute the service results required by medical staff from the multiple analysis results produced by the plurality of surgical analysis models, and provides a corresponding surgical analysis apparatus, surgical image analysis and recognition system, method, and program.
  • A computer-aided surgical analysis method according to one embodiment includes: a surgical image acquisition step in which a computer acquires a surgical image; a step in which the computer inputs the surgical image into one or more surgical element recognition models; a surgical recognition result combination acquisition step in which the computer acquires a combination of the surgical recognition results calculated by each surgical element recognition model, the one or more surgical element recognition models being included in a surgical element recognition layer at the lowest level of the surgical analysis system; and a surgical analysis result acquisition step in which the computer inputs the combination of surgical recognition results into one or more surgical analysis models to obtain one or more analysis results, the one or more surgical analysis models being included in a surgical analysis layer above the surgical element recognition layer and selected according to a user's request.
  • Each surgical analysis model in the surgical analysis layer establishes a connection relationship with one or more surgical element recognition models in the surgical element recognition layer, based on the data it requires for analysis; a minimal sketch of this wiring follows.
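  • As an illustration only (not the patented implementation), the following Python sketch shows how recognition models could run side by side on a frame while each analysis model declares the recognition outputs it consumes; all names and the dummy models are assumptions made for the example.

    # Minimal sketch of the two lower layers: parallel recognition models
    # and an analysis model wired only to the outputs it requires.
    from typing import Callable, Dict, List

    class RecognitionLayer:
        """Lowest layer: independent surgical element recognition models."""
        def __init__(self):
            self.models: Dict[str, Callable] = {}

        def register(self, name: str, model: Callable):
            self.models[name] = model

        def recognize(self, frame) -> Dict[str, object]:
            # Each model sees the same frame; conceptually they run in parallel.
            return {name: model(frame) for name, model in self.models.items()}

    class AnalysisModel:
        """An analysis model declares which recognition results it consumes."""
        def __init__(self, name: str, requires: List[str], fn: Callable):
            self.name, self.requires, self.fn = name, requires, fn

        def analyze(self, recognition: Dict[str, object]):
            # Connection relation: pull only the required recognition results.
            inputs = {key: recognition[key] for key in self.requires}
            return self.fn(inputs)

    layer = RecognitionLayer()
    layer.register("organ", lambda f: "stomach")            # dummy stand-ins
    layer.register("instrument", lambda f: "grasper")
    layer.register("event", lambda f: {"bleeding": True})

    blood_loss = AnalysisModel(
        "blood_loss", requires=["organ", "instrument", "event"],
        fn=lambda x: "high" if x["event"]["bleeding"] else "none")

    recognition = layer.recognize(frame=None)   # recognition result combination
    print(blood_loss.analyze(recognition))      # analysis result: "high"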
  • The surgical element recognition models include: an organ recognition model that recognizes organs in the surgical image; a surgical tool recognition model that recognizes surgical tools and their movements in the surgical image; and an event recognition model that recognizes events occurring in the surgical image, where an event is a non-ideal situation during surgery, including bleeding.
  • The surgical analysis layer includes: a blood loss recognition model that calculates the degree of blood loss based on a combination of surgical recognition results including the type of organ recognized by the surgical element recognition layer, the surgical operation, and events occurring during surgery; and an organ damage detection model that calculates the degree of organ damage based on a combination of surgical recognition results including the surgical step and operation time recognized by the surgical element recognition layer. The blood loss recognition model and the organ damage detection model are used to calculate analysis results for each surgical procedure during or after surgery.
  • The surgical analysis layer further includes a surgical operation misuse detection model that detects the use of a wrong surgical instrument based on the surgical tool recognized by the surgical element recognition layer, the organ on which the surgical tool operates, the detailed surgical operations within the overall operation being performed, and the events occurring in those detailed surgical operations.
  • The surgical analysis layer further includes an optimal surgical plan calculation model that calculates an optimal surgical plan based on a combination of surgical recognition results obtained from the surgical element recognition layer; by analyzing the surgical image data acquired so far in real time, the model calculates and provides an optimal surgical plan for the procedures to be performed next.
  • In another embodiment, the optimal surgical plan calculation model calculates and provides a surgical plan for the real patient based on a combination of surgical recognition results from a virtual operation performed on the patient's virtual body model before surgery.
  • The computer inputs the one or more analysis results into a specific surgical solution model, thereby providing a surgical output organized on the basis of the analysis results.
  • The surgical solution models include: a surgical evaluation model that calculates an evaluation result for the surgery based on one or more analysis results obtained from the surgical analysis layer; a chart generation model that generates a chart for the surgery based on one or more analysis results obtained from the surgical analysis layer; and a surgical complexity calculation model that calculates the complexity or difficulty of the surgery based on one or more analysis results obtained from the surgical analysis layer.
  • The surgical solution models also include a surgical Q&A model that calculates answers to questions about the surgery; in the surgical output providing step, when a member of the medical staff enters a question about a specific surgery, the answer is calculated and provided.
  • A computer-aided surgical analysis program according to one embodiment may be stored in a medium in combination with a computer, which is hardware, in order to execute the computer-aided surgical analysis method.
  • A surgical image analysis apparatus according to one embodiment includes a surgical element recognition layer containing one or more surgical element recognition models that calculate surgical recognition results when a surgical image is input, the one or more surgical element recognition models forming the lowest level of the surgical analysis system. It further includes a surgical analysis layer containing one or more surgical analysis models, and a surgical solution providing layer containing one or more surgical solution models that provide surgical outputs organized on the basis of one or more analysis results obtained from the surgical analysis layer.
  • According to the present invention, each surgical element can be accurately recognized by an individual surgical element recognition model dedicated to that element (surgical instruments, bleeding, camera, etc.). That is, the system can provide recognition results of higher accuracy than an approach that recognizes all the surgical elements in a surgical image through a single recognition model.
  • Since each surgical solution model automatically calculates the results required by the medical staff from one or more surgical analysis results, the post-operative work of the medical staff can be simplified.
  • When a new type of surgical analysis or a new service-type solution is required, it can be provided simply by adding a new surgical element recognition model, surgical analysis model, or surgical solution model to the corresponding layer of the surgical analysis system.
  • FIG. 1 is a layer configuration diagram of a surgical analysis device according to an embodiment of the present invention.
  • Figure 2 is a flow chart of a computer-aided surgical analysis method according to an embodiment of the present invention.
  • FIG. 3 is a flow chart of a computer-aided surgical analysis method further comprising a procedure for providing a surgical output according to an embodiment of the present invention.
  • The 'surgical image' is an image of the surgical procedure.
  • The surgical image includes images obtained by an endoscope inserted into the body during laparoscopic surgery, including robotic surgery.
  • The surgical image may also include images of procedures performed through an endoscope inserted through the oral cavity or anus.
  • The 'virtual body model' refers to a model generated to conform to the actual patient's body based on medical image data.
  • The 'virtual body model' may be generated by modeling the medical image data in three dimensions as it is, or may be corrected after modeling to match the conditions of actual surgery.
  • The virtual body model may be used for guidance or navigation during surgery, for post-surgical analysis, and the like.
  • The virtual body model may be implemented to match the actual patient's body by reflecting its color, texture, elasticity, and so on.
  • 'Virtual surgery data' means data that includes rehearsal or simulation actions performed on a virtual body model.
  • The 'virtual surgery data' may be image data of a rehearsal or simulation performed on a virtual body model in a virtual space, or data recording the surgical operations performed on the virtual body model.
  • 'Actual surgery data' refers to data obtained as actual medical personnel perform surgery.
  • The 'actual surgery data' may be image data capturing the surgical site during an actual surgical procedure, or data recording the surgical operations performed during the actual procedure.
  • A 'computer' includes all of the various devices capable of performing arithmetic processing and providing results to a user.
  • The computer may be not only a desktop PC or a notebook but also a smartphone, a tablet PC, a cellular phone, a PCS (Personal Communication Service) phone, a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm PC, or a personal digital assistant (PDA).
  • When a head mounted display (HMD) device includes a computing function, the HMD device may be the computer.
  • The computer may also be a server that receives requests from clients and performs information processing.
  • FIG. 1 is a block diagram of a surgical image analysis apparatus according to an embodiment of the present invention.
  • The surgical image analysis apparatus according to an embodiment of the present invention includes a surgical element recognition layer 10 and a surgical analysis layer 20.
  • The surgical element recognition layer is a layer for recognizing the elements involved in surgery and the elements generated by surgery.
  • The elements involved in surgery include the camera that captures the surgical image (e.g., an endoscope used in laparoscopic surgery), surgical tools, organs, blood vessels, and the like.
  • The elements generated by surgery include the operation of surgical tools, events such as bleeding, and the like.
  • The surgical element recognition layer forms the lowest level of the surgical analysis system.
  • The surgical element recognition layer includes one or more surgical element recognition models that calculate surgical recognition results when a surgical image is input.
  • Hereinafter, the surgical element recognition models within the surgical element recognition layer are described.
  • A plurality of surgical element recognition models may have connection relationships with each other. That is, the recognition result of surgical element recognition model B may additionally be used for surgical element recognition model A to calculate its own recognition result.
  • First, the layer includes an organ recognition model that recognizes organs in the surgical image. That is, the organ recognition model recognizes one or more organs present in each image frame of the surgical image.
  • The organ recognition model may include a blood vessel model composed of arteries and veins.
  • The organ recognition model can recognize organs in the surgical image in various ways.
  • For example, the organ recognition model may determine the organ or blood vessel on which a surgical operation is being performed, based on the camera position obtained from the camera position recognition model described later or the surgical step obtained from the surgical step recognition model described later.
  • The layer also includes a surgical tool recognition model that recognizes surgical tools (Instruments) in the surgical image.
  • The surgical tool recognition model may recognize the types of surgical tools appearing in the image.
  • The surgical tool recognition model may learn images of each surgical tool and thereby recognize the surgical tools appearing in each frame according to their placement.
  • The surgical tool recognition model may also learn a plurality of surgical images in which each surgical tool performs operations on organs, and thereby recognize a surgical tool performing a surgical operation inside the body.
  • The layer further includes a surgical tool motion (Instrument Action) recognition model. The surgical tool motion recognition model recognizes the meaning of the surgical tool motions performed in a specific surgical step (that is, the action taken to obtain a particular result).
  • The surgical tool motion recognition model acquires a surgical image (or a specific section of the surgical image) whose motion semantics need to be recognized, and learns the plurality of image frames included in it to recognize basic surgical motions.
  • A basic surgical motion is an elementary action, such as cutting or grasping with a specific surgical tool. The surgical tool motion recognition model then extracts sets of consecutive video frames from the plurality of video frames based on the recognized basic motions, and derives the meaning of unit surgical operations through learning.
  • A unit surgical operation is a surgical action that carries meaning toward producing a specific result, as an action performed on a specific organ during the surgical process.
  • To this end, the surgical tool motion recognition model includes: a basic surgical motion recognition module that recognizes basic surgical motions, the minimal units of surgical motion that carry no meaning by themselves; and a surgical meaning recognition module that recognizes unit surgical operations from sequences of basic surgical motions.
  • As the basic surgical motion recognition module learns images of surgical tools, it recognizes the surgical tool across consecutive frames and calculates the tool's state changes to recognize its basic surgical motion.
  • The surgical meaning recognition module is trained on data that matches sequences of basic surgical motions to unit surgical operations, and it calculates a unit surgical operation when the sequence of basic surgical motions recognized by the basic surgical motion recognition module in a new surgical image is input.
  • The training data may further include information on the organ on which the sequence of basic surgical motions is performed, and this organ information may be the result calculated by the organ recognition model in the surgical element recognition layer. A minimal sketch of such a two-stage recognizer follows.
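  • The sketch below shows one plausible realization of this two-stage design: a per-frame CNN recognizing basic motions, and an LSTM mapping the motion sequence plus organ context to a unit surgical operation. The architecture, layer sizes, and label-set sizes are assumptions made for illustration, not values from the patent.

    # Two-stage motion recognizer sketch (PyTorch), under assumed label sets.
    import torch
    import torch.nn as nn

    N_BASIC, N_UNIT, N_ORGANS = 8, 12, 5   # assumed numbers of classes

    class BasicMotionRecognizer(nn.Module):
        """Recognizes the basic motion (e.g., cut, grasp) in each frame."""
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(   # stand-in for a real CNN backbone
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.head = nn.Linear(16, N_BASIC)

        def forward(self, frames):           # frames: (T, 3, H, W)
            return self.head(self.backbone(frames))   # (T, N_BASIC)

    class SurgicalMeaningRecognizer(nn.Module):
        """Maps a basic-motion sequence plus organ context to a unit operation."""
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(N_BASIC + N_ORGANS, 32, batch_first=True)
            self.head = nn.Linear(32, N_UNIT)

        def forward(self, basic_logits, organ_onehot):
            seq = torch.cat([basic_logits, organ_onehot], dim=-1).unsqueeze(0)
            _, (h, _) = self.lstm(seq)
            return self.head(h[-1])          # (1, N_UNIT)

    frames = torch.randn(30, 3, 64, 64)      # 30 consecutive video frames
    organ = torch.zeros(30, N_ORGANS)
    organ[:, 2] = 1                          # organ label from the organ model
    basic = BasicMotionRecognizer()(frames)
    unit_op = SurgicalMeaningRecognizer()(basic, organ)
    print(unit_op.argmax(dim=-1))            # predicted unit surgical operation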
  • The layer further includes a camera position recognition model that recognizes the position of the camera during minimally invasive surgery.
  • When the camera is an endoscope used in laparoscopic surgery, it moves forward/backward and up/down/left/right within the abdominal cavity after insertion, and the model serves to calculate the camera's position within the abdominal cavity in real time.
  • The camera position recognition model sets a reference position of the camera based on one or more reference points of a reference object in the image captured by the camera, and then calculates the camera position in real time by computing relative position changes from changes of objects in the image.
  • Specifically, the camera position recognition model obtains a reference object from the actual surgical image captured as the camera enters the body, sets a reference position for the camera, and calculates the camera's change in position as it moves.
  • The camera position recognition model calculates the camera's positional change based on object changes in the real-time image acquired by the camera (for example, changes in an object's size, changes in an object's position, changes in which objects appear in the image, etc.). It then calculates the current position of the camera from the accumulated positional change relative to the reference position.
  • The actual surgical image may be a stereoscopic 3D image, that is, an image having a sense of depth. It is therefore possible to accurately determine the position of a surgical tool in three-dimensional space through the depth map of the actual surgical image.
  • The reference object may be an organ or a specific internal site that satisfies at least one of the following conditions: its features are easy to detect in the image; it is fixed in position inside the body; it has little or no movement during surgery; it does not deform in shape; it is not affected by surgical tools; and it can be acquired from medical image data (e.g., data captured by CT, PET, etc.). For example, sites with very little motion during surgery, such as the liver and the abdominal wall, and sites obtainable from medical image data, such as the stomach, esophagus, and gallbladder, may be chosen as reference objects.
  • Alternatively, a surgical tool in the image captured by the camera may be used as a reference object.
  • For example, a surgical tool held stationary at a specific position may be used as the reference object.
  • The camera position recognition model may repeat the process of resetting the camera's reference position based on one or more reference points of a reference object whenever a reference object appears in the surgical image. If the camera moves frequently, errors can accumulate in the camera position as relative movements are accumulated from the reference position; the model therefore resets the camera reference position whenever a new reference object appears in the real-time image.
  • The camera position recognition model can accumulate the camera position within the virtual body model generated to match the actual patient's body. That is, after matching the actual reference object in the patient's body with the corresponding virtual reference object in the virtual body model, the model accumulates the relative position changes with respect to the camera's reference position. In this way, the camera position recognition model can record, within the 3D virtual body model, the path along which the camera moved during surgery.
  • In addition, as it recognizes the camera's position, the camera position recognition model calculates the position of each surgical tool relative to the camera. That is, it calculates the relative positions of one or more surgical tools from a specific camera position. A minimal dead-reckoning sketch follows.
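  • A dead-reckoning sketch of this scheme follows: accumulate relative motion from a reference pose and reset the reference when a new reference object is detected. The vector representation and update rule are our assumptions, not the patent's algorithm.

    # Camera tracking sketch: relative-motion accumulation with reference resets.
    import numpy as np

    class CameraTracker:
        def __init__(self):
            self.reference = np.zeros(3)   # reference position from a reference object
            self.offset = np.zeros(3)      # accumulated motion since the last reset

        def reset_reference(self, position: np.ndarray):
            """Called when a new reference object (e.g., liver, abdominal wall) appears."""
            self.reference = position.copy()
            self.offset = np.zeros(3)      # drop accumulated drift

        def update(self, delta: np.ndarray) -> np.ndarray:
            """delta: relative motion estimated from object size/position changes."""
            self.offset += delta
            return self.reference + self.offset

        def tool_relative(self, tool_position: np.ndarray) -> np.ndarray:
            """Position of a surgical tool relative to the current camera position."""
            return tool_position - (self.reference + self.offset)

    tracker = CameraTracker()
    tracker.reset_reference(np.array([0.0, 0.0, 5.0]))   # liver seen at insertion
    print(tracker.update(np.array([0.1, 0.0, -0.2])))    # camera advanced slightly
    print(tracker.tool_relative(np.array([1.0, 0.5, 4.0])))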
  • The layer further includes a surgical tool position calculation model.
  • The surgical tool position calculation model calculates the position of a first point of the surgical tool relative to the space outside the surgical subject's body, based on sensing information obtained from a sensing device attached to the surgical tool inserted into the subject's internal body space. It then calculates the position of a second point of the surgical tool by reflecting the tool's characteristic information onto the position of the first point, using a virtual body model generated to match the physical condition of the surgical subject.
  • The surgical tool position calculation model provides the position information of the surgical tool within the actual internal body space of the surgical subject, based on the position of the second point of the tool with respect to the virtual body model.
  • In another embodiment, the real-time position of the surgical tool is calculated by matching the coordinate system of the surgical robot system with the coordinate system of the virtual body model and applying the motion data generated when the surgical robot's tool moves to the virtual body model.
  • For this, the surgical tool position calculation model matches the coordinate systems based on a reference point of the surgical subject (e.g., a specific location such as the patient's navel, a marker displayed on the patient's body surface, or an identification mark projected onto the body surface by the surgical robot system).
  • In this way, as the surgical robot matches the coordinate system of the actual patient's body with that of the virtual body model, the surgical tool position calculation model can acquire the real-time position of the surgical tool by applying the robot's surgical tool movements to the virtual body model.
  • The layer further includes a surgical step recognition model.
  • The surgical step recognition model calculates which detailed surgical step, within the entire surgical procedure, the currently performed process or a specific section of the surgical image corresponds to.
  • The surgical step recognition model can calculate the surgical step of a surgical image in various ways. For example, it stores a progression sequence for each type of surgery, and can recognize the specific surgical phase (Phase) within that sequence based on the region of the patient's body where the camera is located.
  • The layer also includes an event recognition model that recognizes events occurring in the surgical image, where an event is a non-ideal situation during surgery, including bleeding.
  • The event recognition model (e.g., a bleeding recognition model) may include a bleeding presence recognition module and a bleeding amount calculation module.
  • The bleeding presence recognition module recognizes whether a bleeding area exists in the surgical image based on deep learning, and recognizes the position of the bleeding within the image frame.
  • The bleeding presence recognition module may learn from a plurality of images containing bleeding and thereby recognize whether a bleeding area is included in a new image.
  • The bleeding presence recognition module may convert each pixel in the surgical image to a specific value based on a feature map containing the feature information, and specify the bleeding area based on the specific value of each pixel. Each pixel may be converted to a specific value by applying a predetermined weight according to whether it belongs to an area exhibiting the characteristics of bleeding according to the feature map (that is, according to how strongly it contributes to the bleeding features).
  • For example, the bleeding presence recognition module applies Grad-CAM (Gradient-weighted Class Activation Mapping), which back-propagates the result of recognizing the bleeding area in the surgical image through a CNN, so that each pixel of the surgical image is converted to a specific value.
  • The bleeding presence recognition module may convert the pixel values of the surgical image by assigning a high value (e.g., a high weight) to pixels recognized as belonging to the bleeding area according to the feature map, and a low value (e.g., a low weight) to pixels recognized as not belonging to it.
  • The bleeding presence recognition module may highlight the bleeding area in the surgical image through the converted pixel values, and thereby segment the bleeding area and estimate its location.
  • Alternatively, the bleeding presence recognition module may apply Grad-CAM to generate a heat map for each pixel of the surgical image based on the feature map and convert each pixel to a probability value.
  • The bleeding presence recognition module may then specify the bleeding area in the surgical image based on the converted per-pixel probability values; for example, it may determine the pixel region with high probability values to be the bleeding area. A Grad-CAM-style sketch follows.
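  • The following is a standard Grad-CAM recipe adapted to this description, assuming a generic CNN classifier with a "bleeding" class; the network and class index are illustrative, and this is not code from the patent.

    # Grad-CAM-style bleeding localization sketch (PyTorch).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyCNN(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
            self.classifier = nn.Linear(16, n_classes)

        def forward(self, x):
            fmap = self.features(x)                   # (B, 16, H, W) feature map
            pooled = F.adaptive_avg_pool2d(fmap, 1).flatten(1)
            return self.classifier(pooled), fmap

    def grad_cam(model, image, class_idx=1):
        """Per-pixel heat map for `class_idx` (assumed: 1 = bleeding)."""
        model.eval()
        logits, fmap = model(image)
        fmap.retain_grad()
        logits[0, class_idx].backward()               # gradients w.r.t. feature map
        weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # channel importance
        cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                            align_corners=False)
        return (cam / (cam.max() + 1e-8)).squeeze().detach()  # values in [0, 1]

    image = torch.randn(1, 3, 64, 64, requires_grad=True)
    heat = grad_cam(TinyCNN(), image)
    bleeding_mask = heat > 0.5                        # threshold the heat map
    print(bleeding_mask.float().mean())               # fraction flagged as bleeding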
  • The bleeding amount calculation module calculates the amount of bleeding in the bleeding area based on the location of the bleeding area.
  • The bleeding amount calculation module may calculate the bleeding amount using pixel information of the bleeding area in the surgical image. For example, it may use the number of pixels corresponding to the bleeding area and the color information of those pixels (e.g., RGB values).
  • In another embodiment, the bleeding amount calculation module acquires depth information of the bleeding area based on the depth map of the surgical image, and calculates the bleeding amount by estimating the volume corresponding to the bleeding area from the acquired depth information.
  • When the surgical image is a stereoscopic image, it carries three-dimensional depth information, so the volume of the bleeding area in three-dimensional space can be determined.
  • Specifically, the bleeding amount calculation module acquires pixel information of the bleeding area in the surgical image (e.g., the number and positions of pixels) and calculates the depth values of the depth map corresponding to that pixel information. It may then calculate the bleeding amount by determining the volume of the bleeding area from the calculated depth values.
  • In another embodiment, the bleeding amount in the bleeding area may be calculated using gauze information. For example, the number of gauze pads in the surgical image and their color information (e.g., RGB values) may be reflected in calculating the amount of bleeding generated in the bleeding area. A volume-from-depth sketch follows.
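  • The sketch below illustrates the depth-based variant. The "pooled blood" model (depth measured relative to the surrounding rim) and the pixel area constant are assumptions made for the example, not the patent's formula.

    # Bleeding volume sketch: segmentation mask + depth map -> volume estimate.
    import numpy as np

    def bleeding_volume(mask: np.ndarray, depth: np.ndarray,
                        pixel_area_mm2: float = 0.01) -> float:
        """mask: boolean bleeding segmentation; depth: per-pixel depth in mm."""
        if not mask.any():
            return 0.0
        rim = np.logical_and(~mask, _dilate(mask))   # pixels just outside the pool
        rim_depth = depth[rim].mean()                # assumed pool surface level
        pool_depth = np.clip(depth[mask] - rim_depth, 0, None)
        return float(pool_depth.sum() * pixel_area_mm2)   # volume in mm^3

    def _dilate(mask: np.ndarray) -> np.ndarray:
        out = mask.copy()                            # simple 4-neighbor dilation
        out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]
        out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
        return out

    depth = np.full((64, 64), 50.0)                  # flat scene at 50 mm
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:30, 20:30] = True
    depth[mask] = 52.0                               # pool 2 mm deeper than its rim
    print(bleeding_volume(mask, depth))              # 100 px * 2 mm * 0.01 mm^2 = 2.0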
  • The layer further includes an operation time calculation/prediction model.
  • The operation time calculation/prediction model includes: an operation time prediction module that calculates the expected time to completion of the surgical step currently being performed during the operation; and an operation time calculation module that calculates, after the operation is completed, the time taken by each step.
  • The operation time calculation module extracts the time corresponding to each specific surgical phase (Phase) after the operation is completed and calculates the total time required for that phase.
  • The operation time prediction module calculates, during the operation, the remaining surgical time for the expected surgical steps, based on the time required up to a specific point (e.g., the prediction reference point) of the current surgical step and on the surgical steps performed before that point.
  • For this, the operation time prediction module acquires predetermined surgical images containing the surgical operations of a specific surgical step, generates training data from those preset surgical images and the operation times obtained from them, and performs learning on the training data to predict the operation time for that surgical step. A minimal regression sketch follows.
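  • A minimal sketch of such a predictor follows, using hand-made features (the current step and the elapsed time in it) and a linear regressor; the feature choice, model, and all numbers are illustrative assumptions.

    # Remaining-surgery-time regression sketch (scikit-learn).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    N_STEPS = 4  # assumed number of detailed surgical steps

    def features(step_id: int, elapsed_s: float) -> np.ndarray:
        onehot = np.eye(N_STEPS)[step_id]        # which surgical step we are in
        return np.concatenate([onehot, [elapsed_s]])

    # Training rows: (step, elapsed seconds at the prediction reference point)
    # -> remaining seconds, as extracted from completed annotated videos (dummy).
    X = np.array([features(s, t) for s, t in
                  [(0, 60), (0, 120), (1, 30), (1, 90), (2, 200), (3, 45)]])
    y = np.array([240, 180, 300, 240, 100, 400])

    model = LinearRegression().fit(X, y)
    print(model.predict([features(1, 60)]))     # remaining-time estimate, step 1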
  • Hereinafter, the surgical analysis layer is described. The surgical analysis layer is a layer for calculating analysis results based on the surgical elements obtained from one or more surgical element recognition models.
  • The surgical analysis layer is formed as an upper layer above the surgical element recognition layer.
  • The surgical analysis layer includes one or more surgical analysis models that obtain analysis results based on surgical recognition result combinations, which are combinations of the results provided by the one or more surgical element recognition models.
  • The surgical analysis layer includes a blood loss recognition model that calculates the degree of blood loss based on a combination of surgical recognition results including the type of organ recognized by the surgical element recognition layer, the surgical operation, and events during the operation.
  • The blood loss recognition model may be connected to at least one of the event recognition model, the organ recognition model, and the surgical motion recognition model in the surgical element recognition layer to receive the combination of surgical recognition results.
  • When used during surgery, the blood loss recognition model can calculate the degree of blood loss when an event occurs and provide it to the medical staff.
  • The surgical analysis layer also includes an organ damage detection model that calculates the degree of organ damage based on a combination of surgical recognition results including the surgical step and operation time recognized by the surgical element recognition layer.
  • The organ damage detection model can calculate the degree of organ damage by receiving the specific surgical step, the motions performed with specific surgical tools, and the surgical time from the respective surgical element recognition models.
  • The blood loss recognition model and the organ damage detection model are used to calculate analysis results for each surgical procedure during or after surgery.
  • The surgical analysis layer further includes a surgical operation misuse detection model that detects the use of a wrong surgical instrument based on the surgical tool recognized by the surgical element recognition layer, the organ on which the surgical tool operates, the detailed surgical operations within the overall operation being performed, and the events occurring in those detailed surgical operations.
  • The surgical analysis layer further includes an optimal surgical plan calculation model that calculates an optimal surgical plan based on a combination of surgical recognition results obtained from the surgical element recognition layer.
  • In one embodiment, the optimal surgical plan calculation model analyzes the surgical image data acquired so far in real time, and calculates and provides an optimal surgical plan for the procedures to be performed next.
  • In another embodiment, the optimal surgical plan calculation model calculates and provides a surgical plan for the real patient based on a combination of surgical recognition results from a virtual operation performed on the patient's virtual body model before surgery.
  • The optimal surgical plan calculation model may include a function for calculating the optimal entry positions during minimally invasive surgery.
  • In a specific embodiment, when the medical staff performs virtual surgery on the virtual body model, the virtual surgery system renders only the operating part of the surgical tool, so that the surgical simulation (i.e., virtual surgery) proceeds without interference from the arm portion of the tool. That is, virtual surgery is performed according to the surgeon's operating pattern, without constraints from the characteristics of the patient's body (e.g., organ placement, vascular condition) or the characteristics of the surgical instruments.
  • The optimal surgical plan calculation model then calculates the optimal surgical tool entry positions for the actual operation by considering the internal characteristics of the patient's body and the characteristics of the surgical tools, based on the results of the virtual surgery. That is, as the candidate regions for the entry position of an arm capable of reproducing the motions of the tool's operating part are calculated continuously, the regions where they intersect are extracted.
  • For example, when tool A, tool B, or tool C is used during robotic or laparoscopic surgery, the computer may exclude from the feasible entry range any body surface area from which the operating part of tool A cannot reach a point required for the surgical operation (that is, points that become unreachable due to the limited length of the surgical tool).
  • The computer may also exclude from the feasible range any body surface area from which tool A, once entered, would collide with body organs or tissues while performing the surgical operation.
  • Furthermore, the computer may exclude a body surface point from the feasible range if, after the surgical tool enters at that point, a required surgical operation cannot be realized at a specific location.
  • Through this, the computer can calculate the feasible entry range for tool A.
  • The computer can calculate the optimal entry position of each surgical tool by performing the feasible-range calculation separately for each tool (e.g., tool B and tool C).
  • The computer may also perform the entry range calculation separately for each function of a surgical tool, thereby calculating an optimal entry position at which each function of the tool can be applied.
  • In addition, the computer can extract the optimal entry range for each surgical tool and determine the region where multiple optimal entry ranges overlap as the optimal entry position. For example, if tool A is exchanged for tool D in the course of the surgery, the computer may take the overlapping region of tool A's entry range and tool D's entry range as a candidate region for the optimal entry position. Since the number of positions where surgical tools can enter is limited to a certain number (for example, three), the same entry position must be used when changing from tool A to tool D; a position that satisfies the entry ranges of all such tools can be determined as the final optimal entry position. The set-intersection sketch below illustrates this.
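  • The sketch below reduces this to set intersection over candidate entry points. The candidate grid, target operating points, and reach radii are assumptions made for the example.

    # Entry-position sketch: intersect per-tool feasible entry ranges.
    import numpy as np

    candidates = [np.array(p) for p in
                  [(0, 0), (4, 0), (8, 0), (0, 6), (4, 6), (8, 6)]]  # surface grid
    targets = {"A": np.array((4, 3)), "D": np.array((5, 3))}         # operating points
    reach = {"A": 5.0, "D": 4.5}                                     # tool arm lengths

    def feasible(tool: str) -> set:
        """Entry points from which the tool's operating part reaches its target."""
        return {tuple(p) for p in candidates
                if np.linalg.norm(p - targets[tool]) <= reach[tool]}

    # Tool A is exchanged for tool D mid-surgery, so one keyhole must serve both:
    shared = feasible("A") & feasible("D")
    print(sorted(shared))   # candidate optimal entry positions for the shared port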
  • The computer can divide the range in which a given surgical tool is used (that is, its range of motion) into several groups, each reachable from one of a plurality of entry positions on the body surface. For example, when laparoscopic or robotic surgery is performed with three entry positions created on the body surface, the computer divides the surgical tool's range of motion into three or fewer groups. In doing so, the computer partitions the range of motion according to whether each part can be reached from the feasible entry ranges selected for the other surgical tools.
  • When a specific surgical tool with a wide range of motion (i.e., the first surgical tool) is used together with another surgical tool (i.e., the second surgical tool), the computer determines the first tool's range of motion in consideration of the optimal entry position of the second tool (that is, the keyhole through which the second tool enters).
  • When the first surgical tool continues to be used while other surgical tools are exchanged, the computer can group its range of motion so that the first tool keeps operating through the same entry position, in consideration of the user's convenience and the time required for the surgery.
  • Since the optimal entry position of a surgical tool is determined by reflecting the results of a surgical simulation performed without considering the tool's entry position or the interference of the tool's arm portion with organs, the medical staff can perform the surgical operations in the most convenient way.
  • Furthermore, based on the results of a virtual operation performed by the medical staff using only the operating part of the surgical tool, with the arm portion removed, the optimal surgical plan calculation model can calculate and present the appropriate types of surgical tools to use in the surgery.
  • The optimal surgical plan calculation model may be trained in various ways. For example, it can calculate the optimal operation for each type of surgery through reinforcement learning, by acquiring surgical images of operations performed by a plurality of medical staff.
  • In another embodiment, instead of using actual surgical data, the optimal surgical plan calculation model can generate the optimal surgical plan by performing virtual surgery with virtually generated surgical procedures and repeating the process of evaluating whether each is an optimal procedure.
  • Specifically, the optimal surgical plan calculation model generates a plurality of genes corresponding to surgical processes, each consisting of at least one detailed surgical operation, and evaluates how close to optimal each is by performing virtual surgery on each of the plurality of genes. Thereafter, the model selects at least one gene among the plurality based on the evaluation results, applies a genetic algorithm to generate new genes, and derives the optimal surgical procedure based on the new genes.
  • The optimal surgical plan calculation model may calculate fitness by performing virtual surgery on each new (child) gene.
  • The optimal surgical plan calculation model determines whether the fitness of a new (child) gene meets preset conditions, selects the child genes that meet the conditions, and applies genetic operators such as crossover and mutation.
  • In this way, new child genes can be generated again. That is, the computer can repeatedly generate child genes from parent genes based on the fitness results that evaluate whether the surgery is optimal, and obtain, among the finally generated child genes, a gene containing the optimal surgical process. For example, the gene with the highest fitness among the child genes can be selected, deriving an optimized surgical procedure. A toy genetic-algorithm loop follows.
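  • The toy loop below follows this gene-based search. The fitness function is a stand-in for "perform virtual surgery and evaluate", and the action set, target sequence, and GA parameters are assumptions for the example.

    # Genetic-algorithm sketch for searching over surgical-process "genes".
    import random

    ACTIONS = ["cut", "grasp", "cauterize", "suture"]   # detailed surgical operations
    TARGET = ["grasp", "cut", "cauterize", "suture"]    # stand-in for the optimum

    def fitness(gene):          # proxy for virtual-surgery evaluation
        return sum(a == b for a, b in zip(gene, TARGET))

    def crossover(p1, p2):
        cut = random.randrange(1, len(p1))
        return p1[:cut] + p2[cut:]

    def mutate(gene, rate=0.1):
        return [random.choice(ACTIONS) if random.random() < rate else a
                for a in gene]

    random.seed(0)
    population = [[random.choice(ACTIONS) for _ in TARGET] for _ in range(20)]
    for generation in range(30):
        population.sort(key=fitness, reverse=True)
        parents = population[:6]                    # genes meeting the condition
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(len(population) - len(parents))]
        population = parents + children             # next generation

    best = max(population, key=fitness)
    print(best, fitness(best))   # highest-fitness gene: optimized surgical process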
  • Each surgical analysis model in the surgical analysis layer may receive the combination of surgical recognition results in various forms.
  • In one embodiment, the results calculated by the surgical element recognition models may be encoded and input to the surgical analysis model as concatenated code data. That is, after code data is formed by concatenating the codes of the surgical recognition results at each time point, each surgical analysis model can take as input the code data it needs for its own analysis.
  • In another embodiment, the surgical analysis model may receive, for each of a plurality of image frames, relational representation information expressing the relationships between the surgical elements contained in the surgical recognition information calculated by the plurality of surgical element recognition models.
  • Here, the surgical recognition information refers to the surgical element information recognized from an image frame, and may include at least one surgical element such as surgical tools, surgical operations, body parts, bleeding status, surgical stage, surgical time (e.g., remaining surgical time, elapsed surgical time), and camera information (e.g., camera position, angle, direction, movement).
  • The relational representation may take the form of a matrix whose rows and columns are the respective surgical elements and whose entries are values for the correlations between those surgical elements.
  • The relational representation may be calculated for every frame of the surgical image, or for each division image of a specific unit, before being input to the surgical analysis model. A minimal relation-matrix sketch follows.
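  • A minimal sketch of such a relation matrix follows; the element list and the scoring rule (a product of recognition confidences) are purely illustrative assumptions, not the patent's encoding.

    # Per-frame relation-matrix sketch: rows/columns are surgical elements.
    import numpy as np

    ELEMENTS = ["instrument", "action", "organ", "bleeding", "phase"]

    def relation_matrix(recognition: dict) -> np.ndarray:
        """recognition: element name -> (label, confidence) from the models."""
        n = len(ELEMENTS)
        rel = np.zeros((n, n))
        for i, a in enumerate(ELEMENTS):
            for j, b in enumerate(ELEMENTS):
                if a in recognition and b in recognition:
                    # Illustrative correlation: product of the two confidences.
                    rel[i, j] = recognition[a][1] * recognition[b][1]
        return rel

    frame_recognition = {
        "instrument": ("grasper", 0.95),
        "action": ("grasping", 0.90),
        "organ": ("stomach", 0.85),
        "bleeding": (True, 0.20),
        "phase": ("dissection", 0.80),
    }
    print(relation_matrix(frame_recognition).round(2))  # input to an analysis model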
  • A surgical solution providing layer is further included.
  • The surgical solution providing layer includes one or more surgical solution models that provide surgical outputs organized on the basis of one or more analysis results obtained from the surgical analysis layer.
  • That is, the surgical solution providing layer includes one or more surgical solution models, each a form of service that the medical staff can use immediately.
  • The surgical solution providing layer is formed as an upper layer above the surgical analysis layer, and receives the results calculated by the surgical analysis models in the surgical analysis layer.
  • The surgical solution models include a surgical evaluation model that calculates an evaluation result for the surgery based on one or more analysis results obtained from the surgical analysis layer.
  • For example, the surgical evaluation model may receive the blood loss result, the organ damage detection result, and a comparison between the optimal surgical process and the actual surgery, and calculate an evaluation of the surgery performed by the medical staff.
  • The surgical solution models further include a chart generation model that generates a chart for the surgery based on one or more analysis results obtained from the surgical analysis layer.
  • The chart generation model can automatically generate a record of the operation performed and of its results.
  • That is, after the surgery is completed, the chart generation model combines the surgical elements obtained during the surgical process, inputs them into each surgical analysis model, and receives the analysis results of the plurality of surgical analysis models to automatically generate a chart (for example, a surgical record).
  • The surgical solution models further include a surgical complexity calculation model that calculates the complexity or difficulty of the surgery based on one or more analysis results obtained from the surgical analysis layer.
  • The performance evaluation of medical staff may be carried out in consideration of the difficulty of the surgeries performed by each member, and the surgical complexity calculated by the surgical complexity calculation model may be reflected in it. That is, within the surgical solution providing layer, the surgical evaluation model can receive the complexity result calculated by the surgical complexity calculation model and use it in the surgical evaluation.
  • The surgical solution models further include a surgical Q&A model that calculates answers to questions about the surgery.
  • In the above, an embodiment in which the surgical image analysis apparatus is composed of a surgical element recognition layer, a surgical analysis layer, and a surgical solution layer has been described.
  • The lowest layer may be the surgical element recognition layer, which recognizes the surgical elements that are the smallest units requiring medical recognition in the surgical procedure.
  • The middle layer may be a surgical module layer that grasps medical meanings through the surgical elements and determines the conditions for making medical judgments.
  • The top layer may be the surgical solution layer, which recognizes the medical problems that may occur throughout the medical procedure and provides a solution for each problem.
  • The surgical element recognition layer may recognize surgical elements based on the various surgical images captured during the medical surgery process and the various surgery-related tools used in that process.
  • The surgical elements include a surgical phase (Phase), a body part (e.g., an organ), an event, an operation time (Time), a surgical instrument (Instrument), a camera (Camera), a surgical operation (Action), and other elements.
  • The surgical element recognition layer may recognize at least one surgical element based on the surgical image obtained during the medical surgery process. It can recognize not only the surgical elements visible in the surgical image itself but also elements derived using the results of learning surgical images. Using the embodiments of the present invention described above, the surgical elements that are the medically recognized minimum units of the medical procedure can be recognized effectively.
  • The surgical element recognition layer may recognize surgical elements individually. For example, when an organ is recognized from a surgical image, only that organ may be recognized, and when a surgical tool is recognized from a surgical image, only that surgical tool may be recognized.
  • Alternatively, the surgical element recognition layer may use other surgical elements to recognize one surgical element. That is, the surgical element recognition layer may establish a primitive level relation representing the relationships between the surgical elements, and recognize each surgical element using that primitive level relation.
  • The primitive level relation lists the additional surgical elements necessary to recognize a given surgical element, and may include information specifying the relationships between the surgical elements (e.g., state changes, position changes, shape changes, color changes, arrangement relationships, etc.). For example, when an event (e.g., a bleeding event) is recognized from a surgical image, additional surgical elements such as organs, surgical instruments, and surgical operations are further recognized based on the primitive level relation for the event, and the event can be recognized through those additionally recognized surgical elements.
  • The surgical analysis layer may grasp specific medical meanings or make specific medical judgments based on the surgical elements recognized through the surgical element recognition layer.
  • Using at least one surgical element, the surgical analysis layer can grasp medical meanings or make medical judgments such as blood loss estimation (Blood Loss Estimation), internal body damage detection (Anatomy Injury Detection; e.g., organ damage detection), instrument misuse detection (Instrument Misuse Detection), and optimal surgical procedure suggestion (Optimal Planning Suggestion).
  • The surgical analysis layer can configure each surgical module according to the information needed in the medical surgery process or the medical problem to be solved (e.g., blood loss estimation, internal body damage detection, tool misuse detection, optimal surgical procedure suggestion, etc.).
  • For example, the surgical analysis layer may constitute a blood loss estimation module for grasping the degree of bleeding in the surgical subject during medical surgery.
  • An internal damage detection module may be configured to determine how much damage has occurred to a specific organ during surgery.
  • A surgical tool misuse detection module may be configured to determine whether a surgical tool is misused during surgery.
  • Each surgical module may selectively use at least one of the surgical elements recognized in the surgical element recognition layer below it.
  • In this case, each surgical module may use a module level relation.
  • The module level relation may mean that the surgical elements to be recognized by the corresponding surgical module are determined and designated.
  • The module level relation may be one in which the surgical elements to be recognized by the corresponding surgical module are determined based on the degree to which a specific meaning can be recognized from the surgical image (e.g., the representative recognition value (SAM) described above).
  • The surgical solution layer can finally solve high-level medical problems using the medical meanings or medical judgments identified through the surgical analysis layer.
  • Using at least one surgical module, the surgical solution layer can provide solutions such as chart generation (Chart Generation), complication estimation (Complication Estimation), surgical performance assessment (Surgical Performance Assessment), and a surgical Q&A system (Q&A using a Surgical Bot).
  • The surgical solution layer may configure a solution module or system for each medical problem (e.g., chart generation, complication determination, surgical performance evaluation, surgical Q&A system, etc.).
  • For example, the surgical solution layer may constitute a chart generation solution module (or system) for recording all the information generated during the surgical process or for recording the patient's condition.
  • A complication determination solution module (or system) may be configured for predicting complications that may occur after the surgical procedure.
  • Each solution module (or system) may selectively use the medical meanings or medical judgments derived from the surgical analysis layer, which is the layer below it.
  • In this case, each solution module (or system) of the surgical solution layer may use a solution level relation.
  • The solution level relation may mean that the surgical modules that need to be used by the corresponding solution module are determined and designated.
  • For example, the medical staff may use the complication determination solution module to determine a patient's complications.
  • In this case, the complication determination solution module can grasp, based on the solution level relation, that the blood loss estimation module and the internal damage detection module are required from the lower layer.
  • The necessary information may then be received from the corresponding modules of the lower layer to determine the patient's complications and provide the determination result to the medical staff.
  • As described above, the medical solution model according to embodiments of the present invention is configured to operate module by module, from the lowest layer to the highest layer, according to the medical problem. Therefore, even when a new medical problem arises, a solution can be provided efficiently by configuring a new module in each layer or by configuring a new layer.
  • The medical solution model according to embodiments of the present invention can be applied to various medical operations, and can be used effectively for minimally invasive surgery using a laparoscope or endoscope, and especially for surgical robots.
  • Figure 2 is a flow chart of a computer-aided surgical analysis method according to an embodiment of the present invention.
  • A computer-aided surgical analysis method according to an embodiment of the present invention includes: a surgical image acquisition step in which the computer obtains a surgical image (S200); a step in which the computer inputs the surgical image into one or more surgical element recognition models (S400); a surgical recognition result combination acquisition step in which the computer acquires a combination of the surgical recognition results calculated by each surgical element recognition model (S600); and a surgical analysis result acquisition step in which the computer obtains one or more analysis results by inputting the combination of surgical recognition results into one or more surgical analysis models (S800).
  • First, the computer acquires the surgical image (S200).
  • The computer may acquire the surgical image in real time while the surgery is being performed by the medical staff.
  • Alternatively, the computer can acquire the entire stored surgical image after the medical staff completes the operation.
  • When the computer uses a surgical image acquired in real time, it utilizes the image to provide the information needed during the medical staff's surgery.
  • When the computer uses the entire image after the surgery is completed, it uses the surgical image for retrospective analysis of the surgery.
  • Next, the computer inputs the surgical image into one or more surgical element recognition models (S400). That is, the computer inputs the surgical image into one or more surgical element recognition models in order to recognize the surgical elements it contains.
  • The one or more surgical element recognition models are included in the surgical element recognition layer forming the lowest layer of the surgical analysis system.
  • Each surgical element recognition model may be constructed in parallel within the surgical element recognition layer and have connection relationships with the others. That is, while surgical element recognition model A is running, it can receive and use the surgical elements (i.e., recognition results) calculated by surgical element recognition model B, formed in parallel. The detailed description of each surgical element recognition model given above is omitted here.
  • the computer acquires the combination of surgical recognition results calculated by each surgical element recognition model (S600; obtaining a surgical recognition result combination). That is, the computer generates data combining the surgical element recognition results calculated by the respective surgical element recognition models.
  • the computer inputs the combination of surgical recognition results into one or more surgical analysis models to obtain one or more analysis results (S800; obtaining a surgical analysis result).
  • the one or more surgical analysis models are included in the surgical analysis layer above the surgical element recognition layer, and are selected according to a user's request. That is, each surgical analysis model in the surgical analysis layer has a connection relationship established, based on the data it needs for analysis, with one or more surgical element recognition models in the surgical element recognition layer, so that the combination of surgical element recognition results required for the analysis can be input to it. A detailed description of each surgical analysis model is omitted here.
  • the computer inputs one or more analysis results into a specific surgical solution model to provide a surgical output organized based on the analysis results (S1000).
  • the surgical solution model is included in the surgical solution providing layer, which is created in the surgical analysis system as an upper layer of the surgical analysis layer.
  • because each surgical solution model is connected to one or more surgical analysis models, one or more surgical analysis results may be input to it (the pipeline sketch below mirrors the S200-S1000 flow). A detailed description of each surgical solution model is omitted here.
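The S200-S1000 flow above can be read as a layered pipeline: recognition models run over the image, their outputs are merged into one surgical recognition result combination, and the selected analysis and solution models consume that combination. The sketch below mirrors this flow with stub models; every function body is an illustrative placeholder under assumed names.

```python
# Sketch of the S200 -> S400 -> S600 -> S800 -> S1000 flow with stub models.
# All model internals are placeholders; only the layered data flow follows
# the method described above.

def acquire_surgical_image():                     # S200
    return "frame_batch"                          # stand-in for video frames

RECOGNITION_MODELS = {                            # lowest layer (S400)
    "organ": lambda img: {"organ": "gallbladder"},
    "tool": lambda img: {"tool": "grasper"},
    "event": lambda img: {"bleeding": True},
}

def recognize(image):                             # S600: merge model outputs
    combination = {}
    for model in RECOGNITION_MODELS.values():
        combination.update(model(image))
    return combination

ANALYSIS_MODELS = {                               # middle layer (S800)
    "blood_loss": lambda c: {"blood_loss_ml": 120 if c["bleeding"] else 0},
}

def analyze(combination, selected=("blood_loss",)):
    results = {}
    for name in selected:                         # selected per user request
        results.update(ANALYSIS_MODELS[name](combination))
    return results

def solution_output(analysis):                    # S1000: solution layer
    return f"Estimated blood loss: {analysis['blood_loss_ml']} ml"

if __name__ == "__main__":
    image = acquire_surgical_image()
    combo = recognize(image)
    print(solution_output(analyze(combo)))
```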
  • the surgical analysis method by a computer according to an embodiment of the present invention described above may be implemented as a program (or application) to be executed in combination with a computer, which is hardware, and stored in a medium.
  • the above-described program may include code written in a computer language such as C, C++, JAVA, or machine language that the computer's processor (CPU) can read through a device interface, so that the computer can read the program and execute the methods implemented as the program.
  • the code may include functional code related to functions defining the functions necessary to execute the above methods, and control code related to the execution procedure necessary for the computer's processor to execute those functions according to a predetermined procedure.
  • the code may further include memory-reference code indicating which location (memory address) of the computer's internal or external memory should be referenced for the additional information or media necessary for the computer's processor to perform the functions.
  • when the computer's processor needs to communicate with a remote computer or server in order to perform the functions, the code may further include communication-related code specifying how to communicate with the remote computer or server using the computer's communication module, and what information or media to transmit and receive during communication.
  • the storage medium refers to a medium that stores data semi-permanently and can be read by a device, rather than a medium that stores data for a short time, such as a register, cache, or memory.
  • examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. That is, the program may be stored in various recording media on various servers that the computer can access, or in various recording media on the user's computer.
  • the medium may also be distributed over computer systems connected through a network, and computer-readable code may be stored in the distributed medium.

Abstract

The present invention relates to a system, method, and program for analyzing and recognizing a surgical image. A method for analyzing surgery by using a computer according to an embodiment of the present invention comprises: a step for acquiring a surgical image by a computer; a step for inputting, by the computer, the surgical image to one or more surgical element recognition models; a surgery recognition result combination acquisition step for acquiring, by the computer, a combination of surgery recognition results calculated by each surgical element recognition model, wherein the one or more surgical element recognition models are included in a surgical element recognition layer, which is the lowest level in a surgical analysis system; and a surgical analysis result acquisition step for acquiring, by the computer, one or more analysis results by inputting the combination of surgery recognition results to one or more surgical analysis models, wherein the one or more surgical analysis models are included in a surgical analysis layer on the surgical element recognition layer and selected according to a request of a user.

Description

Surgical analysis device, surgical image analysis and recognition system, method and program

The present invention relates to a surgical image analysis and recognition method.
Medical surgery or medical procedures (i.e., medical treatment acts) can be classified into open surgery, minimally invasive surgery (MIS) including laparoscopic surgery and robotic surgery, radiation therapy, endoscopic procedures, and the like. Open surgery refers to surgery in which the medical staff directly sees and touches the area to be treated, while minimally invasive surgery, also called keyhole surgery, is typified by laparoscopic surgery and robotic surgery. In laparoscopic surgery, instead of opening the abdomen, small holes are made where needed, a laparoscope fitted with a special camera and surgical instruments are inserted into the body, the operation is observed through a video monitor, and microsurgery is performed using a laser or special instruments. Robotic surgery performs minimally invasive surgery using a surgical robot. Radiation therapy refers to treatment performed with radiation or laser light from outside the body. An endoscopic procedure is performed by inserting an endoscope into the digestive tract or the like and then inserting tools through a channel provided in the endoscope.
In such medical treatment, an image is often acquired during the actual procedure and the treatment is performed based on it. That is, open surgery is performed while the medical staff directly views the patient's organs with the naked eye, whereas laparoscopic surgery or endoscopic procedures are performed while viewing images obtained through the laparoscope or endoscope. Therefore, it is important to provide the medical staff with a variety of information through the images obtained while the treatment is performed.
In addition, there is a demand for technologies that can provide information to assist doctors during medical surgery or procedures. To provide such information, it is important to recognize the operations performed and the various pieces of surgical information arising in the surgical process, and to grasp the meaning of the recognized information. Accordingly, there is a need for technology that allows a computer to recognize motions and various surgical information from images.
Recently, deep learning has been widely used for the analysis of surgical images. Deep learning is defined as a set of machine learning algorithms that attempt high-level abstraction (summarizing key content or functions from large amounts of data or complex material) through a combination of several nonlinear transformation methods. Broadly, deep learning can be seen as a field of machine learning that teaches a human way of thinking to computers.
To analyze or recognize a surgical image, the various surgical elements included in the image must be recognized. However, it is difficult to recognize every surgical element included in a surgical image with only a single learning model.
Accordingly, the present invention seeks to provide a surgical analysis apparatus and a surgical image analysis and recognition system, method, and program that can quickly obtain accurate recognition results by recognizing each of a plurality of surgical elements in a surgical image through surgical element recognition models formed in parallel.

The present invention also seeks to provide a surgical analysis apparatus and a surgical image analysis and recognition system, method, and program that can obtain, in full or selectively, the various analysis results required for surgical analysis through a plurality of surgical analysis models, each of which calculates its own result using the several surgical element recognition results produced by the surgical element recognition models.

The present invention also seeks to provide a surgical analysis apparatus and a surgical image analysis and recognition system, method, and program that include a plurality of surgical solution models which conveniently produce the service outputs needed by the medical staff based on the various surgical analysis results from the plurality of surgical analysis models.

The problems to be solved by the present invention are not limited to those mentioned above, and other problems not mentioned will be clearly understood by those of ordinary skill in the art from the following description.
A computer-aided surgical analysis method according to an embodiment of the present invention includes: acquiring, by a computer, a surgical image; inputting, by the computer, the surgical image into one or more surgical element recognition models; acquiring, by the computer, a combination of surgical recognition results calculated by each surgical element recognition model, wherein the one or more surgical element recognition models are included in a surgical element recognition layer, the lowest level in the surgical analysis system; and obtaining, by the computer, one or more analysis results by inputting the combination of surgical recognition results into one or more surgical analysis models, wherein the one or more surgical analysis models are included in a surgical analysis layer above the surgical element recognition layer and are selected according to a user's request.
In another embodiment, each surgical analysis model in the surgical analysis layer has a connection relationship established, based on the data required for its analysis, with one or more surgical element recognition models in the surgical element recognition layer.

In another embodiment, the surgical element recognition models include: an organ recognition model that recognizes organs in the surgical image; a surgical tool recognition model that recognizes surgical tools in the surgical image and their movements; and an event recognition model that recognizes events occurring in the surgical image, an event being a non-ideal situation during surgery, including bleeding.

In another embodiment, the surgical analysis layer includes: a blood loss recognition model that calculates the degree of blood loss based on a combination of surgical recognition results including the type of surgical organ, the surgical action, and the events occurring during surgery recognized by the surgical element recognition layer; and an organ damage detection model that calculates the degree of organ damage based on a combination of surgical recognition results including the surgical phase and operation time recognized by the surgical element recognition layer. The blood loss recognition model and the organ damage detection model are used to calculate analysis results for each surgical procedure during or after surgery.

In another embodiment, when the surgical result is analyzed after the surgery is completed, the surgical analysis layer further includes a surgical tool misuse detection model that detects the use of a wrong surgical tool based on the surgical tool recognized by the surgical element recognition layer, the organ on which an action is performed with the tool, the detailed surgical phase within the entire surgery performed with the tool, and the events that occurred in that detailed phase.

In another embodiment, the surgical analysis layer further includes an optimal surgical plan calculation model that calculates an optimal surgical plan based on the combination of surgical recognition results obtained from the surgical element recognition layer, wherein the optimal surgical plan calculation model analyzes the surgical image data acquired so far in real time and calculates and provides the optimal surgical plan to be performed next.

In another embodiment, the surgical analysis layer further includes an optimal surgical plan calculation model that calculates an optimal surgical plan based on the combination of surgical recognition results obtained from the surgical element recognition layer, wherein the optimal surgical plan calculation model calculates and provides a surgical plan for the actual patient based on the combination of surgical recognition results for a virtual surgery performed on the patient's virtual body model before surgery.
In another embodiment, the method further includes the computer inputting one or more analysis results into a specific surgical solution model to provide a surgical output organized based on the analysis results.

In another embodiment, the surgical solution models include: a surgical evaluation model that calculates an evaluation result for the surgery based on one or more analysis results obtained from the surgical analysis layer; a chart generation model that generates a chart for the surgery based on one or more analysis results obtained from the surgical analysis layer; and a surgical complexity calculation model that calculates the complexity or difficulty of the surgery based on one or more analysis results obtained from the surgical analysis layer.

In another embodiment, the models include a surgical Q&A model that, by learning from one or more analysis results obtained from the surgical analysis layer, produces answers to questions about the surgery; in the surgical output providing step, an answer is calculated and provided as the medical staff enters a question about a specific surgery.

A computer-aided surgical analysis program according to another embodiment of the present invention may be combined with a computer, which is hardware, and stored in a medium to execute the computer-aided surgical analysis method.

A surgical image analysis apparatus according to yet another embodiment of the present invention includes: a surgical element recognition layer including one or more surgical element recognition models that calculate surgical recognition results as a surgical image is input, the one or more surgical element recognition models being included in the surgical element recognition layer, which is the lowest level in the surgical analysis system; and a surgical analysis layer, formed as an upper layer of the surgical element recognition layer, including one or more surgical analysis models that obtain analysis results based on a surgical recognition result combination, i.e., a combination of the results provided by the one or more surgical element recognition models.

In another embodiment, the apparatus further includes a surgical solution providing layer including one or more surgical solution models that provide surgical outputs organized based on one or more analysis results obtained from the surgical analysis layer.
According to the present invention as described above, each surgical element can be accurately recognized by an individual surgical element recognition model that recognizes individual surgical elements (surgical tools, bleeding, camera, etc.) in the surgical image. That is, compared with a method of recognizing the various surgical elements included in a surgical image through a single recognition model, recognition results of higher accuracy can be provided.

In addition, according to the present invention, since surgical analysis is performed by combining the surgical element recognition results calculated by the surgical element recognition models, accurate surgical analysis results can be provided to the medical staff.

In addition, according to the present invention, as one or more surgical analysis results are input, each surgical solution model automatically produces the outputs the medical staff needs, so the medical staff's post-operative work can be simplified.

In addition, according to the present invention, when a new surgical element must be recognized, a new surgical analysis is required, or a new type of service solution is needed, a new function can be added simply by adding a new surgical element recognition model, surgical analysis model, or surgical solution model to the surgical analysis system and establishing connection relationships with one or more models in the upper or lower layer.
Figure 1 is a layer configuration diagram of a surgical analysis apparatus according to an embodiment of the present invention.

Figure 2 is a flow chart of a computer-aided surgical analysis method according to an embodiment of the present invention.

Figure 3 is a flow chart of a computer-aided surgical analysis method further including a surgical output providing process, according to an embodiment of the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. The advantages and features of the present invention, and the methods for achieving them, will become clear with reference to the embodiments described below in detail together with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be implemented in various different forms; these embodiments are provided only so that the disclosure of the present invention is complete and to fully inform those of ordinary skill in the art to which the present invention pertains of the scope of the invention, and the present invention is defined only by the scope of the claims. The same reference numerals refer to the same components throughout the specification.

Unless otherwise defined, all terms (including technical and scientific terms) used in this specification may be used with meanings commonly understood by those of ordinary skill in the art to which the present invention pertains. Terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless expressly and specifically defined.

The terminology used herein is for describing the embodiments and is not intended to limit the present invention. In this specification, the singular also includes the plural unless the phrase specifically states otherwise. As used herein, "comprises" and/or "comprising" does not exclude the presence or addition of one or more components other than those mentioned.
In this specification, a 'surgical image' is an image capturing a surgical procedure. For example, a surgical image includes an image obtained by an endoscope inserted into the body during laparoscopic surgery, including robotic surgery. A surgical image may also include an image of a procedure performed through an endoscope inserted through the oral cavity or anus.

In this specification, a 'virtual body model' refers to a model generated to match an actual patient's body based on medical image data. The virtual body model may be generated by modeling the medical image data in three dimensions as it is, or may be corrected after modeling to match the actual surgical situation. The virtual body model can be used for guiding or navigation during surgery, post-operative analysis, and the like. The virtual body model may also be implemented identically to the actual patient's body by reflecting the color, texture, elasticity, and other properties of the actual body.

In this specification, 'virtual surgery data' refers to data including rehearsal or simulation actions performed on a virtual body model. The virtual surgery data may be image data of a rehearsal or simulation performed on the virtual body model in a virtual space, or data recording the surgical actions performed on the virtual body model.

In this specification, 'actual surgery data' refers to data obtained as actual medical staff perform surgery. The actual surgery data may be image data capturing the surgical site during an actual surgical procedure, or data recording the surgical actions performed during the actual procedure.

In this specification, 'computer' includes all of the various devices capable of performing computational processing and providing results to a user. For example, a computer may be not only a desktop PC or notebook but also a smartphone, tablet PC, cellular phone, PCS phone (Personal Communication Service phone), a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm Personal Computer (Palm PC), or a Personal Digital Assistant (PDA). In addition, when a head mounted display (HMD) device includes a computing function, the HMD device may be the computer. The computer may also be a server that receives requests from clients and performs information processing.
Hereinafter, with reference to the drawings, a detailed description is given of a computer-aided surgical analysis method, program, and surgical analysis apparatus according to embodiments of the present invention.

Figure 1 is a configuration diagram of a surgical image analysis apparatus according to an embodiment of the present invention.

Referring to Figure 1, a surgical image analysis apparatus according to an embodiment of the present invention includes a surgical element recognition layer 10 and a surgical analysis layer 20.
The surgical element recognition layer is a layer that recognizes the elements involved in surgery and the elements generated by surgery. The elements involved in surgery include the camera that captures the surgical image (for example, the endoscope used in laparoscopic surgery), surgical tools, organs, blood vessels, and the like. The elements generated by surgery include the actions of surgical tools and events such as bleeding. The surgical element recognition layer forms the lowest level in the surgical analysis system.

The surgical element recognition layer includes one or more surgical element recognition models that calculate surgical recognition results as a surgical image is input. A detailed description of each surgical element recognition model within the surgical element recognition layer follows.

The plurality of surgical element recognition models may have connection relationships with one another. That is, surgical element recognition model A may additionally use the recognition result of surgical element recognition model B to calculate its own recognition result.
In one embodiment, an organ recognition model that recognizes organs in the surgical image is included. That is, the organ recognition model recognizes one or more organs present in each image frame of the surgical image. The organ recognition model may also include a blood vessel model composed of arteries and veins.

The organ recognition model can recognize organs in the surgical image in various ways. For example, it may identify the organ or blood vessel on which a surgical action is performed, based on the camera position obtained from the camera position recognition model described below, or on the surgical phase from the surgical phase recognition model described below.

In another embodiment, a surgical tool recognition model that recognizes surgical tools (instruments) in the surgical image is included. The surgical tool recognition model can recognize the type of surgical tool appearing in the image. For example, by pre-learning images of each surgical tool, it can recognize the surgical tools appearing in each frame. Also, for example, by learning a plurality of surgical image data in which each surgical tool performs actions on organs, it can recognize a surgical tool that is performing a surgical action inside the body.
In another embodiment, a surgical tool action (instrument action) recognition model is further included. The surgical tool action recognition model recognizes what meaning the movement of a surgical tool performed in a specific surgical phase has (that is, what result the motion is intended to achieve).

Specifically, the surgical tool action recognition model acquires a surgical image (or a specific section of a surgical image) for which action-meaning recognition is required, and learns the plurality of image frames included in the image to recognize basic surgical actions. A basic surgical action is an elementary motion, such as cutting or grasping, performed using a specific surgical tool. The model then extracts a set of consecutive image frames from the plurality of image frames based on the recognized surgical actions, and derives the meaning of the unit surgical action through learning. A unit surgical action is a surgical action that has meaning as an action on a specific organ in the surgical process, intended to produce a specific result.

To this end, the surgical tool action recognition model may include: a basic surgical action recognition module that recognizes basic surgical actions, the minimum units of surgical motion that carry no surgical meaning by themselves; and a surgical meaning recognition module that recognizes unit surgical actions based on sequences of basic surgical actions.

In one embodiment, the basic surgical action recognition module, by learning surgical tool images, recognizes the surgical tools in consecutive frames and calculates the state changes of the tools to recognize the basic surgical action of each tool.

In one embodiment, the surgical meaning recognition module is trained on learning data that matches sequences of basic surgical actions to unit surgical actions, so that when consecutive basic surgical action data, recognized in a new surgical image by the basic surgical action recognition module, is input, it calculates the unit surgical action. The learning data may further include information on the organ on which the consecutive basic surgical actions are performed, and that organ information may be the result calculated by the organ recognition model in the surgical element recognition layer.
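As a rough sketch of the two-stage recognition just described, the following Python fragment stands in a stub for the basic surgical action recognition module and maps a compressed sequence of basic actions, together with the organ reported by the organ recognition model, to a unit surgical action. The label sets and the lookup table are hypothetical; a real surgical meaning recognition module would be a trained sequence model rather than a table.

```python
# Sketch of the two-stage action recognition described above: a (stubbed)
# basic-action recognizer labels each frame, and a sequence-level module
# maps runs of basic actions to a unit surgical action. The label sets and
# the lookup table are illustrative assumptions, not trained models.

from itertools import groupby

def recognize_basic_actions(frames):
    """Stand-in for the basic surgical action recognition module:
    in practice a CNN would classify tool state changes per frame."""
    return [f["label"] for f in frames]

# Hypothetical mapping from a compressed basic-action sequence (plus the
# organ reported by the organ recognition model) to a unit surgical action.
UNIT_ACTION_TABLE = {
    (("grasp", "cut"), "cystic_duct"): "duct_division",
    (("grasp", "pull"), "gallbladder"): "tissue_retraction",
}

def recognize_unit_action(frames, organ):
    basic = recognize_basic_actions(frames)
    compressed = tuple(k for k, _ in groupby(basic))   # drop repeats
    return UNIT_ACTION_TABLE.get((compressed, organ), "unknown")

if __name__ == "__main__":
    frames = [{"label": "grasp"}, {"label": "grasp"}, {"label": "cut"}]
    print(recognize_unit_action(frames, "cystic_duct"))  # duct_division
```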
In another embodiment, a camera position recognition model that recognizes the position of the camera during minimally invasive surgery is further included. Specifically, when the camera is an endoscope used in laparoscopic surgery, after the endoscope is inserted into the abdominal cavity it can move forward/backward and up/down/left/right within the cavity, so the camera position recognition model calculates the position of the camera within the abdominal cavity in real time.

In one embodiment, the camera position recognition model sets a reference position of the camera based on one or more reference points of a reference object in the image captured by the camera, and calculates the camera position in real time by computing the relative position change from changes of objects in the image.

Specifically, in the camera position calculation method using actual surgical images, the camera position recognition model obtains a reference object from the actual surgical image captured by the camera entering the body, sets a reference position for the camera, and calculates the amount of positional change of the camera as it moves. For example, the camera position recognition model calculates the amount of positional change based on object changes in the real-time image acquired by the camera (for example, changes in object size, changes in object position, changes in which objects appear in the image, etc.). It then calculates the current position of the camera based on the amount of positional change relative to the reference position. The actual surgical image may be a stereoscopic 3D image, and thus an image with three-dimensional depth. Accordingly, the position of a surgical tool in three-dimensional space can be accurately determined through the depth map of the actual surgical image.

The reference object may be an organ or a specific internal site that satisfies at least one of the following conditions: its features are easy to detect from the image; it exists at a fixed position inside the body; it has little or no movement during surgery; its shape does not deform; it is not affected by surgical tools; and it can also be obtained from medical image data (e.g., data captured by CT, PET, etc.). For example, parts with very little movement during surgery, such as the liver or abdominal wall, or parts that can also be obtained from medical image data, such as the stomach, esophagus, or gallbladder, can be set as reference objects.

Also, when the camera itself moves, a surgical tool in the image captured by the camera may be used as the reference object. In general, when performing robotic surgery, camera movement and surgical tool movement do not occur at the same time, so while the camera moves, a surgical tool stopped at a specific position may be used as the reference object.

The camera position recognition model may also repeat the process of resetting the camera's reference position, based on one or more reference points of a reference object, whenever one appears in the surgical image. That is, when the camera moves frequently, errors can build up in the camera position as relative movements from the reference position accumulate; therefore, the camera position recognition model may reset the camera reference position whenever a new reference object appears in the image in real time.

The camera position recognition model may also accumulate camera positions within a virtual body model generated to match the actual patient's body. That is, after matching an actual reference object in the patient's body with the corresponding virtual reference object in the virtual body model, it can accumulate the relative position changes with respect to the camera's reference position. In this way, the camera position recognition model can record, in the 3D virtual body model, the path along which the camera moved during surgery.

In another embodiment, the camera position recognition model, upon recognizing the camera position, calculates the placement position of each surgical tool relative to the camera position. That is, it calculates the relative positions of one or more surgical tools from a specific camera position.
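A minimal sketch of the camera position logic described above, assuming that per-frame relative motion estimates are already available (e.g., from visual odometry): the tracker anchors on a reference object, accumulates relative motion, and re-anchors whenever a new reference object appears so that drift does not accumulate. All coordinates are illustrative.

```python
# Sketch of the camera position logic described above: set a reference
# position from a reference object, accumulate relative motion estimates,
# and reset whenever a new reference object is seen to limit drift.
# The motion estimates are assumed inputs (e.g., from visual odometry).

import numpy as np

class CameraPositionTracker:
    def __init__(self):
        self.reference = None        # last anchored position (3D, mm)
        self.offset = np.zeros(3)    # accumulated motion since anchor

    def anchor(self, reference_object_position):
        """Reset the reference using a known object (e.g., a liver
        landmark) whose position is also known in the virtual body model."""
        self.reference = np.asarray(reference_object_position, float)
        self.offset = np.zeros(3)

    def move(self, delta):
        """Accumulate a relative motion estimate between frames."""
        self.offset += np.asarray(delta, float)

    @property
    def position(self):
        return self.reference + self.offset

if __name__ == "__main__":
    cam = CameraPositionTracker()
    cam.anchor([0.0, 0.0, 50.0])     # reference object fixes the start
    cam.move([1.0, 0.0, -2.0])
    cam.move([0.5, 0.2, -1.0])
    print(cam.position)              # [1.5, 0.2, 47.0]
    cam.anchor([2.0, 0.0, 46.0])     # new reference object -> drift reset
    print(cam.position)
```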
In another embodiment, a surgical tool position calculation model is further included. In one embodiment of this model, the position of a first point of the surgical tool in the space outside the patient's body is calculated based on sensing information obtained from a sensing device attached to the surgical tool inserted into the patient's body; then, through a virtual body model generated to match the patient's body state, the position of a second point of the surgical tool in the space inside the patient's body is calculated by reflecting the characteristic information of the tool based on the first point position. In this way, the surgical tool position calculation model provides position information of the surgical tool in the actual internal space of the patient's body, based on the second point position of the tool with respect to the virtual body model.

In another embodiment of the surgical tool position calculation model, the coordinate system of the surgical robot system is matched with the coordinate system of the virtual body model, and the movement data generated when the robot's surgical tool moves is received and applied to the virtual body model to calculate the real-time position of the tool. Specifically, the surgical tool position calculation model obtains the coordinate information of the surgical robot, including the surgical tool, based on a reference point of the patient (for example, a specific location such as the patient's navel, a marker displayed on the patient's body surface, or an identification mark projected onto the body surface by the surgical robot system); matches the coordinate information of the virtual body model, generated to match the patient's body state, with the coordinate information of the surgical robot; and calculates the position of the surgical tool in the virtual body model corresponding to the tool position obtained from the robot's coordinate information. That is, by aligning the coordinate systems of the actual patient body in which the robot moves and the virtual body model, the surgical tool position calculation model can apply the robot's tool movements to the virtual body model and obtain the real-time position of the surgical tool.
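The coordinate-system matching in this embodiment can be sketched, under the simplifying assumption that the two frames differ only by a translation fixed by one shared patient reference point; a full implementation would also estimate rotation, for example from several markers. The numbers below are placeholders.

```python
# Sketch of aligning the surgical robot's coordinate system with the
# virtual body model via a shared patient reference point (e.g., the
# navel), then mapping robot-reported tool positions into the model.
# A pure translation is assumed for simplicity; a full implementation
# would estimate rotation as well (e.g., from several markers).

import numpy as np

def make_alignment(ref_robot, ref_model):
    """Translation that maps the robot frame onto the model frame."""
    t = np.asarray(ref_model, float) - np.asarray(ref_robot, float)
    return lambda p_robot: np.asarray(p_robot, float) + t

if __name__ == "__main__":
    # Same anatomical reference point expressed in both frames.
    to_model = make_alignment(ref_robot=[120.0, 40.0, 310.0],
                              ref_model=[0.0, 0.0, 0.0])
    tool_in_robot = [125.0, 42.5, 300.0]       # robot kinematics output
    print(to_model(tool_in_robot))             # tool in virtual body model
```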
In another embodiment, a surgical phase recognition model is further included. The surgical phase recognition model determines which detailed surgical phase the step currently being performed, or a specific section of the surgical image, corresponds to within the entire surgical procedure. The model can determine the surgical phase of a surgical image in various ways. For example, the surgical phase recognition model stores the progression order for each surgery type and can recognize the specific phase within that order based on the region of the patient's body where the camera is located.
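A toy sketch of phase recognition from a stored progression order: given the surgery type and the body region where the camera currently is (as reported by the camera position recognition model), look up the matching phase at or after the last known one. The phase list and region names are invented for illustration.

```python
# Sketch of phase recognition from a stored progression order: given the
# surgery type and the body region where the camera currently is, look up
# the matching phase. The phase list and region mapping are illustrative
# assumptions for a cholecystectomy, not a validated protocol.

PHASE_ORDER = {
    "cholecystectomy": [
        ("trocar_insertion", "abdominal_wall"),
        ("calot_dissection", "hepatoduodenal_ligament"),
        ("gallbladder_detachment", "liver_bed"),
        ("extraction", "abdominal_wall"),
    ]
}

def recognize_phase(surgery_type, camera_region, last_phase_index):
    """Return the first phase at or after the last known phase whose
    expected region matches where the camera is now."""
    order = PHASE_ORDER[surgery_type]
    for i in range(last_phase_index, len(order)):
        phase, region = order[i]
        if region == camera_region:
            return i, phase
    return last_phase_index, order[last_phase_index][0]

if __name__ == "__main__":
    print(recognize_phase("cholecystectomy", "liver_bed", 1))
    # -> (2, 'gallbladder_detachment')
```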
In another embodiment, an event recognition model is included that recognizes events occurring in the surgical image, where an event is a non-ideal situation during surgery, including bleeding. The event recognition model (for example, a bleeding recognition model) may include a bleeding presence recognition module and a bleeding amount calculation module.

The bleeding presence recognition module recognizes whether a bleeding area exists in the surgical image based on deep learning, and can recognize the position of the bleeding within an image frame. By learning from a plurality of images containing bleeding, the module can recognize whether a bleeding area is included in a new image.

In one embodiment, the bleeding presence recognition module converts each pixel in the surgical image to a specific value based on a feature map containing feature information, and specifies the bleeding area based on the specific value of each pixel. Each pixel can be converted to a specific value based on a predetermined weight, depending on whether it corresponds to bleeding features according to the feature map (that is, depending on how much it contributed to the bleeding features). For example, the module can apply Grad-CAM (Gradient-weighted Class Activation Mapping), which back-traces the learning result of a CNN that recognized the bleeding area in the surgical image, to convert each pixel of the image into a specific value. Based on the feature map, the module assigns high values (e.g., high weights) to pixels recognized as belonging to the bleeding area and low values (e.g., low weights) to pixels recognized as not belonging to it. Through these converted pixel values, the bleeding area can be made more prominent, and the area can be segmented to estimate its location. The module can also apply Grad-CAM to generate a heat map for each pixel based on the feature map, converting each pixel to a probability value, and then specify the bleeding area based on those probability values; for example, pixel regions with high probability values can be judged to be bleeding areas.
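The Grad-CAM based localization described above can be sketched as follows with PyTorch on a toy, untrained CNN: the last convolutional feature map is weighted by the gradient of the bleeding-class score and thresholded into a crude segmentation mask. Because the weights are random, the output only demonstrates the mechanics, not a working bleeding detector.

```python
# Minimal Grad-CAM sketch over a toy CNN, mirroring the localization idea
# above: weight the last conv feature map by the gradient of the
# "bleeding" score, then threshold the heat map to segment a region.
# The network is untrained, so this only demonstrates the mechanics.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.head = nn.Linear(8, 2)   # classes: [no_bleeding, bleeding]

    def forward(self, x):
        fmap = F.relu(self.conv(x))              # (B, 8, H, W)
        pooled = fmap.mean(dim=(2, 3))           # global average pooling
        return self.head(pooled), fmap

def grad_cam(model, image, target_class=1):
    logits, fmap = model(image)
    fmap.retain_grad()                            # keep grad of non-leaf
    logits[0, target_class].backward()            # d(score)/d(feature map)
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # channel weights
    cam = F.relu((weights * fmap).sum(dim=1)).detach()[0]
    return cam / (cam.max() + 1e-8)               # normalize to [0, 1]

if __name__ == "__main__":
    model = TinyCNN()
    frame = torch.rand(1, 3, 32, 32)              # stand-in surgical frame
    heat = grad_cam(model, frame)
    mask = heat > 0.5                             # crude segmentation
    print("suspected bleeding pixels:", int(mask.sum()))
```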
The bleeding amount calculation module calculates the amount of bleeding in the bleeding area based on its location. In one embodiment, the module can calculate the bleeding amount using the pixel information of the bleeding area in the surgical image; for example, it may use the number of pixels corresponding to the bleeding area and their color information (e.g., RGB values).

In another embodiment, when the surgical image is a stereoscopic image, the bleeding amount calculation module acquires depth information of the bleeding area based on the depth map of the surgical image, estimates the volume corresponding to the bleeding area from the acquired depth information, and calculates the bleeding amount. Since a stereoscopic surgical image carries three-dimensional depth information, the volume of the bleeding area in three-dimensional space can be determined. For example, the module acquires pixel information of the bleeding area (e.g., the number and positions of pixels), calculates the depth values of the depth map corresponding to that pixel information, and then calculates the bleeding amount by determining the volume of the bleeding area from the calculated depth values.

In yet another embodiment, when gauze appears in the surgical image, the bleeding amount calculation module can use the gauze information in calculating the amount of bleeding in the bleeding area. For example, the number of gauze pads in the surgical image and their color information (e.g., RGB values) can be reflected in calculating the amount of bleeding that occurred in the bleeding area.
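A minimal sketch of the two bleeding-amount estimates described above: an area-based estimate from the pixel count of the bleeding mask, and a volume-based estimate that integrates the depth map over the mask. The calibration constants (mm per pixel, assumed pooling depth, ml per mm³) are illustrative assumptions, not clinically validated values.

```python
# Sketch of the bleeding-amount estimates above: an area-based estimate
# from pixel counts and a volume-based estimate from a depth map. The
# calibration constants are illustrative assumptions only.

import numpy as np

MM_PER_PIXEL = 0.5          # assumed image scale at the bleeding site
ML_PER_MM3 = 1e-3           # 1 mm^3 = 0.001 ml

def bleeding_amount_area(mask, assumed_depth_mm=2.0):
    """Estimate from pixel count only: area x an assumed pooling depth."""
    area_mm2 = mask.sum() * MM_PER_PIXEL ** 2
    return area_mm2 * assumed_depth_mm * ML_PER_MM3

def bleeding_amount_depth(mask, depth_map):
    """Estimate from a stereoscopic depth map: integrate how far the
    blood surface sits below the surrounding tissue baseline."""
    baseline = np.median(depth_map[~mask])                 # healthy tissue
    pooled = np.clip(depth_map[mask] - baseline, 0, None)  # mm below it
    return pooled.sum() * MM_PER_PIXEL ** 2 * ML_PER_MM3

if __name__ == "__main__":
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:30, 20:40] = True                       # 200 bleeding pixels
    depth = np.full((64, 64), 100.0)
    depth[mask] = 103.0                             # pool 3 mm deep
    print(round(bleeding_amount_area(mask), 3), "ml (area-based)")
    print(round(bleeding_amount_depth(mask, depth), 3), "ml (depth-based)")
```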
In another embodiment, an operation time calculation/prediction model is further included. It may include an operation time prediction module that calculates the expected time to completion of the surgical phase being performed while the surgery is in progress, and an operation time calculation module that calculates the time each phase took after the surgery is completed.

For example, the operation time calculation module extracts the time corresponding to a specific surgical phase after the surgery is completed, and calculates the total time taken by that phase.

Also, for example, the operation time prediction module calculates the expected remaining time of a given surgical phase based on the surgical image of that phase up to a specific point during the surgery (for example, the prediction reference point) and the time elapsed up to that point. Specifically, the module acquires predetermined surgical images containing the surgical actions of a specific phase, generates learning data using those images and the operation times obtained from them, and performs learning based on the learning data to predict the operation time of the specific phase.
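A toy sketch of the prediction module's idea: fit a per-phase regressor from historical recordings to total phase duration, then subtract the elapsed time at the prediction reference point. Here the training data is synthetic and the single 'progress' feature stands in for whatever the model would actually extract from the surgical video.

```python
# Sketch of the remaining-time prediction idea: fit a per-phase regressor
# from historical (feature, total duration) pairs, then predict the
# remaining time mid-phase. Features and data are toy stand-ins.

import numpy as np

rng = np.random.default_rng(0)

# Historical recordings of one phase: a "progress" feature (e.g., the
# fraction of expected dissection completed) vs. total phase duration.
progress = rng.uniform(0.1, 0.9, size=(50, 1))             # toy feature
total_min = 30.0 + 10.0 * (1.0 - progress[:, 0]) + rng.normal(0, 1, 50)

X = np.hstack([progress, np.ones((50, 1))])                # add bias term
coef, *_ = np.linalg.lstsq(X, total_min, rcond=None)       # least squares

def predict_remaining(progress_now, elapsed_min):
    total_pred = coef[0] * progress_now + coef[1]
    return max(total_pred - elapsed_min, 0.0)

if __name__ == "__main__":
    print(f"{predict_remaining(progress_now=0.4, elapsed_min=15.0):.1f} "
          "minutes of this phase expected to remain")
```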
The surgical analysis layer is a layer that calculates analysis results based on the surgical elements obtained from one or more surgical element recognition models. The surgical analysis layer is formed as an upper layer of the surgical element recognition layer. It includes one or more surgical analysis models that obtain analysis results based on a surgical recognition result combination, i.e., a combination of the results provided by the one or more surgical element recognition models.
In one embodiment, the surgical analysis layer includes a blood loss recognition model that calculates the degree of blood loss based on a surgical recognition result combination comprising the surgical organ type, the surgical action, and the intraoperative events recognized by the surgical element recognition layer. To this end, the blood loss recognition model may be connected to at least one of the event recognition model, the organ recognition model, and the surgical action recognition model included in the surgical element recognition layer, and may receive the surgical recognition result combination as input. For example, when used during surgery, the blood loss recognition model can calculate the blood loss level when an event occurs and provide it to the medical staff.
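As a minimal illustration of an analysis model consuming a surgical recognition result combination, the element names, thresholds, and rule below are invented for the sketch, not the disclosed model:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RecognitionCombination:
        organ: str                 # from the organ recognition model
        action: str                # from the surgical action recognition model
        event: Optional[str]       # from the event recognition model, e.g. "bleeding"
        event_area_px: int = 0     # size of the event region, if any

    def blood_loss_level(rec: RecognitionCombination) -> str:
        # Toy rule combining the recognized elements into a blood-loss grade.
        if rec.event != "bleeding":
            return "none"
        risky = rec.organ in {"liver", "spleen"} or rec.action == "cutting"
        if rec.event_area_px > 50_000 or risky:
            return "high"
        return "moderate" if rec.event_area_px > 10_000 else "low"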
In another embodiment, the surgical analysis layer includes an organ damage detection model that calculates the degree of organ damage based on a surgical recognition result combination comprising the surgical phase and the surgery time recognized by the surgical element recognition layer. The organ damage detection model may receive a specific surgical phase, the actions performed with a specific surgical tool, the surgery time, and so on from the respective surgical element recognition models, and calculate the organ damage level.
The blood loss recognition model and the organ damage detection model are used to calculate analysis results for each surgical procedure, either during or after surgery.
In another embodiment, when the surgical outcome is analyzed after the surgery is completed, the surgical analysis layer further includes a surgical tool misuse detection model that detects the use of an incorrect surgical tool based on the surgical tool recognized by the surgical element recognition layer, the organ on which an action is performed with that tool, the detailed surgical phase within the overall surgery in which the tool is used, and the events that occurred in that detailed phase.
In another embodiment, the surgical analysis layer further includes an optimal surgery plan calculation model that calculates an optimal surgery plan based on the surgical recognition result combination obtained from the surgical element recognition layer.
In one embodiment, the optimal surgery plan calculation model analyzes previously acquired surgical image data in real time and thereby calculates and provides the optimal surgery plan to be carried out next.
In another embodiment, the optimal surgery plan calculation model calculates and provides a surgery plan for the actual patient based on the surgical recognition result combination for a virtual surgery performed on the patient's virtual body model before surgery.
In another embodiment, the optimal surgery plan calculation model may include a function for calculating the optimal entry position for minimally invasive surgery. Specifically, when the medical staff performs virtual surgery within the virtual body model, the virtual surgery system generates only the working part of the surgical tool, so that the surgical simulation (i.e., the virtual surgery) is performed without interference from the arm portion of the tool. That is, the virtual surgery is performed according to the medical staff's surgical motion patterns, free of constraints imposed by the patient's internal anatomy (e.g., organ placement, vascular condition) or by the characteristics of the surgical tools. After the virtual surgery, the optimal surgery plan calculation model uses its results, together with the patient's internal anatomy and the tool characteristics, to calculate the optimal tool entry positions for the actual surgery. That is, by continuously calculating the candidate regions of arm entry positions from which the recorded motions of the tool's working part are feasible, it extracts the region where those candidates intersect.
As an example, when tools A, B, and C are used in robotic or laparoscopic surgery, the computer may exclude from the feasible entry range any body surface region from which the working part of tool A cannot reach some point required by the surgical motion (i.e., regions where the tool's length limit leaves part of the motion unreachable). The computer may also exclude body surface regions from which tool A, while entering and performing the surgical motion, would collide with organs or tissue. Furthermore, if, after entering from a given body surface point within the feasible range, the tool cannot realize the surgical motion required at a particular location, the computer may exclude that surface point from the feasible range. In this way, the computer can calculate the feasible entry range for tool A. The computer can perform the feasible-range calculation individually for each surgical tool (e.g., tools B and C) to calculate each tool's optimal entry position. Also, as described above, the computer may perform the feasible-range calculation individually for each function of a surgical tool, thereby calculating the optimal entry position at which each function can be applied.
As another example, when several surgical tools must enter through a single optimal entry position, the computer may extract the optimal entry range for each tool and then determine the region where the multiple ranges overlap as the optimal entry position. For example, if tool A is replaced with tool D during the surgery, the computer may calculate the overlap between the feasible entry range of tool A and that of tool D as the candidate region for the optimal entry position. Because the number of positions through which surgical tools may enter is limited (for example, to three), the same entry position must be reused when switching from tool A to tool D, so the computer can determine a position satisfying both tools' feasible entry ranges as the final optimal entry position.
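Treating candidate entry positions as sets of body surface points, the exclusion and intersection steps in the two preceding examples can be sketched as follows; the constraint predicates are placeholders standing in for the reach, collision, and motion-feasibility checks described above.

    from typing import Callable, Iterable, Set, Tuple

    SurfacePoint = Tuple[float, float, float]

    def feasible_entries(surface: Iterable[SurfacePoint],
                         constraints: Iterable[Callable[[SurfacePoint], bool]]) -> Set[SurfacePoint]:
        # Keep only the surface points that satisfy every constraint.
        pts = set(surface)
        for ok in constraints:
            pts = {p for p in pts if ok(p)}
        return pts

    def shared_entry(range_a: Set[SurfacePoint], range_d: Set[SurfacePoint]) -> Set[SurfacePoint]:
        # Tools that must reuse one keyhole take the intersection of their ranges.
        return range_a & range_d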
As yet another example, when the same surgical tool is used several times within the virtual body model and its range of motion is wide, it may be difficult to perform all of its surgical motions from a single entry position; the computer may therefore divide the tool's range of use (i.e., its range of motion) into several groups, each reachable from one of a plurality of entry positions on the body surface. For example, when laparoscopic or robotic surgery is performed with three entry positions created on the body surface, the computer divides the tool's range of motion into at most three groups. In doing so, the computer divides the range of motion based on reachability from the feasible entry ranges already selected for the other tools. Also, when a tool with a wide range of motion (the first tool) is used simultaneously with another tool (the second tool) for which a mandatory optimal entry position has been determined, the computer may set the first tool's range of motion, for the period it is used together with the second tool, to the range that cannot be reached through the second tool's optimal entry position (i.e., the keyhole through which the second tool enters). In addition, when the first tool is used continuously despite changes of the other tools, the computer may assign its motions to a group that enters through the same entry position, considering the surgeon's convenience and the time required for the surgery.
Because the optimal entry positions are determined by reflecting the results of a surgical simulation that the medical staff performed without having to consider the tools' entry positions or the snagging of the tool arms on organs, the medical staff can perform the surgical motions that are most comfortable for them.
In another embodiment, the optimal surgery plan calculation model may calculate and suggest the types of surgical tools appropriate for the surgery, based on the results of the virtual surgery the medical staff performed using only the working parts of the tools, with the arm portions removed.
In one embodiment, the optimal surgery plan calculation model may be trained in various ways. For example, it may acquire surgical videos performed by multiple medical teams and calculate the optimal surgery for each surgery type through reinforcement learning.
Also, for example, the optimal surgery plan calculation model may generate an optimal surgery plan without using actual surgery data, by repeatedly performing virtual surgery with virtually generated surgical procedures and evaluating whether each is an optimized surgery.
Specifically, the optimal surgery plan calculation model generates a plurality of genes corresponding to a surgical procedure consisting of at least one detailed surgical action, performs virtual surgery for each gene, and evaluates whether it represents an optimized surgery. Then, based on the evaluation results, the model selects at least one of the genes, applies a genetic algorithm to generate new genes, and derives the optimal surgical procedure from the new genes.
The optimal surgery plan calculation model may perform virtual surgery on each new gene (child gene) to calculate its fitness. The model may also determine whether a child gene's fitness meets preset conditions, select the child genes that do, and apply genetic operators such as crossover and mutation. By applying the genetic algorithm to the child genes in turn, further generations of child genes can be produced. That is, based on the fitness results used to evaluate whether a surgery is optimal, the computer can repeatedly generate child genes from parent genes and finally obtain, among the generated child genes, the gene containing the optimal surgical procedure. For example, the gene with the highest fitness among the child genes can be selected and output as the optimized surgical procedure.
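A compact genetic-algorithm loop matching this description might look as follows, assuming a gene is a sequence of detailed surgical actions and the fitness function wraps the virtual-surgery evaluation; both are stand-ins, not the disclosed encoding.

    import random
    from typing import Callable, List, Sequence

    Gene = List[str]  # a surgical procedure as a sequence of detailed actions

    def evolve(population: List[Gene],
               fitness: Callable[[Gene], float],   # e.g. score of a virtual-surgery run
               actions: Sequence[str],
               generations: int = 50,
               mutation_rate: float = 0.1) -> Gene:
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[: max(2, len(ranked) // 2)]        # selection
            children: List[Gene] = []
            while len(children) < len(population):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, max(2, min(len(a), len(b))))  # crossover point
                child = a[:cut] + b[cut:]
                if random.random() < mutation_rate:              # mutation
                    child[random.randrange(len(child))] = random.choice(actions)
                children.append(child)
            population = children
        return max(population, key=fitness)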
Each surgical analysis model within the surgical analysis layer may receive the surgical recognition result combination in various forms.
In one embodiment, a surgical analysis model may receive the combination as code data formed by encoding and concatenating the results calculated by the surgical element recognition models. That is, each surgical analysis model may obtain code data in which the codes for the surgical recognition results at each time point are concatenated, and then extract and input the code segments needed for its own analysis.
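For instance, the code data might look like the following; the field set and separator are an invented encoding used only to make the idea concrete.

    # One time point -> one fixed-order code string,
    # e.g. "phase:P03|organ:liver|action:cut|event:bleed"
    FIELDS = ("phase", "organ", "action", "event")

    def encode(results: dict) -> str:
        return "|".join(f"{f}:{results.get(f, '-')}" for f in FIELDS)

    def extract(code: str, wanted: set) -> dict:
        # An analysis model pulls only the fields it needs from the code data.
        parts = dict(p.split(":", 1) for p in code.split("|"))
        return {k: v for k, v in parts.items() if k in wanted}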
In another embodiment, for each of a plurality of image frames, a surgical analysis model may receive relational representation information indicating the relationships among the surgical elements contained in the surgical recognition information calculated by the plurality of surgical element recognition models. The surgical recognition information refers to the surgical element information recognized from an image frame, and may include at least one surgical element such as the surgical tool, the surgical action, the body part, the presence of bleeding, the surgical phase, the surgery time (e.g., remaining surgery time, surgery duration), and camera information (e.g., the camera's position, angle, direction, movement). For example, the relational representation may take the form of a matrix whose rows and columns are the surgical elements and whose entries are the values of the correlations between those elements. As an example, the surgical analysis model may receive the relational representation after it is calculated for each frame of the surgical video or for a specific unit segment of the video.
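A minimal sketch of such a relational representation, with an assumed pairwise scoring function standing in for the correlation measure:

    import numpy as np

    ELEMENTS = ["tool", "action", "body_part", "bleeding", "phase", "camera"]

    def relation_matrix(frame_info: dict, relate) -> np.ndarray:
        # Square matrix; entry (i, j) scores the relation between elements i and j.
        # frame_info maps element names to their recognized values for one frame;
        # relate(a, b) -> float is a placeholder for the correlation measure.
        n = len(ELEMENTS)
        m = np.zeros((n, n))
        for i, a in enumerate(ELEMENTS):
            for j, b in enumerate(ELEMENTS):
                m[i, j] = relate(frame_info.get(a), frame_info.get(b))
        return m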
In another embodiment, the apparatus further includes a surgical solution providing layer. The surgical solution providing layer includes one or more surgical solution models that provide organized surgical outputs based on one or more analysis results obtained from the surgical analysis layer. In other words, the surgical solution providing layer includes one or more surgical solution models in the form of services that the medical staff can use directly.
The surgical solution providing layer is formed as a layer above the surgical analysis layer and receives the results calculated by each surgical analysis model within the surgical analysis layer.
In one embodiment, the surgical solution models include a surgery evaluation model that calculates an evaluation of the surgery based on one or more analysis results obtained from the surgical analysis layer. For example, the surgery evaluation model may receive the blood loss result, the output of the organ damage model, and a comparison between the optimal surgical process and the actual surgery, and calculate an evaluation of the surgery performed by the medical staff.
In another embodiment, the surgical solution models further include a chart generation model that generates a chart for the surgery based on one or more analysis results obtained from the surgical analysis layer. The chart generation model can automatically generate records of the surgical procedure, its results, and so on. For example, after the surgery is completed, the chart generation model may combine the surgical elements obtained during the procedure, input them into each surgical analysis model, receive the analysis results from the plurality of surgical analysis models, and automatically generate (fill out) a chart (for example, a surgical record sheet).
In another embodiment, the layer further includes a surgery complexity calculation model that calculates the complexity or difficulty of the surgery based on one or more analysis results obtained from the surgical analysis layer. For example, using the surgery complexity calculation model, the performance of each member of the medical staff can be evaluated in consideration of the difficulty of the surgeries they performed.
Also, for example, the surgery complexity calculated by the surgery complexity calculation model may be reflected when evaluating the surgical outcome. That is, within the surgical solution providing layer, the surgery evaluation model can receive the complexity result calculated by the surgery complexity calculation model and use it in the surgery evaluation.
In another embodiment, the layer includes a surgery Q&A model that produces answers to questions about the surgery by learning from one or more analysis results obtained from the surgical analysis layer.
A detailed description follows for the case where the surgical image analysis apparatus according to an embodiment of the present invention is composed of a surgical element recognition layer, a surgical analysis layer, and a surgical solution layer.
In one embodiment, the lowest layer may be a Surgical Element Recognition Layer, which recognizes surgical elements, the smallest units that must be medically recognized in a medical surgical procedure. The middle layer may be a Surgical Module Layer, which grasps the medical meaning of the surgical elements and determines the conditions needed to make a medical diagnosis. The top layer may be a Surgical Solution Layer, which recognizes medical problems that may occur throughout the medical surgical procedure and provides a solution for each problem.
The surgical element recognition layer can recognize surgical elements based on the various surgical videos captured during the medical surgical procedure and the various surgery-related tools used in it. For example, the surgical elements may include the surgical phase, the body part (e.g., organ), events, surgery time, surgical instruments, the camera, surgical actions, and other elements.
In one embodiment, the surgical element recognition layer can recognize at least one surgical element based on the surgical video acquired during the medical surgical procedure. In doing so, it may recognize surgical elements not only from the surgical video itself but also using the results of training on surgical videos. By applying the embodiments of the present invention described above, the surgical elements, which are the smallest units that must be medically recognized in the procedure, can be recognized effectively.
The surgical element recognition layer may recognize surgical elements individually. For example, when recognizing an organ from the surgical video it may recognize only that organ, and when recognizing a surgical tool it may recognize only that tool.
The surgical element recognition layer may also use other surgical elements to recognize a given surgical element. That is, the layer may establish primitive-level relations representing the relationships between the surgical elements and recognize each element using those relations. For example, a primitive-level relation may list the additional surgical elements needed to recognize the element in question and specify the relationships between those elements (e.g., state changes, position changes, shape changes, color changes, arrangement relationships). For example, when recognizing an event (e.g., an event indicating whether bleeding has occurred) from the surgical video, additional surgical elements such as the organ, the surgical tool, and the surgical action may first be recognized based on the event's primitive-level relation, and the event may then be recognized through those additional elements.
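One way to picture such a primitive-level relation is as a table keyed by the target element; the data shape below is illustrative, not the disclosed format.

    # For each target element: which other elements are needed, and which
    # relation types between them matter for recognizing the target.
    PRIMITIVE_RELATIONS = {
        "bleeding_event": {
            "requires": ["organ", "tool", "action"],
            "relations": ["color_change", "position_change"],
        },
    }

    def elements_needed(target: str) -> list:
        return PRIMITIVE_RELATIONS.get(target, {}).get("requires", [])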
The surgical analysis layer can grasp a specific medical meaning or make a specific medical judgment based on each surgical element recognized through the surgical element recognition layer. For example, using at least one surgical element, it can perform Blood Loss Estimation, Anatomy Injury Detection (e.g., organ damage detection), Instrument Misuse Detection, Optimal Planning Suggestion, and similar medical interpretations or judgments of the medical condition.
That is, the surgical analysis layer can organize individual surgical modules according to the information needed in the medical surgical procedure or the medical problem to be solved (e.g., blood loss estimation, internal injury detection, instrument misuse detection, optimal procedure suggestion). For example, the surgical analysis layer may include a blood loss estimation module for gauging the amount of bleeding of the surgical patient during the procedure, an internal injury detection module for judging how much damage has occurred to a specific organ during surgery, and an instrument misuse detection module for judging whether surgical tools were misused during surgery. Each surgical module can selectively use at least one of the surgical elements recognized by the lower surgical element recognition layer.
In one embodiment, when selectively using at least one of the surgical elements, each surgical module may use a module-level relation. A module-level relation designates the surgical elements that must be recognized for the corresponding surgical module. For example, in a module-level relation, the surgical elements to be recognized may be determined based on the degree to which a specific meaning can be grasped from the surgical video (e.g., the representative recognition value, SAM, described above).
The surgical solution layer can finally solve higher-level medical problems using the medical meanings or judgments obtained through the surgical analysis layer. For example, using at least one surgical module, the surgical solution layer can address medical problems such as Chart Generation, Complication Estimation, Surgical Performance Assessment, and a Surgical Q&A System (Q&A using a surgical bot).
Here, the surgical solution layer can organize a solution module or system for each medical problem (e.g., chart generation, complication estimation, surgical performance assessment, surgical Q&A system). For example, the surgical solution layer may include a chart generation solution module (or system) for recording all information generated during the procedure or the patient's condition, and a complication estimation solution module (or system) for predicting complications that may arise after the procedure. Each solution module (or system) can selectively use the medical meanings or judgments derived from the lower surgical analysis layer.
In one embodiment, when selectively using at least one of the surgical modules of the surgical analysis layer, each solution module (or system) of the surgical solution layer may use a solution-level relation. A solution-level relation designates the surgical modules that the corresponding solution module needs to use. For example, the medical staff may use the complication estimation solution module to judge a patient's complications. In this case, the complication estimation solution module can determine, from its solution-level relation, that it needs the blood loss estimation module and the internal injury detection module from the lower level; it can then receive the necessary information from those lower-layer modules, judge the patient's complications, and provide the result to the medical staff.
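This layered lookup can be sketched as a registry of dependencies; the module names and the registry shape are illustrative assumptions.

    SOLUTION_RELATIONS = {
        "complication_estimation": ["blood_loss_estimation", "injury_detection"],
        "chart_generation": ["blood_loss_estimation", "injury_detection",
                             "instrument_misuse_detection"],
    }

    def run_solution(name: str, modules: dict) -> dict:
        # modules maps a module name to a zero-argument callable returning its result;
        # the solution pulls results only from the analysis modules it depends on.
        needed = SOLUTION_RELATIONS[name]
        return {m: modules[m]() for m in needed}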
As described above, the medical solution model according to an embodiment of the present invention is organized so that each module, from the lowest layer to the highest, can operate according to the medical problem at hand. Therefore, even when a new medical problem arises, a solution can be provided efficiently by composing a new module in each layer or by composing a new layer.
In addition, the medical solution model according to an embodiment of the present invention can be applied to various medical surgeries and, in particular, can be used effectively for minimally invasive surgery using surgical robots, laparoscopes, endoscopes, and the like.
FIG. 2 is a flowchart of a computer-implemented surgical analysis method according to an embodiment of the present invention.
Referring to FIG. 2, a computer-implemented surgical analysis method according to an embodiment of the present invention includes: a step in which a computer acquires a surgical image (S200); a step in which the computer inputs the surgical image into one or more surgical element recognition models (S400); a step in which the computer obtains the surgical recognition result combination calculated by each surgical element recognition model (S600; surgical recognition result combination acquisition step); and a step in which the computer inputs the surgical recognition result combination into one or more surgical analysis models to obtain one or more analysis results (S800; surgical analysis result acquisition step). A detailed description of each step follows.
The computer acquires the surgical image (S200). The computer may acquire the surgical image in real time while the medical staff is performing the surgery. The computer may also acquire the entire stored surgical video after the medical staff has completed the surgery.
For example, when the computer uses a surgical image acquired in real time, it uses the image to provide the information the medical staff needs during surgery. Also, for example, when the computer uses the entire video after the surgery is completed, it uses the video for post-hoc analysis of the surgery.
The computer inputs the surgical image into one or more surgical element recognition models (S400). That is, the computer inputs the surgical image into one or more surgical element recognition models in order to recognize the surgical elements contained in the image.
The one or more surgical element recognition models are included in the surgical element recognition layer, which forms the lowest layer within the surgical analysis system. When the surgical element recognition layer includes a plurality of recognition models, the models are built in parallel within the layer and may have connection relationships with one another. That is, while surgical element recognition model A is running, it may receive and use the surgical element (i.e., recognition result) calculated by surgical element recognition model B, which is formed in parallel. A detailed description of each surgical element recognition model, given earlier, is omitted here.
The computer obtains the surgical recognition result combination calculated by each surgical element recognition model (S600; surgical recognition result combination acquisition step). That is, the computer generates data combining the recognition results calculated by the respective surgical element recognition models.
The computer inputs the surgical recognition result combination into one or more surgical analysis models to obtain one or more analysis results (S800; surgical analysis result acquisition step). The one or more surgical analysis models are included in the surgical analysis layer above the surgical element recognition layer and are selected according to the user's request. That is, each surgical analysis model within the surgical analysis layer has connection relationships, established based on the data it needs for its analysis, with one or more recognition models in the surgical element recognition layer, so that the recognition result combination needed for the analysis can be input into each analysis model. A detailed description of each surgical analysis model, given earlier, is omitted here.
In another embodiment, as shown in FIG. 3, the method further includes a step in which the computer inputs one or more analysis results into a specific surgical solution model and provides surgical outputs organized on the basis of the analysis results (S1000). The surgical solution model is included in the surgical solution providing layer, which is created within the surgical analysis system as a layer above the surgical analysis layer. Each surgical solution model, being connected to one or more surgical analysis models, may receive one or more surgical analysis results as input. A detailed description of each surgical solution model, given earlier, is omitted here.
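Putting steps S200 through S1000 together, the overall flow can be sketched as below; every model object is a placeholder standing in for the trained models described above, not a definitive implementation.

    def analyze_surgery(video_frames, recognition_models, analysis_models, solution_model):
        # S200: the surgical image (a live stream or stored video) has been acquired.
        # S400/S600: run every recognition model and combine the results per frame.
        combinations = []
        for frame in video_frames:
            combo = {name: model(frame) for name, model in recognition_models.items()}
            combinations.append(combo)
        # S800: feed the recognition result combinations to the selected analysis models.
        analyses = {name: model(combinations) for name, model in analysis_models.items()}
        # S1000: a solution model organizes the analysis results into a surgical output.
        return solution_model(analyses)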
The computer-implemented surgical analysis method according to an embodiment of the present invention described above may be implemented as a program (or application) and stored in a medium so as to be executed in combination with a computer, which is hardware.
In order for the computer to read the program and execute the methods implemented as a program, the program may include code written in a computer language, such as C, C++, JAVA, or machine language, that the computer's processor (CPU) can read through the computer's device interface. Such code may include functional code related to the functions defining the features needed to execute the methods, and execution-procedure control code needed for the computer's processor to execute those features according to a predetermined procedure. The code may further include memory-reference code indicating at which location (address) of the computer's internal or external memory the additional information or media needed for the processor to execute the features should be referenced. Furthermore, when the processor needs to communicate with a remote computer or server to execute the features, the code may further include communication-related code indicating how to communicate with the remote computer or server using the computer's communication module and what information or media should be sent and received during communication.
The storage medium is not a medium that stores data for a brief moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and is readable by a device. Specific examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. That is, the program may be stored in various recording media on various servers the computer can access, or in various recording media on the user's computer. The media may also be distributed over network-connected computer systems so that computer-readable code is stored in a distributed manner.
While embodiments of the present invention have been described above with reference to the accompanying drawings, those of ordinary skill in the art to which the present invention pertains will understand that the present invention may be practiced in other specific forms without changing its technical idea or essential features. The embodiments described above should therefore be understood as illustrative in all respects and not restrictive.

Claims (13)

  1. A computer-implemented surgical analysis method, comprising:
    acquiring, by a computer, a surgical image;
    inputting, by the computer, the surgical image into one or more surgical element recognition models;
    obtaining, by the computer, a surgical recognition result combination calculated by each of the surgical element recognition models, wherein the one or more surgical element recognition models are included in a surgical element recognition layer, which is the lowest level within a surgical analysis system; and
    inputting, by the computer, the surgical recognition result combination into one or more surgical analysis models to obtain one or more analysis results, wherein the one or more surgical analysis models are included in a surgical analysis layer above the surgical element recognition layer and are selected according to a user's request.
  2. The method of claim 1,
    wherein each surgical analysis model in the surgical analysis layer has a connection relationship, established based on the data needed for its analysis, with one or more surgical element recognition models in the surgical element recognition layer.
  3. The method of claim 1,
    wherein the surgical element recognition models comprise:
    an organ recognition model that recognizes organs in the surgical image;
    a surgical tool recognition model that recognizes surgical tools in the surgical image and their movements; and
    an event recognition model that recognizes events occurring in the surgical image, the events being non-ideal intraoperative situations including bleeding.
  4. The method of claim 1,
    wherein the surgical analysis layer comprises:
    a blood loss recognition model that calculates a degree of blood loss based on a surgical recognition result combination comprising the surgical organ type, the surgical action, and the intraoperative events recognized by the surgical element recognition layer; and
    an organ damage detection model that calculates a degree of organ damage based on a surgical recognition result combination comprising the surgical phase and the surgery time recognized by the surgical element recognition layer,
    wherein the blood loss recognition model and the organ damage detection model
    are used to calculate analysis results for each surgical procedure during or after surgery.
  5. The method of claim 4,
    wherein, when the surgical outcome is analyzed after the surgery is completed,
    the surgical analysis layer further comprises:
    a surgical tool misuse detection model that detects the use of an incorrect surgical tool based on the surgical tool recognized by the surgical element recognition layer, the organ on which an action is performed with the tool, the detailed surgical phase within the overall surgery performed with the tool, and the events occurring in the detailed surgical phase.
  6. The method of claim 1,
    wherein the surgical analysis layer
    further comprises an optimal surgery plan calculation model that calculates an optimal surgery plan based on the surgical recognition result combination obtained from the surgical element recognition layer,
    wherein the optimal surgery plan calculation model analyzes previously acquired surgical image data in real time and thereby calculates and provides the optimal surgery plan to be performed next.
  7. The method of claim 1,
    wherein the surgical analysis layer
    further comprises an optimal surgery plan calculation model that calculates an optimal surgery plan based on the surgical recognition result combination obtained from the surgical element recognition layer,
    wherein the optimal surgery plan calculation model calculates and provides a surgery plan for the actual patient based on a surgical recognition result combination for a virtual surgery performed on the patient's virtual body model before surgery.
  8. The method of claim 1,
    further comprising: inputting, by the computer, one or more analysis results into a specific surgical solution model to provide surgical outputs organized based on the analysis results.
  9. The method of claim 8,
    wherein the surgical solution model comprises:
    a surgery evaluation model that calculates an evaluation of the surgery based on one or more analysis results obtained from the surgical analysis layer;
    a chart generation model that generates a chart for the surgery based on one or more analysis results obtained from the surgical analysis layer; and
    a surgery complexity calculation model that calculates the complexity or difficulty of the surgery based on one or more analysis results obtained from the surgical analysis layer.
  10. The method of claim 8,
    comprising a surgery Q&A model that produces answers to questions about the surgery by learning from one or more analysis results obtained from the surgical analysis layer,
    wherein the step of providing the surgical outputs
    calculates and provides an answer as the medical staff inputs a question about a specific surgery.
  11. A computer-implemented surgical analysis program, stored in a medium and combined with a computer, which is hardware, to execute the method of any one of claims 1 to 10.
  12. A surgical image analysis apparatus, comprising:
    a surgical element recognition layer including one or more surgical element recognition models that calculate surgical recognition results as a surgical image is input, the one or more surgical element recognition models being included at the lowest level within a surgical analysis system; and
    a surgical analysis layer, formed as a layer above the surgical element recognition layer, including one or more surgical analysis models that obtain analysis results based on a surgical recognition result combination, which is a combination of the results provided by the one or more surgical element recognition models.
  13. The apparatus of claim 12,
    further comprising a surgical solution providing layer including one or more surgical solution models that provide surgical outputs organized based on one or more analysis results obtained from the surgical analysis layer.