WO2020159276A1 - Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical image - Google Patents
- Publication number: WO2020159276A1 (application PCT/KR2020/001475)
- Authority: WIPO (PCT)
Classifications
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- G06N3/00—Computing arrangements based on biological models (G06N3/02—Neural networks; G06N3/08—Learning methods)
- G16H20/40—ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
Definitions
- The present invention relates to a surgical image analysis and recognition method.
- Open surgery refers to surgery in which the medical staff directly sees and touches the area to be treated, whereas minimally invasive surgery, also called keyhole surgery, is performed through small incisions; laparoscopic surgery and robotic surgery are typical examples.
- Laparoscopic surgery is performed by making a small hole in the required part without opening the body, inserting surgical instruments and a laparoscope equipped with a special camera, and performing microsurgery through a video monitor using a laser or special instruments.
- Robotic surgery performs minimally invasive surgery using a surgical robot.
- Radiation therapy refers to treatment performed with radiation or a laser beam from outside the body.
- An endoscopic procedure refers to a procedure performed by inserting an endoscope into the digestive tract or the like and inserting a tool through a passage provided in the endoscope.
- Deep learning has been widely used in the analysis of surgical images in recent years. Deep learning is defined as a set of machine-learning algorithms that attempt high-level abstraction (abstracting key content or functions from large amounts of data or complex data) through a combination of several nonlinear transformation methods. Broadly, deep learning can be seen as a field of machine learning that teaches computers a human way of thinking.
- The present invention provides a surgical analysis apparatus, a surgical image analysis and recognition system, a method, and a program capable of quickly obtaining accurate recognition results by recognizing each of a plurality of surgical elements in a surgical image through surgical element recognition models formed in parallel.
- The present invention can obtain, in full or selectively, the various analysis results required for surgical analysis through a plurality of surgical analysis models, each capable of calculating its result using the surgical element recognition results calculated by the surgical element recognition models.
- It is an object of the present invention to provide such a surgical analysis apparatus, surgical image analysis and recognition system, method, and program.
- The present invention also includes a plurality of surgical solution models that readily calculate the service results required by medical staff based on the surgical analysis results obtained through the plurality of surgical analysis models.
- A computer-aided surgical analysis method includes: a surgical image acquisition step, in which a computer acquires a surgical image; a step in which the computer inputs the surgical image into one or more surgical element recognition models; a surgical recognition result acquisition step, in which the computer acquires the combination of surgical recognition results calculated by each surgical element recognition model, the one or more surgical element recognition models being included in a surgical element recognition layer at the lowest level of the surgical analysis system; and a surgical analysis result acquisition step, in which the computer inputs the combination of surgical recognition results into one or more surgical analysis models to obtain one or more analysis results, the one or more surgical analysis models being included in a surgical analysis layer above the surgical element recognition layer and selected according to the analysis to be performed.
- Each surgical analysis model in the surgical analysis layer is characterized in that a connection relationship is established with one or more surgical element recognition models in the surgical element recognition layer, based on the data required for its analysis.
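As a rough sketch of the two-layer flow described above, the following Python example wires illustrative element recognition models (the lowest layer) into analysis models (the upper layer) whose connection relationships are declared as the recognition keys they require. All model names, keys, and outputs here are hypothetical placeholders, not the patented implementation.

```python
# Minimal sketch of the two-layer pipeline: element recognition models run
# in parallel on a frame, and each analysis model declares which recognition
# results it needs (its "connection relationship"). Placeholder logic only.

def organ_model(frame):
    return {"organ": "liver"}

def tool_model(frame):
    return {"tool": "grasper", "action": "grasp"}

def event_model(frame):
    return {"event": "bleeding"}

# Lowest layer: surgical element recognition models.
RECOGNITION_LAYER = [organ_model, tool_model, event_model]

# Upper layer: each analysis model lists the recognition keys it consumes.
ANALYSIS_LAYER = {
    "blood_loss": (["organ", "action", "event"],
                   lambda r: "high" if r["event"] == "bleeding" else "none"),
}

def analyze(frame):
    # 1) Combine the recognition results from every element model.
    combined = {}
    for model in RECOGNITION_LAYER:
        combined.update(model(frame))
    # 2) Feed the combination only into analysis models whose inputs exist.
    results = {}
    for name, (needs, fn) in ANALYSIS_LAYER.items():
        if all(k in combined for k in needs):
            results[name] = fn(combined)
    return combined, results

combined, results = analyze(frame=None)
print(results)  # {'blood_loss': 'high'}
```

Because the connection relationships are declared as data, a new analysis model can be added by registering one more entry, which matches the extensibility the document claims for the layered design.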
- The surgical element recognition models include: an organ recognition model that recognizes organs in the surgical image; a surgical tool recognition model that recognizes surgical tools and their movements in the surgical image; and an event recognition model that recognizes events occurring within the surgical image, an event being a non-ideal situation during surgery, including bleeding.
- The surgical analysis layer includes: a blood loss recognition model that calculates the degree of blood loss based on a combination of surgical recognition results including the type of organ operated on, the surgical operation, and events occurring during surgery, as recognized by the surgical element recognition layer; and an organ damage detection model that calculates the degree of organ damage based on a combination of surgical recognition results including the surgical step and the operation time recognized by the surgical element recognition layer. The blood loss recognition model and the organ damage detection model are used to calculate analysis results for each surgical procedure during or after surgery.
- The surgical analysis layer further includes a surgical operation misuse detection model, which detects the use of a wrong surgical instrument based on the surgical tool recognized by the surgical element recognition layer, the organ on which the surgical tool performs an operation, the detailed surgical operation within the entire operation being performed, and events generated during the detailed surgical operation.
- The surgical analysis layer further includes an optimal surgical plan calculation model that calculates an optimal surgical plan based on a combination of surgical recognition results obtained from the surgical element recognition layer; this model analyzes surgical image data acquired in real time and calculates and provides an optimal surgical plan for the steps to be performed next.
- The surgical analysis layer further includes an optimal surgical plan calculation model that calculates an optimal surgical plan based on a combination of surgical recognition results obtained from the surgical element recognition layer; this model calculates and provides a surgical plan for the real patient based on the combination of surgical recognition results for a virtual operation performed on the patient's virtual body model before surgery.
- The computer inputs one or more analysis results into a specific surgical solution model, thereby providing a surgical output summarized on the basis of the analysis results.
- The surgical solution models include: a surgical evaluation model that calculates an evaluation result for the surgery based on one or more analysis results obtained from the surgical analysis layer; a chart generation model that generates a chart for the surgery based on one or more analysis results obtained from the surgical analysis layer; and a surgical complexity calculation model that calculates the complexity or difficulty of the surgery based on one or more analysis results obtained from the surgical analysis layer.
- The surgical solution models also include a surgical Q&A model that calculates an answer to a question about the surgery; in the surgical output providing step, when the medical staff enters a question about a specific surgery, the answer is calculated and provided.
- A computer-aided surgical analysis program according to the present invention may be stored in a medium, in combination with a computer as hardware, to execute the computer-aided surgical analysis method.
- The surgical image analysis apparatus includes a surgical element recognition layer including one or more surgical element recognition models that calculate surgical recognition results as a surgical image is input, the one or more surgical element recognition models forming the lowest level in the surgical analysis system.
- The apparatus further includes a surgical solution providing layer including one or more surgical solution models that provide a surgical output arranged based on one or more analysis results obtained from the surgical analysis layer.
- Each surgical element can be accurately recognized by an individual surgical element recognition model that recognizes individual surgical elements (surgical instruments, bleeding, camera, etc.) in the surgical image. That is, the recognition accuracy is higher than that of a method that recognizes the various surgical elements in a surgical image through a single recognition model.
- Since each surgical solution model automatically calculates the results required by the medical staff from one or more surgical analysis results, the post-operative work of the medical staff can be simplified.
- When a new surgical analysis is required, or when a new service-type solution is required, a new surgical element recognition model, surgical analysis model, or surgical solution model can be added to the corresponding layer of the surgical analysis system.
- FIG. 1 is a layer configuration diagram of a surgical analysis device according to an embodiment of the present invention.
- FIG. 2 is a flowchart of a computer-aided surgical analysis method according to an embodiment of the present invention.
- FIG. 3 is a flowchart of a computer-aided surgical analysis method further comprising a step of providing a surgical output according to an embodiment of the present invention.
- The 'surgical image' is an image of the surgical procedure.
- the surgical image includes an image obtained by an endoscope inserted into the body during laparoscopic surgery including robotic surgery.
- the surgical image may include a surgical image performed through an endoscope inserted through the oral cavity or anus.
- virtual body model refers to a model generated in conformity with an actual patient's body based on medical image data.
- the “virtual body model” may be generated by modeling medical image data in three dimensions as it is, or may be corrected as in actual surgery after modeling.
- the virtual body model may be used for guiding or navigation during surgery, post-surgery analysis, and the like.
- the virtual body model may be implemented in the same manner as the actual patient body by reflecting the color, texture, elasticity, etc. of the actual patient body.
- virtual surgery data means data including rehearsal or simulation actions performed on a virtual body model.
- the “virtual surgery data” may be image data in which rehearsal or simulation is performed on a virtual body model in a virtual space, or may be data recorded on a surgical operation performed on a virtual body model.
- actual surgical data refers to data obtained as actual medical personnel perform surgery.
- the "actual surgical data” may be image data obtained by photographing a surgical site in an actual surgical procedure, or may be data recorded about a surgical operation performed in an actual surgical procedure.
- The 'computer' includes all of various devices capable of performing arithmetic processing and providing a result to a user.
- The computer includes not only desktop PCs and notebooks but also smartphones, tablet PCs, cellular phones, PCS (Personal Communication Service) phones, synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminals, Palm PCs, and personal digital assistants (PDAs).
- When a head mounted display (HMD) device includes a computing function, the HMD device may be the computer.
- the computer may be a server that receives a request from a client and performs information processing.
- FIG. 1 is a block diagram of a surgical image analysis apparatus according to an embodiment of the present invention.
- The surgical image analysis apparatus according to an embodiment of the present invention includes a surgical element recognition layer 10 and a surgical analysis layer 20.
- the surgical element recognition layer is a layer for recognizing elements involved in surgery and elements generated by surgery.
- the elements involved in the surgery include a camera (eg, an endoscope used in laparoscopic surgery), a surgical tool, an organ, a blood vessel, and the like, for photographing a surgical image.
- the elements generated by the surgery include the operation of the surgical tool, an event such as bleeding, and the like.
- the surgical element recognition layer forms the lowest level in the surgical analysis system.
- the surgical element recognition layer includes one or more surgical element recognition models for calculating a surgical recognition result as a surgical image is input.
- Hereinafter, each surgical element recognition model in the surgical element recognition layer will be described.
- A plurality of surgical element recognition models may have connection relationships with each other. That is, the recognition result of surgical element recognition model B may additionally be used for surgical element recognition model A to calculate its recognition result.
- The surgical element recognition layer includes an organ recognition model for recognizing organs in the surgical image. That is, the organ recognition model serves to recognize one or more organs present in each image frame of the surgical image.
- the organ recognition model may include a blood vessel model composed of arteries and veins.
- the organ recognition model can recognize organs in the surgical image in various ways.
- The organ recognition model may calculate the organ or blood vessel on which a surgical operation is performed based on the camera position obtained from the camera position recognition model described later, or based on the surgical step obtained from the surgical step recognition model described later.
- The surgical element recognition layer includes a surgical tool recognition model for recognizing a surgical tool (Instrument) in the surgical image.
- the surgical tool recognition model may recognize the type of surgical tool appearing in the image.
- By learning images of each surgical tool, the surgical tool recognition model may recognize the surgical instruments appearing in each frame according to their placement.
- The surgical tool recognition model may recognize a surgical tool performing a surgical operation inside the body by learning, for each surgical tool, a plurality of surgical image data in which operations are performed on organs.
- The surgical element recognition layer further includes a surgical tool motion (Instrument Action) recognition model. That is, the surgical tool motion recognition model recognizes the meaning of a surgical tool motion performed in a specific surgical step (that is, an operation for obtaining a result).
- the surgical tool motion recognition model recognizes a basic surgical motion by acquiring a surgical image (or a specific section in the surgical image) that requires recognition of motion semantics and learning a plurality of image frames included in the surgical image.
- the basic surgical operation refers to a basic operation such as cutting and grasping using a specific surgical tool. Thereafter, the surgical tool motion recognition model extracts a set of consecutive video frames from the plurality of video frames based on the recognized surgical motion, and derives the meaning of the unit surgical motion through learning.
- the unit surgical operation refers to a surgical operation having a meaning for producing a specific result as an action for a specific organ in the surgical process.
- the surgical tool motion recognition model includes: a basic surgical motion recognition module that recognizes a basic surgical motion that is a minimum unit of a surgical motion that does not reflect meaning during surgery; And a surgical meaning recognition module that recognizes a unit surgical operation based on the continuous basic surgical operation.
- As the basic surgical motion recognition module learns images of surgical tools, it recognizes the surgical tool in consecutive frames and calculates the state change of the surgical tool to recognize the basic surgical action of the surgical tool.
- The surgical meaning recognition module is trained on learning data that matches continuous basic surgical operations with unit surgical operations; when continuous basic surgical operation data recognized by the basic surgical motion recognition module in a new surgical image is input, the corresponding unit surgical operation is calculated.
- the learning data may further include organ information on which a continuous basic surgical operation is performed, and the organ information may be used as a result calculated by the organ recognition model in the surgical element recognition layer.
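The two-module idea above (a run of basic operations, optionally conditioned on the organ, mapped to a unit surgical operation) can be sketched as a lookup over action sequences. The action names and the table below are hypothetical examples, not learned values; a real system would replace the table with a trained sequence model.

```python
# Illustrative mapping from consecutive basic surgical operations to a
# "unit surgical operation". The table entries are invented placeholders.

BASIC_TO_UNIT = {
    ("grasp", "pull", "cut"): "tissue_dissection",
    ("grasp", "grasp", "hold"): "retraction",
}

def recognize_unit_operation(basic_actions, organ=None):
    """Match a sequence of basic actions (optionally with the organ reported
    by the organ recognition model) to a unit surgical operation."""
    unit = BASIC_TO_UNIT.get(tuple(basic_actions), "unknown")
    return {"unit_operation": unit, "organ": organ}

print(recognize_unit_operation(["grasp", "pull", "cut"], organ="gallbladder"))
```

Passing the organ through mirrors the remark that organ information from the organ recognition model can be included in the learning data.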
- The surgical element recognition layer further includes a camera position recognition model for recognizing the position of the camera during minimally invasive surgery.
- When the camera is an endoscope used in laparoscopic surgery, the camera may move forward/backward and up/down/left/right within the abdominal cavity after the endoscope is inserted; the camera position recognition model serves to calculate the position of the camera within the abdominal cavity in real time.
- The camera position recognition model sets a reference position of the camera based on one or more reference points of a reference object in an image photographed by the camera, and calculates the camera position in real time by computing the relative position change based on changes of objects in the image.
- The camera position recognition model obtains a reference object from the actual surgical image photographed by the camera entering the body, sets a reference position for the camera, and calculates the amount of positional change of the camera as it moves.
- The camera position recognition model calculates the amount of positional change of the camera based on object changes in the real-time image acquired by the camera (for example, a change in the size of an object, a change in the position of an object, or a change in the objects appearing in the image). The camera position recognition model then calculates the current position of the camera based on the amount of positional change relative to the reference position.
- The actual surgical image may be a stereoscopic 3D image, that is, a three-dimensional image having a sense of depth. Therefore, the position of the surgical tool in three-dimensional space can be accurately grasped through the depth map of the actual surgical image.
- The reference object may be an organ or a specific internal site that satisfies at least one of the following conditions: its features are easy to detect from the image; it exists in a fixed position inside the body; it has little or no movement during surgery; it does not undergo shape deformation; it is not affected by surgical tools; and it can be acquired from medical image data (e.g., data taken by CT, PET, etc.). For example, a portion that moves very little during surgery, such as the liver or abdominal wall, or a portion that can be obtained from medical image data, such as the stomach, esophagus, or gallbladder, may be determined as the reference object.
- A surgical tool in the image photographed by the camera may also be used as the reference object; for example, a surgical tool stopped at a specific position may be used as the reference object.
- The camera position recognition model may repeat the process of resetting the reference position of the camera, based on one or more reference points of a reference object, whenever a reference object appears in the surgical image. In other words, when the camera moves frequently, an error may accumulate in the camera position as relative movements accumulate from the reference position; the camera position recognition model therefore may reset the camera reference position, in real time, whenever a new reference object appears in the image.
- The camera position recognition model can accumulate the camera position in the virtual body model generated to match the actual patient's body. That is, the camera position recognition model may accumulate the relative position changes with respect to the reference position of the camera after matching an actual reference object in the patient's body with the corresponding virtual reference object in the virtual body model. Through this, the camera position recognition model can record, in the 3D virtual body model, the path the camera moved during surgery.
- As the camera position recognition model recognizes the position of the camera, it calculates the position of each surgical tool relative to the camera position. That is, it calculates the relative positions of one or more surgical tools from a specific position of the camera.
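The dead-reckoning scheme described above (accumulate relative camera motion from a reference position, and reset whenever a new reference object is detected so that drift stays bounded) can be sketched with simple 3-D arithmetic. The coordinates and motion deltas below are purely illustrative; in practice the deltas would come from the observed object-size and object-position changes.

```python
# Sketch of camera dead-reckoning with reference-object resets.
# Positions are (x, y, z) tuples in an arbitrary body-cavity frame.

class CameraTracker:
    def __init__(self):
        self.reference = None            # reference position (x, y, z)
        self.offset = (0.0, 0.0, 0.0)    # accumulated relative motion

    def set_reference(self, position):
        # Called when a reference object (e.g. liver, abdominal wall)
        # appears in the image; accumulated drift is discarded.
        self.reference = position
        self.offset = (0.0, 0.0, 0.0)

    def apply_motion(self, delta):
        # Relative motion estimated from object changes in the image.
        self.offset = tuple(o + d for o, d in zip(self.offset, delta))

    def position(self):
        return tuple(r + o for r, o in zip(self.reference, self.offset))

cam = CameraTracker()
cam.set_reference((0.0, 0.0, 5.0))       # reference object detected
cam.apply_motion((1.0, 0.0, -0.5))       # camera advances
cam.apply_motion((0.5, 0.0, 0.0))
print(cam.position())  # (1.5, 0.0, 4.5)
```

Calling `set_reference` again on a new reference object implements the reset step that keeps the accumulated error from growing without bound.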
- The surgical element recognition layer further includes a surgical tool position calculation model.
- The surgical tool position calculation model calculates the position of a first point of the surgical tool relative to the external space of the surgical subject, based on sensing information obtained from a sensing device attached to the surgical tool inserted into the internal space of the surgical subject; it then calculates the position of a second point of the surgical tool by reflecting characteristic information of the surgical tool, based on the position of the first point, through a virtual body model generated in accordance with the physical condition of the surgical subject.
- the surgical tool position calculation model provides the position information of the surgical tool in the actual body internal space of the surgical subject based on the position of the second point of the surgical tool with respect to the virtual body model.
- The real-time position of the surgical tool is calculated by matching the coordinate system of the surgical robot system with the coordinate system of the virtual body model and applying the motion data generated when the surgical tool of the surgical robot system moves to the virtual body model.
- The surgical tool position calculation model matches the coordinate system of the actual patient's body with that of the virtual body model based on a reference point of the surgical subject (e.g., a specific location such as the patient's navel, a marker displayed on the patient's body surface, or an identification mark projected onto the body surface by the surgical robot system), and can then acquire the real-time position of the surgical tool by applying the surgical tool movement of the surgical robot to the virtual body model.
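The coordinate-system matching above can be illustrated, under the simplifying assumption of a pure translation between frames, by aligning the two frames at a shared reference point (e.g. the navel) and then carrying robot-frame tool positions into the virtual body model's frame. A real system would use a full rigid-body transform (rotation plus translation); the numbers here are invented.

```python
# Illustrative frame alignment: derive a translation offset from one shared
# reference point, then map robot-frame tool positions into the model frame.
# Assumes the frames differ only by translation, which is a simplification.

def align_frames(ref_robot, ref_model):
    # Offset that carries the robot frame onto the model frame.
    return tuple(m - r for m, r in zip(ref_model, ref_robot))

def robot_to_model(point, offset):
    return tuple(p + o for p, o in zip(point, offset))

# Shared reference point (e.g. the navel) seen in both coordinate systems.
offset = align_frames(ref_robot=(10.0, 0.0, 0.0), ref_model=(0.0, 0.0, 0.0))
print(robot_to_model((12.0, 1.0, 3.0), offset))  # (2.0, 1.0, 3.0)
```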
- a surgical step recognition model is further included.
- The surgical step recognition model calculates which detailed surgical step, among the entire surgical procedure, the currently performed surgery or a specific section of the surgical image corresponds to.
- The surgical step recognition model can calculate the surgical step of the surgical image in various ways. For example, the surgical step recognition model stores a progression sequence for each surgery type, and can recognize a specific phase (Phase) in the progression sequence based on the area of the patient's body where the camera is located.
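The phase-lookup example above can be sketched as a stored progression sequence per surgery type, queried with the camera's current anatomical region. The surgery type, regions, and phases below are illustrative placeholders.

```python
# Illustrative phase recognition: a stored progression sequence per surgery
# type, indexed by the anatomical region where the camera is located.

PROGRESSION = {
    "cholecystectomy": [
        ("abdominal_entry", "insertion"),
        ("gallbladder", "dissection"),
        ("cystic_duct", "clipping"),
    ],
}

def recognize_phase(surgery_type, camera_region):
    for region, phase in PROGRESSION[surgery_type]:
        if region == camera_region:
            return phase
    return "unknown"

print(recognize_phase("cholecystectomy", "gallbladder"))  # dissection
```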
- The surgical element recognition layer includes an event recognition model that recognizes events occurring in the surgical image, an event being a non-ideal situation during surgery, including bleeding.
- The event recognition model is, for example, a bleeding recognition model.
- The event recognition model may include a bleeding presence recognition module and a bleeding amount calculation module.
- the bleeding presence recognition module recognizes whether a bleeding area exists in the surgical image based on deep learning-based learning, and recognizes a bleeding position in the image frame.
- the bleeding presence recognition module may recognize whether a bleeding area is included in a new image by learning a plurality of images including bleeding.
- The bleeding presence recognition module may convert each pixel in the surgical image into a specific value based on a feature map including the feature information, and specify the bleeding area based on the specific value of each pixel in the surgical image.
- Each pixel in the surgical image may be converted into a specific value based on a predetermined weight according to whether it corresponds to an area with bleeding characteristics based on the feature map (that is, according to the degree to which it contributes to the bleeding characteristics).
- The bleeding presence recognition module applies Grad-CAM (Gradient-weighted Class Activation Mapping), which inversely estimates the learning result of recognizing the bleeding area in the surgical image through a CNN, so that each pixel of the surgical image is converted to a specific value.
- The bleeding presence recognition module may convert each pixel value of the surgical image by assigning a high value (e.g., a high weight) to pixels recognized as belonging to the bleeding area based on the feature map, and a low value (e.g., a low weight) to pixels recognized as not belonging to the bleeding area.
- The bleeding presence recognition module may highlight the bleeding area in the surgical image through the converted pixel values, and may thereby segment the bleeding area and estimate its location.
- The bleeding presence recognition module may apply the Grad-CAM technique to generate a heat map for each pixel in the surgical image based on the feature map and convert each pixel to a probability value.
- the bleeding presence recognition module may specify a bleeding area in the surgical image based on the probability value of each converted pixel. For example, the bleeding presence recognition module may determine a pixel area having a large probability value as a bleeding area.
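The final step above (once each pixel carries a bleeding probability, keep the pixels whose value exceeds a threshold) can be sketched directly. The heat map values and threshold below are made-up numbers; in practice the probabilities would come from a Grad-CAM style heat map over a CNN's feature map.

```python
# Sketch of bleeding-area segmentation by thresholding per-pixel
# probability values. Heat-map numbers are invented for illustration.

def segment_bleeding(prob_map, threshold=0.5):
    """prob_map: 2-D list of per-pixel bleeding probabilities.
    Returns the (row, col) coordinates whose probability exceeds threshold."""
    return [(r, c)
            for r, row in enumerate(prob_map)
            for c, p in enumerate(row)
            if p > threshold]

heat = [[0.1, 0.2, 0.1],
        [0.3, 0.9, 0.8],
        [0.2, 0.7, 0.4]]
print(segment_bleeding(heat))  # [(1, 1), (1, 2), (2, 1)]
```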
- the bleeding amount calculation module is characterized in that it calculates the bleeding amount in the bleeding area based on the location of the bleeding area.
- the bleeding amount calculation module may calculate the bleeding amount using pixel information of the bleeding area in the surgical image. For example, the bleeding amount calculation module may calculate the bleeding amount using the number of pixels corresponding to the bleeding area in the surgical image and color information (eg, RGB values) of the pixels.
- The bleeding amount calculation module acquires depth information of the bleeding area in the surgical image based on the depth map of the surgical image, and can calculate the bleeding amount by estimating the volume corresponding to the bleeding area based on the acquired depth information.
- When the surgical image is a stereoscopic image, since it has three-dimensional depth information, the volume of the bleeding area in three-dimensional space can be grasped.
- the bleeding amount calculation module acquires pixel information (eg, the number of pixels, the position of pixels, etc.) of the bleeding area in the surgical image, and calculates a depth value of the depth map corresponding to the obtained pixel information of the bleeding area.
- the bleeding amount calculation module may calculate the bleeding amount by grasping the volume of the bleeding area based on the calculated depth value.
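The volume estimate described above (combine the bleeding pixels with the corresponding depth-map values) can be sketched as a sum over the segmented pixels. The depth values, pixel list, and scale factor are hypothetical; a real module would calibrate the conversion from pixel-depth units to volume.

```python
# Illustrative bleeding-volume estimate: sum a depth value over the pixels
# of the segmented bleeding area. Units and scale factor are invented.

def estimate_bleeding_volume(bleeding_pixels, depth_map, mm3_per_unit=1.0):
    """bleeding_pixels: iterable of (row, col); depth_map: 2-D depth values."""
    return sum(depth_map[r][c] for r, c in bleeding_pixels) * mm3_per_unit

depth = [[2.0, 2.0],
         [3.0, 4.0]]
pixels = [(1, 0), (1, 1)]          # e.g. output of the segmentation step
print(estimate_bleeding_volume(pixels, depth))  # 7.0
```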
- The bleeding amount in the bleeding area may also be calculated using gauze information.
- The bleeding amount calculation module may reflect the number of gauzes in the surgical image, the color information of the gauze (e.g., RGB values), and the like, in calculating the bleeding amount generated in the bleeding area.
- The surgical element recognition layer further includes a surgery time calculation/prediction model.
- The surgery time calculation/prediction model includes a surgery time prediction module that calculates the expected time to completion of the surgical step being performed during the operation, and a surgery time calculation module that calculates the time each step took after the operation is completed.
- The surgery time calculation module extracts the time corresponding to each specific surgical phase (Phase) after the operation is completed, and calculates the total time required for that phase.
- During the operation, the surgery time prediction module calculates the remaining surgical time for the expected surgical step, based on the time required up to a specific point (e.g., a prediction reference point) of a specific surgical step and on the specific surgical steps performed before that point.
- The surgery time prediction module acquires a preset surgical image including a surgical operation for a specific surgical step, generates learning data using the preset surgical image and the surgical operation time obtained based on it, and performs learning based on the learning data to predict the operation time of the specific surgical step.
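A minimal sketch of the prediction idea: per-phase durations learned offline (the "learning data") give an expected duration for the current phase, and the remaining time is the expected duration minus the elapsed time. The phase names and durations below are invented; a trained model would replace the static averages.

```python
# Illustrative remaining-time prediction from per-phase average durations.
# The duration table stands in for a model trained on surgical images.

LEARNED_PHASE_DURATION = {   # average minutes per phase (hypothetical)
    "dissection": 40.0,
    "resection": 30.0,
}

def predict_remaining(phase, elapsed_minutes):
    expected = LEARNED_PHASE_DURATION[phase]
    return max(expected - elapsed_minutes, 0.0)

print(predict_remaining("dissection", 25.0))  # 15.0
```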
- the surgical analysis layer is a layer for calculating an analysis result based on surgical elements obtained from one or more surgical element recognition models.
- the surgical analysis layer is formed as an upper layer of the surgical element recognition layer.
- the surgical analysis layer includes one or more surgical analysis models for obtaining an analysis result based on a combination of surgical recognition results that are a combination of results provided in the one or more surgical element recognition models.
- the surgical analysis layer includes a blood loss recognition model that calculates the degree of blood loss based on a combination of surgical recognition results including the type of organ operated on, the surgical operation, and events occurring during surgery, as recognized by the surgical element recognition layer.
- the blood loss recognition model may be connected to at least one of the event recognition model, the organ recognition model, and the surgical motion recognition model included in the surgical element recognition layer, to receive a combination of surgical recognition results.
- the blood loss recognition model, when used during surgery, can calculate the blood loss level when an event occurs and provide it to the medical staff.
- the surgical analysis layer also includes an organ damage detection model that calculates the degree of organ damage based on a combination of surgical recognition results including the surgical phase and surgery time recognized by the surgical element recognition layer.
- the organ damage detection model can calculate the level of organ damage by receiving, from each surgical element recognition model, a specific surgical phase, the motion performed with a specific surgical tool, and the surgery time.
- the blood loss recognition model and the organ damage detection model are used to calculate analysis results for each surgical procedure, during or after surgery.
- the surgical analysis layer further includes a surgical tool misuse detection model that detects the use of a wrong surgical tool based on the surgical tool recognized by the surgical element recognition layer, the organ on which an operation is performed with that tool, the detailed surgical phase within the entire operation performed with the tool, and events occurring in the detailed surgical phase.
- the surgical analysis layer further includes an optimal surgical plan calculation model for calculating an optimal surgical plan based on a combination of surgical recognition results obtained from the surgical element recognition layer.
- the optimal surgical plan calculation model analyzes previously acquired surgical image data in real time, and thereby calculates and provides an optimal surgical plan to be performed subsequently.
- the optimal surgical plan calculation model is based on a combination of surgical recognition results for a virtual operation performed on a patient's virtual body model before surgery, and calculates and provides a surgical plan for a real patient.
- the optimal surgical plan calculation model may include a function of calculating an optimal entry position during minimally invasive surgery.
- when the medical staff performs virtual surgery on the virtual body model, the virtual surgery system renders only the operating part of the surgical tool, so that the surgical simulation (i.e., virtual surgery) can proceed without interference from the arm portion of the surgical tool. That is, virtual surgery is performed purely according to the surgical operation pattern of the medical staff, without limitations from the characteristics of the patient's body (e.g., organ placement, vascular condition, etc.) or the characteristics of the surgical tools.
- the optimal surgical plan calculation model then calculates the optimal surgical tool entry position for the actual surgery by considering the internal characteristics of the patient's body and the characteristics of the surgical tools, based on the results of the virtual surgery. That is, it continuously calculates candidate regions for the entry position of the arm that can reproduce the movements of the surgical tool's operating part, and extracts the region where those candidate regions intersect.
- when tool A, tool B, or tool C is used during robotic or laparoscopic surgery, the computer may exclude from the entry range any body surface area from which the operating part of tool A cannot reach a required point when performing a surgical operation (that is, where the operation cannot be performed due to the limited length of the surgical tool).
- the computer may exclude from the entry range any body surface area where tool A, upon entry, would collide with a body organ or tissue in the course of performing the surgical operation.
- the computer may exclude a body surface point from the entry range if, after the surgical tool enters at that point, the surgical operation cannot be performed at a specific required location.
- the computer can thus calculate the entry range for tool A.
- the computer can calculate the optimal entry position of each surgical tool by performing this entry range calculation separately for each tool (e.g., tools B and C).
- the computer may separately perform an entry range calculation process for each function based on the function of the surgical tool, thereby calculating an optimal entry position to which the function of each surgical tool can be applied.
- the computer can extract an optimal entry range for each surgical tool and determine a region where multiple optimal entry ranges overlap as the optimal entry position. For example, when tool A is changed to tool D in the course of the surgery, the computer may calculate the overlapping region of the entry range for tool A and the entry range for tool D as a candidate region for the optimal entry position. Since the number of locations where surgical tools can enter is limited (for example, to three), the same entry position must be used when changing from tool A to tool D; a position that satisfies the entry ranges of both tools can be determined as the final optimal entry position.
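The overlap determination above reduces to a set intersection over candidate body-surface points. A minimal sketch (point ids and function name are illustrative assumptions):

```python
def optimal_entry_candidates(entry_ranges):
    """Intersect per-tool entry ranges (sets of body-surface point ids).

    A point valid for every tool that must share the keyhole is a
    candidate for the final optimal entry position.
    """
    return sorted(set.intersection(*entry_ranges))

tool_a = {1, 2, 3, 4}   # surface points valid for tool A
tool_d = {3, 4, 5}      # surface points valid for tool D (after tool change)
print(optimal_entry_candidates([tool_a, tool_d]))
```

Real entry ranges would be continuous surface regions rather than discrete point ids, but the intersection logic is the same.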
- the computer can divide the range in which a given surgical tool is used (that is, its range of motion) into several groups, each reachable from one of a plurality of entry positions on the body surface. For example, when laparoscopic or robotic surgery is performed through three entry positions on the body surface, the computer divides the range of motion of the surgical tool into three or fewer groups. In doing so, the computer partitions the range of motion based on whether each part can be reached from the entry ranges selected for the other surgical tools.
- a specific surgical tool with a wide range of motion (i.e., a first surgical tool) may be used together with another surgical tool (i.e., a second surgical tool).
- the computer determines the portion of the first surgical tool's range of motion that is reachable, when the first tool is used together with the second tool, if the first tool enters through the optimal entry position of the second tool (that is, the keyhole through which the second tool is entered).
- when the first surgical tool continues to be used while other surgical tools are changed, the computer may group its range of motion so that it operates through the same entry position, in consideration of user convenience and the time required for the surgery.
- since the optimal entry position of the surgical tool is determined by reflecting the results of a surgical simulation performed without considering the tool's entry position or the interference between the tool's arm portion and the organs, the medical staff can perform the surgical operation in the most convenient way.
- the optimal surgical plan calculation model can also calculate and present an appropriate type of surgical tool for use in the surgery, based on the results of the virtual surgery performed by the medical staff with only the operating part of the tool (the arm part removed).
- the optimal surgical plan calculation model may be trained in various ways. For example, it is possible to calculate the optimal operation for each surgery type through reinforcement learning, by acquiring surgical images of operations performed by a plurality of medical staff.
- the optimal surgical plan calculation model may also create an optimal surgical plan without using actual surgical data, by performing a virtual operation with a virtually generated surgical procedure and repeating a process of evaluating whether it is the optimal procedure.
- the optimal surgical plan calculation model generates a plurality of genes corresponding to surgical processes, each consisting of at least one detailed surgical operation, and evaluates the surgery by performing virtual surgery for each of the plurality of genes. Thereafter, it selects at least one gene based on the evaluation results, applies a genetic algorithm to generate new genes, and derives the optimal surgical procedure based on the new genes.
- the optimal surgical plan calculation model may calculate fitness by performing virtual surgery on each new (child) gene.
- the optimal surgical plan calculation model determines whether the fitness of a child gene meets a preset condition, selects the child genes that meet the condition, and applies genetic operators such as crossover and mutation to them.
- in this way, new child genes can be generated again. That is, the computer can repeatedly generate child genes from parent genes based on the fitness results that evaluate whether the surgery is optimal, and obtain, among the finally generated child genes, a gene containing the optimal surgical process. For example, the child gene with the highest fitness can be selected to derive the optimized surgical procedure.
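The gene-based loop above can be sketched as a standard genetic algorithm. Everything concrete here is an assumption: genes are sequences of detailed-operation codes, fitness is a stand-in for the virtual-surgery evaluation, and elitism/crossover/mutation settings are illustrative:

```python
import random

def evolve_surgical_plan(population, fitness, generations=60, mut_rate=0.2, seed=0):
    """Genetic-algorithm sketch: genes are sequences of detailed-operation codes.

    Each generation keeps the best gene (elitism), breeds children by
    single-point crossover of above-median parents, and applies point mutation.
    """
    rng = random.Random(seed)
    actions = sorted({a for g in population for a in g})
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: max(2, len(scored) // 2)]
        children = [scored[0]]                      # elitism: keep the best gene
        while len(children) < len(population):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))          # single-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:             # point mutation
                i = rng.randrange(len(child))
                child = child[:i] + [rng.choice(actions)] + child[i + 1:]
            children.append(child)
        population = children
    return max(population, key=fitness)

# Toy fitness: similarity of a gene to a known-good operation sequence.
target = "GRAB-CUT-SEAL-SUTURE".split("-")
fit = lambda g: sum(x == y for x, y in zip(g, target))
pop = [["CUT", "GRAB", "SEAL", "GRAB"], ["SUTURE", "SEAL", "CUT", "GRAB"],
       ["GRAB", "SEAL", "CUT", "SUTURE"], ["SEAL", "CUT", "GRAB", "SUTURE"]]
best = evolve_surgical_plan(pop, fit)
```

In the patent's setting the fitness function would be the virtual-surgery evaluation on the patient's virtual body model, not a string match.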
- Each surgical analysis model in the surgical analysis layer may receive a combination of surgical recognition results in various forms.
- the surgical analysis model may receive its input in the form of concatenated code data obtained by encoding the results calculated by the surgical element recognition models. That is, each surgical analysis model can acquire the code data that concatenates the codes for the surgical recognition results at each time point, and then extract and input the code data needed for its own analysis.
- the surgical analysis model may instead receive, for each of a plurality of image frames, relational representation information expressing the relationships between the surgical elements contained in the surgical recognition information calculated by the plurality of surgical element recognition models.
- the surgical recognition information refers to the surgical element information recognized from an image frame, and may include at least one surgical element among, for example, surgical tools, surgical operations, body parts, bleeding status, surgical phase, surgery time (e.g., remaining surgery time, elapsed surgery time, etc.), and camera information (e.g., camera position, angle, direction, movement, and the like).
- the relational representation may take the form of a matrix whose rows and columns correspond to the respective surgical elements, with the correlation values between the surgical elements entered as the matrix values.
- the relational representation may be calculated for each frame of the surgical image, or for each division image of a specific unit, before being input to the surgical analysis model.
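A minimal sketch of one frame's relational representation matrix, assuming a fixed element ordering and symmetric correlations (the patent only states that rows and columns are arranged by surgical element):

```python
import numpy as np

# Assumed element order for rows/columns of the relation matrix.
elements = ["tool", "organ", "action", "bleeding", "phase"]
index = {name: i for i, name in enumerate(elements)}

def frame_relation_matrix(pairs):
    """Build one frame's relational representation from (a, b, score) pairs."""
    R = np.zeros((len(elements), len(elements)))
    for a, b, score in pairs:
        R[index[a], index[b]] = R[index[b], index[a]] = score
    return R

# e.g., in this frame the recognized tool is acting on the organ, and the
# action weakly correlates with the bleeding status.
R = frame_relation_matrix([("tool", "organ", 0.9), ("action", "bleeding", 0.3)])
```

One such matrix per frame (or per unit division image) would then be fed to the surgical analysis model.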
- the surgical analysis apparatus further includes a surgical solution providing layer.
- the surgical solution providing layer includes one or more surgical solution models providing surgical outputs arranged based on one or more analysis results obtained from the surgical analysis layer.
- the surgical solution providing layer includes one or more surgical solution models, which are a form of service that medical staff can use immediately.
- the surgical solution providing layer is formed as an upper layer of the surgical analysis layer, and receives results calculated from each surgical analysis model in the surgical analysis layer.
- the surgical solution model includes a surgical evaluation model for calculating an evaluation result for surgery based on one or more analysis results obtained from the surgical analysis layer.
- the surgical evaluation model may receive a blood loss result, an organ damage model calculation result, and a comparison result between an optimal surgical process and an actual surgery to calculate the evaluation of the surgery performed by the medical staff.
- the surgical solution model further includes a chart generation model that generates a chart for surgery based on one or more analysis results obtained from the surgical analysis layer.
- the chart generation model can automatically generate a record of the operation performed and the result of performing the operation.
- the chart generation model, after the surgery is completed, combines the surgical elements obtained during the surgical process, inputs them into each surgical analysis model, and receives the analysis results of the plurality of surgical analysis models to automatically generate a chart (for example, a surgery record sheet).
- the surgical solution model further includes a surgical complexity calculation model that calculates the complexity or difficulty of the surgery based on one or more analysis results obtained from the surgical analysis layer.
- the performance evaluation of the medical staff may take into account the difficulty of the surgery performed by each staff member.
- the surgical complexity calculated by the surgical complexity calculation model may be reflected in this evaluation. That is, within the surgical solution providing layer, the surgical evaluation model can receive the complexity result calculated by the surgical complexity calculation model and use it in the surgical evaluation.
- the surgical solution model further includes a surgical Q&A model that calculates answers to questions about the surgery.
- an embodiment is described in which the surgical image analysis apparatus is composed of a surgical element recognition layer, a surgical analysis layer, and a surgical solution layer.
- the lowest layer may be a surgical element recognition layer that recognizes surgical elements that are the smallest units that must be medically recognized in the medical procedure.
- the middle layer may be a surgical module layer that can grasp a medical meaning through each surgical element and determine a condition for making a medical diagnosis.
- the top layer may be a surgical solution layer that recognizes medical problems that may occur throughout the medical procedure and provides solutions to each problem.
- the surgical element recognition layer may recognize surgical elements based on various surgical images photographed in the medical surgery process and various surgical related tools used in the medical surgery process.
- the surgical elements include the surgical phase (Phase), body parts (e.g., organs), events (Event), surgery time (Time), surgical instruments (Instrument), the camera (Camera), surgical operations (Action), and other elements.
- the surgical element recognition layer may recognize at least one surgical element based on the surgical image obtained in the medical surgery process. At this time, it is possible to recognize not only the surgical elements recognized in the surgical image itself, but also using the results of learning the surgical images. Using the above-described embodiments of the present invention, it is possible to effectively recognize surgical elements that are medically recognized minimum units in the medical procedure.
- the surgical element recognition layer may individually recognize surgical elements. For example, when an organ is recognized from a surgical image, only the corresponding organ may be recognized, and when a surgical tool is recognized from a surgical image, only the corresponding surgical tool may be recognized.
- the surgical element recognition layer may use other surgical elements to recognize one surgical element. That is, the surgical element recognition layer may establish a primitive level relation representing a relationship between each surgical element, and recognize each surgical element using the primitive level relationship.
- the primitive level relation may include information listing the additional surgical elements needed to recognize a given surgical element, together with the specified relationships between those elements (e.g., state change, position change, shape change, color change, arrangement relationship, etc.). For example, when an event (e.g., a bleeding event) is to be recognized from a surgical image, additional surgical elements such as organs, surgical instruments, and surgical operations are first recognized based on the primitive level relation for that event, and the event can then be recognized through the additionally recognized surgical elements.
- the surgical analysis layer may grasp specific medical meanings or make specific medical judgments based on each surgical element recognized through the surgical element recognition layer.
- using at least one surgical element, the surgical analysis layer can grasp medical meanings such as blood loss estimation (Blood Loss Estimation), internal body damage detection (Anatomy Injury Detection; for example, organ damage detection), instrument misuse detection (Instrument Misuse Detection), and optimal surgical plan suggestion (Optimal Planning Suggestion), or make medical judgments.
- the surgical analysis layer can configure each surgical module according to information necessary in the medical surgery process or a medical problem to be solved (eg, bleeding loss evaluation, internal body damage detection, tool misuse detection, optimal surgical procedure proposal, etc.).
- the surgical analysis layer may constitute a bleeding loss evaluation module for grasping the degree of bleeding in a surgical subject during medical surgery.
- an internal damage detection module may be configured to determine how much damage has occurred in a specific organ during surgery.
- a surgical tool misuse detection module may be configured to determine whether a surgical tool is misused during surgery.
- each surgical module may selectively use at least one of the surgical elements recognized in the lower layer surgical element recognition layer.
- each surgical module may use a module level relation.
- the module level relationship may mean that the surgical elements to be recognized in the corresponding surgical module are determined and designated.
- the module level relationship may be one in which surgical elements to be recognized in the corresponding surgical module are determined based on the degree to which a specific meaning can be recognized from the surgical image (eg, the above-described representative recognition value; SAM).
- the surgical solution layer can finally solve a high-level medical problem using a medical meaning or medical judgment identified through the surgical analysis layer.
- using at least one surgical module, the surgical solution layer can provide chart generation (Chart Generation), complication estimation (Complication Estimation), surgical performance assessment (Surgical Performance Assessment), and a surgical Q&A system (Q&A using a Surgical Bot).
- the surgical solution layer may configure each solution module or system according to each medical problem (eg, chart generation, complications determination, surgical performance evaluation, surgical Q&A system, etc.).
- the surgical solution layer may constitute a chart generating solution module (or system) for recording all information generated during the surgical process or for recording a patient's condition.
- a complications determination solution module (or system) for predicting complications that may occur after the surgical procedure may be configured.
- each solution module (or system) may selectively use the medical meaning or medical judgment derived from the surgical analysis layer, which is a lower layer.
- each solution module (or system) of the surgical solution layer may use a solution level relation.
- the solution level relationship may mean that a surgical module that needs to be used in a corresponding solution module is determined and designated.
- the medical staff may use a complications determination solution module to determine a patient's complications.
- the complications determination solution module can grasp that a bleeding loss evaluation module and an internal damage detection module are required from a lower level based on the solution level relationship.
- the necessary information may be received from the corresponding module of the lower layer to determine the patient's complications and provide the determination result to the medical staff.
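The solution level relation above can be encoded the same way as the primitive level relation: a lookup from each solution module to the surgical modules it must pull results from (module names and the dictionary encoding are illustrative assumptions):

```python
# Assumed encoding of solution level relations: each solution module maps to
# the lower-layer surgical modules whose results it requires.
SOLUTION_LEVEL_RELATION = {
    "complication_estimation": ["blood_loss_evaluation", "anatomy_injury_detection"],
    "chart_generation": ["blood_loss_evaluation", "surgery_time_calculation"],
}

def required_surgical_modules(solution_module):
    """Surgical modules the given solution module must receive results from."""
    return SOLUTION_LEVEL_RELATION.get(solution_module, [])

print(required_surgical_modules("complication_estimation"))
```

With such a table, the complications determination solution module can discover at run time that it needs the bleeding loss evaluation and internal damage detection modules from the layer below.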
- the medical solution model according to an embodiment of the present invention is configured so that modules operate layer by layer, from the lowest layer to the highest, according to the medical problem. Therefore, even when a new medical problem arises, a solution can be provided efficiently by configuring a new module in each layer or by configuring a new layer.
- the medical solution model according to an embodiment of the present invention can be applied to various medical operations, and can be used especially effectively for minimally invasive surgery using a laparoscope or endoscope, in particular a surgical robot.
- Figure 2 is a flow chart of a computer-aided surgical analysis method according to an embodiment of the present invention.
- a computer-aided surgical analysis method includes: the computer acquiring a surgical image (S200); the computer inputting the surgical image into one or more surgical element recognition models (S400); the computer acquiring a combination of surgical recognition results calculated by each surgical element recognition model (S600; obtaining a surgical recognition result combination); and the computer inputting the combination of surgical recognition results into one or more surgical analysis models to obtain one or more analysis results (S800; obtaining a surgical analysis result).
- the computer acquires the surgical image (S200).
- the computer may acquire a surgical image in real time while surgery is being performed by a medical staff.
- the computer can acquire the entire surgical image stored after the medical staff completes the operation.
- when the computer uses a surgical image acquired in real time, it utilizes the surgical image to provide the medical staff with information needed during surgery.
- when the computer uses the entire image after the surgery is completed, it uses the surgical image to perform a post-hoc analysis of the surgery.
- the computer inputs the surgical image into one or more surgical element recognition models (S400). That is, the computer inputs the surgical images into one or more surgical element recognition models in order to recognize the surgical elements included in the surgical images.
- the one or more surgical element recognition models are included in the surgical element recognition layer forming the lowest layer in the surgical analysis system.
- each surgical element recognition model may be constructed in parallel within the surgical element recognition layer, with connection relationships between the models. That is, in the course of running surgical element recognition model A, the surgical elements (i.e., recognition results) calculated by model B, formed in parallel, can be received and used. A detailed description of each surgical element recognition model, given above, is omitted here.
- the computer acquires a combination of surgical recognition results calculated from each surgical element recognition model (S600; obtaining a surgical recognition result combination). That is, the computer generates data combining each surgical element recognition result calculated from each surgical element recognition model.
- the computer inputs the combination of the surgical recognition results into one or more surgical analysis models to obtain one or more analysis results (S800; obtaining a surgical analysis result).
- the one or more surgical analysis models are included in the surgical analysis layer above the surgical element recognition layer, and are selected according to the user's request. That is, each surgical analysis model in the surgical analysis layer has, based on the data it needs for analysis, a connection relationship with one or more surgical element recognition models in the surgical element recognition layer, so that the combination of surgical element recognition results required for analysis can be input to each surgical analysis model. A detailed description of each surgical analysis model, given above, is omitted here.
- the computer inputs one or more analysis results into a specific surgical solution model to provide a surgical output arranged based on the analysis results (S1000).
- the surgical solution model is included in the surgical solution providing layer, and the surgical solution providing layer is created in the surgical analysis system as an upper layer of the surgical analysis layer.
- since each surgical solution model is connected to one or more surgical analysis models, one or more surgical analysis results may be input to it. A detailed description of each surgical solution model, given above, is omitted here.
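The S200–S1000 flow above can be sketched end to end as a layered pipeline, with stub callables standing in for the trained models (all names and the stub outputs are illustrative assumptions):

```python
def run_surgical_analysis(frame, recognizers, analyzers, solution):
    """Layered pipeline: recognition (S400) -> result combination (S600)
    -> analysis (S800) -> solution output (S1000)."""
    recognition = {name: fn(frame) for name, fn in recognizers.items()}
    analyses = {name: fn(recognition) for name, fn in analyzers.items()}
    return solution(analyses)

# Stubs standing in for the trained recognition/analysis/solution models.
recognizers = {"organ": lambda f: "stomach", "event": lambda f: "bleeding"}
analyzers = {"blood_loss": lambda rec: 12.5 if rec["event"] == "bleeding" else 0.0}
report = run_surgical_analysis("frame-001", recognizers, analyzers,
                               lambda a: {"chart": a})
print(report)
```

The point of the sketch is the data flow: each layer consumes the combined outputs of the layer below, so new models can be added to any layer without changing the pipeline shape.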
- the surgical analysis method by a computer according to an embodiment of the present invention described above may be implemented as a program (or application) to be executed in combination with a computer, which is hardware, and stored in a medium.
- for the computer to read the program and execute the methods implemented as a program, the program may include code written in a computer language such as C, C++, JAVA, or machine language, which the computer's processor (CPU) can read through a device interface of the computer.
- such code may include functional code related to functions defining the operations needed to execute the methods, and control code related to the execution procedure needed for the computer's processor to execute those functions according to a predetermined procedure.
- the code may further include memory-reference code indicating which location (address) of the computer's internal or external memory should be referenced for the additional information or media needed for the processor to perform the functions.
- when the computer's processor needs to communicate with another remote computer or server to perform the functions, the code may further include communication-related code specifying how to communicate with the remote computer or server using the computer's communication module, and what information or media to transmit and receive during communication.
- the storage medium refers to a medium that stores data semi-permanently and that can be read by a device, rather than a medium that stores data for a short time, such as registers, caches, and memory.
- examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage device. That is, the program may be stored in various recording media on various servers that the computer can access or various recording media on the user's computer.
- the medium may be distributed over a computer system connected through a network, and code readable by a computer in a distributed manner may be stored.
Claims (13)
- 컴퓨터가 수술영상을 획득하는 단계;A computer acquiring a surgical image;컴퓨터가 하나 이상의 수술요소 인식모델에 상기 수술영상을 입력하는 단계;A computer inputting the surgical image into one or more surgical element recognition models;컴퓨터가 각각의 수술요소 인식모델에서 산출된 수술인식결과 조합을 획득하되, 하나 이상의 수술요소 인식모델은 수술분석시스템 내의 최하위 레벨인 수술요소 인식층에 포함되는 것인, 수술인식결과 조합 획득단계; 및A computer acquiring a combination of surgical recognition results calculated from each surgical element recognition model, wherein one or more surgical element recognition models are included in a surgical element recognition layer at a lowest level in the surgical analysis system; And컴퓨터가 상기 수술인식결과 조합을 하나 이상의 수술 분석 모델에 입력하여 하나 이상의 분석결과를 획득하되, 상기 하나 이상의 수술 분석 모델은 상기 수술요소 인식층 위의 수술 분석층에 포함되는 것으로서, 사용자의 요청에 따라 선택되는 것인, 수술 분석결과 획득단계;를 포함하는, 컴퓨터에 의한 수술 분석 방법.The computer inputs the combination of the surgical recognition results into one or more surgical analysis models to obtain one or more analysis results, wherein the one or more surgical analysis models are included in the surgical analysis layer above the surgical element recognition layer. It is selected according to, surgical analysis result acquisition step; including, surgical analysis method using a computer.
- 제1항에 있어서,According to claim 1,상기 수술 분석층 내의 각각의 수술 분석 모델은,Each surgical analysis model in the surgical analysis layer,분석에 필요한 데이터를 기반으로, 상기 수술요소 인식층 내의 하나 이상의 수술요소 인식모델과 연결관계가 설정된 것을 특징으로 하는, 컴퓨터에 의한 수술 분석 방법.Based on the data required for analysis, characterized in that the connection relationship is established with one or more surgical element recognition models in the surgical element recognition layer, a computerized surgical analysis method.
- 제1항에 있어서, According to claim 1,상기 수술요소 인식모델은,The surgical element recognition model,상기 수술영상 내의 장기를 인식하는 장기인식모델;An organ recognition model that recognizes an organ in the surgical image;상기 수술영상 내의 수술도구와 상기 수술도구의 움직임을 인식하는 수술도구 인식모델; 및A surgical tool recognition model for recognizing the movement of the surgical tool and the surgical tool in the surgical image; And상기 수술영상 내에서 발생하는 이벤트를 인식하되, 상기 이벤트는 출혈을 포함하는 수술 중 비이상적인 상황인, 이벤트 인식모델;을 포함하는, 컴퓨터에 의한 수술 분석 방법.Recognizing an event occurring within the surgical image, the event is a non-ideal situation during surgery that includes bleeding, an event recognition model; including, surgical analysis method by a computer.
- 제1항에 있어서, According to claim 1,상기 수술 분석층은,The surgical analysis layer,상기 수술요소 인식층에서 인식된 수술 장기 유형, 수술동작 및 수술 중 발생 이벤트를 포함하는 수술인식결과 조합을 기반으로 혈액 손실 정도를 산출하는 혈액 손실 인식모델; 및A blood loss recognition model for calculating a degree of blood loss based on a combination of surgical recognition results including an operation organ type, a surgical operation, and an event occurring during the operation recognized by the surgical element recognition layer; And상기 수술요소 인식층에서 인식된 수술단계 및 수술시간을 포함하는 수술인식결과 조합을 기반으로 장기 손상 정도를 산출하는 장기 손상 감지모델;을 포함하며, Includes; a long-term damage detection model for calculating the degree of long-term damage based on a combination of surgical recognition results including the operation step and the operation time recognized by the surgical element recognition layer;상기 혈액 손실 인식모델 및 상기 장기 손상 감지모델은 The blood loss recognition model and the organ damage detection model수술 중 또는 수술 후에 각 수술과정에 분석결과 산출에 이용되는 것인, 컴퓨터에 의한 수술 분석 방법.Computer-aided surgical analysis method, which is used to calculate the analysis results during or after surgery.
- 제4항에 있어서, According to claim 4,수술이 완료된 후 수술 결과를 분석하는 경우,When analyzing the results of the surgery after the surgery is completed,상기 수술 분석층은,The surgical analysis layer,상기 수술요소 인식층에서 인식된 수술도구, 상기 수술도구로 동작이 수행되는 장기, 상기 수술도구로 수행되는 전체 수술 중의 세부수술단계, 상기 세부수술단계에서 발생한 이벤트를 기반으로, 잘못된 수술도구의 사용을 탐지하는, 수술도구 오사용 탐지모델;을 더 포함하는, 컴퓨터에 의한 수술 분석 방법.Based on the surgical tool recognized by the surgical element recognition layer, the organ in which the operation is performed with the surgical tool, the detailed surgical step during the entire operation performed with the surgical tool, and the event generated in the detailed surgical step, the use of the wrong surgical tool A surgical analysis method using a computer, further comprising; a detection model for misusing a surgical tool.
- The computer-implemented surgical analysis method of claim 1, wherein the surgical analysis layer further comprises an optimal surgical plan calculation model that calculates an optimal surgical plan based on a combination of surgical recognition results obtained from the surgical element recognition layer, the model analyzing previously acquired surgical image data in real time to calculate and provide the optimal surgical plan to be performed next.
- The computer-implemented surgical analysis method of claim 1, wherein the surgical analysis layer further comprises an optimal surgical plan calculation model that calculates an optimal surgical plan based on a combination of surgical recognition results obtained from the surgical element recognition layer, the model calculating and providing a surgical plan for the actual patient based on a combination of surgical recognition results from virtual surgery performed on the patient's virtual body model before surgery.
- The computer-implemented surgical analysis method of claim 1, further comprising: inputting, by the computer, one or more analysis results into a specific surgical solution model to provide a surgical output arranged based on the analysis results.
- The computer-implemented surgical analysis method of claim 8, wherein the surgical solution model comprises: a surgical evaluation model that calculates an evaluation result for the surgery based on one or more analysis results obtained from the surgical analysis layer; a chart generation model that generates a chart for the surgery based on the one or more analysis results obtained from the surgical analysis layer; and a surgical complexity calculation model that calculates the complexity or difficulty of the surgery based on the one or more analysis results obtained from the surgical analysis layer.
- The computer-implemented surgical analysis method of claim 8, comprising a surgical Q&A model that calculates answers to questions about surgery by learning from one or more analysis results obtained from the surgical analysis layer, wherein the surgical output providing step calculates and provides an answer when a medical staff member inputs a question about a specific surgery.
- A computer-implemented surgical analysis program, combined with a hardware computer and stored in a medium, for executing the method of any one of claims 1 to 10.
- A surgical image analysis apparatus comprising: a surgical element recognition layer including one or more surgical element recognition models that produce surgical recognition results as a surgical image is input, the surgical element recognition layer being the lowest level of the surgical analysis system; and a surgical analysis layer, formed as a layer above the surgical element recognition layer, including one or more surgical analysis models that obtain analysis results based on a surgical recognition result combination, i.e., a combination of the results provided by the one or more surgical element recognition models.
- The surgical image analysis apparatus of claim 12, further comprising a surgical solution providing layer including one or more surgical solution models that provide a surgical output arranged based on one or more analysis results obtained from the surgical analysis layer.
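The claims above describe a three-layer pipeline: a surgical element recognition layer produces per-frame recognition results, a surgical analysis layer combines them into analysis results, and a surgical solution providing layer arranges those into a surgical output. A minimal sketch of that layering follows; all class names, fields, and the toy blood-loss heuristic are illustrative assumptions and do not appear in the patent itself.

```python
# Hypothetical sketch of the claimed layered architecture. A real system would
# run trained models (organ, tool, event recognition) on video frames; here
# each layer is reduced to a placeholder so the data flow is visible.
from dataclasses import dataclass, field

@dataclass
class RecognitionResult:
    organ: str                                   # organ recognition model output
    tool: str                                    # surgical tool recognition model output
    action: str                                  # recognized tool movement
    events: list = field(default_factory=list)   # e.g. ["bleeding"]

class RecognitionLayer:
    """Lowest layer: one or more surgical element recognition models."""
    def recognize(self, frame) -> RecognitionResult:
        # Placeholder result standing in for trained-model inference.
        return RecognitionResult(organ="stomach", tool="grasper",
                                 action="retract", events=["bleeding"])

class AnalysisLayer:
    """Middle layer: analysis models consuming the recognition-result combination."""
    def analyze(self, results: list) -> dict:
        # Toy blood-loss model: fraction of frames in which bleeding was recognized.
        bleeding = sum("bleeding" in r.events for r in results)
        return {"blood_loss_score": bleeding / max(len(results), 1)}

class SolutionLayer:
    """Top layer: arranges analysis results into a surgical output."""
    def provide(self, analysis: dict) -> str:
        return f"blood loss score: {analysis['blood_loss_score']:.2f}"

frames = [None] * 4                              # stand-in for surgical video frames
rec = RecognitionLayer()
results = [rec.recognize(f) for f in frames]
report = SolutionLayer().provide(AnalysisLayer().analyze(results))
print(report)  # blood loss score: 1.00
```

The point of the layering, as claimed, is that analysis models never see raw video: they consume only the combination of recognition results, so recognition models can be swapped without retraining the analysis layer.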
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2019-0013320 | 2019-02-01 | ||
KR20190013320 | 2019-02-01 | ||
KR10-2020-0011504 | 2020-01-31 | ||
KR1020200011504A KR20200096155A (en) | 2019-02-01 | 2020-01-31 | Method for analysis and recognition of medical image |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020159276A1 true WO2020159276A1 (en) | 2020-08-06 |
Family
ID=71840082
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/001475 WO2020159276A1 (en) | 2019-02-01 | 2020-01-31 | Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical image |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020159276A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112560602A (en) * | 2020-12-02 | 2021-03-26 | 中山大学中山眼科中心 | Cataract surgery step identification method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009543611A (en) * | 2006-07-12 | 2009-12-10 | メディカル サイバーワールド、インコーポレイテッド | Computerized medical training system |
KR101399269B1 (en) * | 2011-01-07 | 2014-05-27 | 레스토레이션 로보틱스, 인코포레이티드 | Methods and systems for modifying a parameter of an automated procedure |
US20180137244A1 (en) * | 2016-11-17 | 2018-05-17 | Terarecon, Inc. | Medical image identification and interpretation |
KR101864380B1 (en) * | 2017-12-28 | 2018-06-04 | (주)휴톰 | Surgical image data learning system |
KR101862360B1 (en) * | 2017-12-28 | 2018-06-29 | (주)휴톰 | Program and method for providing feedback about result of surgery |
- 2020-01-31: WO PCT/KR2020/001475 patent/WO2020159276A1/en, active, Application Filing
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102014359B1 (en) | Method and apparatus for providing camera location using surgical video | |
US11737841B2 (en) | Configuring surgical system with surgical procedures atlas | |
WO2019132168A1 (en) | System for learning surgical image data | |
JP2023126480A (en) | Surgical system with training or assist functions | |
WO2019132165A1 (en) | Method and program for providing feedback on surgical outcome | |
WO2019132244A1 (en) | Method for generating surgical simulation information and program | |
KR102146672B1 (en) | Program and method for providing feedback about result of surgery | |
KR20210104190A (en) | Method for analysis and recognition of medical image | |
WO2020159276A1 (en) | Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical image | |
KR102628324B1 (en) | Device and method for analysing results of surgical through user interface based on artificial interlligence | |
KR20200096155A (en) | Method for analysis and recognition of medical image | |
CN116075901A (en) | System and method for processing medical data | |
WO2019164277A1 (en) | Method and device for evaluating bleeding by using surgical image | |
WO2019132166A1 (en) | Method and program for displaying surgical assistant image | |
WO2021206517A1 (en) | Intraoperative vascular navigation method and system | |
WO2019164273A1 (en) | Method and device for predicting surgery time on basis of surgery image | |
WO2019164278A1 (en) | Method and device for providing surgical information using surgical image | |
KR102084598B1 (en) | Ai based surgery assistant system for operating lesion of bone | |
WO2023018138A1 (en) | Device and method for generating virtual pneumoperitoneum model of patient | |
WO2023008818A1 (en) | Device and method for matching actual surgery image and 3d-based virtual simulation surgery image on basis of poi definition and phase recognition | |
Casy et al. | “Stand-up straight!”: human pose estimation to evaluate postural skills during orthopedic surgery simulations | |
WO2023003389A1 (en) | Apparatus and method for determining insertion position of trocar on three-dimensional virtual pneumoperitoneum model of patient | |
WO2019164279A1 (en) | Method and apparatus for evaluating recognition level of surgical image | |
KR20190133424A (en) | Program and method for providing feedback about result of surgery | |
WO2023136616A1 (en) | Apparatus and method for providing virtual reality-based surgical environment for each surgical situation |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20748512; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20748512; Country of ref document: EP; Kind code of ref document: A1
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.06.2022)
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20748512; Country of ref document: EP; Kind code of ref document: A1