WO2023008818A1 - Apparatus and method for matching a real surgical image based on POI definition and phase recognition with a 3D-based virtual simulated surgical image - Google Patents
Apparatus and method for matching a real surgical image based on POI definition and phase recognition with a 3D-based virtual simulated surgical image
- Publication number
- WO2023008818A1 (PCT/KR2022/010608)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- surgical
- image
- information
- virtual
- poi information
- Prior art date
Links
- 238000001356 surgical procedure Methods 0.000 title claims abstract description 69
- 238000000034 method Methods 0.000 title claims abstract description 43
- 238000004088 simulation Methods 0.000 claims abstract description 35
- 210000004204 blood vessel Anatomy 0.000 claims description 18
- 210000000056 organ Anatomy 0.000 claims description 10
- 238000013136 deep learning model Methods 0.000 claims description 9
- 210000003484 anatomy Anatomy 0.000 claims description 6
- 210000003205 muscle Anatomy 0.000 claims description 5
- 238000004590 computer program Methods 0.000 claims description 2
- 210000001519 tissue Anatomy 0.000 claims description 2
- 239000003925 fat Substances 0.000 claims 1
- 238000009877 rendering Methods 0.000 claims 1
- 230000003187 abdominal effect Effects 0.000 abstract description 4
- 230000008569 process Effects 0.000 description 17
- 238000004891 communication Methods 0.000 description 15
- 238000010586 diagram Methods 0.000 description 12
- 210000001367 artery Anatomy 0.000 description 10
- 230000006870 function Effects 0.000 description 10
- 210000003462 vein Anatomy 0.000 description 7
- 230000000694 effects Effects 0.000 description 5
- 238000002591 computed tomography Methods 0.000 description 4
- 238000013527 convolutional neural network Methods 0.000 description 4
- 239000000835 fiber Substances 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 2
- 230000035606 childbirth Effects 0.000 description 2
- 208000005646 Pneumoperitoneum Diseases 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 238000004422 calculation algorithm Methods 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000003745 diagnosis Methods 0.000 description 1
- 238000002059 diagnostic imaging Methods 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 230000003902 lesion Effects 0.000 description 1
- 238000002595 magnetic resonance imaging Methods 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 239000002184 metal Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000001151 other effect Effects 0.000 description 1
- 238000002600 positron emission tomography Methods 0.000 description 1
- 210000003491 skin Anatomy 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
- 230000006032 tissue transformation Effects 0.000 description 1
- 238000010977 unit operation Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/25—User interfaces for surgical systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
- A61B2034/104—Modelling the effect of the tool, e.g. the effect of an implanted prosthesis or for predicting the effect of ablation or burring
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/107—Visualisation of planned trajectories or target regions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/25—User interfaces for surgical systems
- A61B2034/254—User interfaces for surgical systems being adapted depending on the stage of the surgical procedure
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/361—Image-producing devices, e.g. surgical cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
- G09B19/003—Repetitive work cycles; Sequence of movements
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B23/00—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
- G09B23/28—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
Definitions
- the present invention relates to matching an actual surgical image with a 3D-based virtual surgical simulation image. More specifically, the present invention relates to an apparatus and method for matching a real surgical image based on POI definition and phase recognition with a 3D-based simulated surgical image.
- to solve the above problems, the present invention recognizes the surgical steps in a surgical image, obtains the position of POI (Point Of Interest) information for each surgical step from a 3D-based virtual surgical simulation image, and provides it together with the surgical image.
- the present invention provides a 3D-based virtual surgical simulation environment based on a virtual pneumoperitoneum (abdominal insufflation) model in which the patient's actual insufflated state is predicted, and matches the 3D-based virtual surgery simulation environment with the actual surgical image, so that the virtual surgery simulation environment can serve as an effective rehearsal for the actual surgery.
- the POI information setting step may include: dividing the surgical image into one or more basic steps based on a surgical object, in order to divide the surgical image into the surgical steps; dividing the surgical operation for a target anatomy or a target object corresponding to the surgical purpose of the surgical object into one or more subsections; and dividing one or more unit images included in each of the subsections into divided sections for each operation of a surgical tool.
- the POI information setting step may set the POI information based on the divided subsections.
- the POI information position determination step may include: recognizing, based on a deep learning model, the basic step corresponding to the real-time surgical step of the actual surgical image; recognizing a first subsection corresponding to the real-time surgical step among the plurality of subsections included in the basic step; determining, among the plurality of divided sections of the first subsection, the divided section corresponding to the real-time surgical step as the point in time when the image information is needed; determining the position of the POI information for the first subsection, or for a second subsection that is the step immediately following the first subsection; and matching the position of the recognized POI information onto the virtual pneumoperitoneum model.
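The hierarchical lookup described above (basic step, then subsection, then divided section, with the POI taken from the current or the immediately following subsection) can be sketched in plain Python. The deep learning model is stubbed out, and the step names and coordinates are illustrative assumptions, not taken from the patent:

```python
from typing import Optional

# Hypothetical POI table: POI position per (basic step, subsection).
POI_TABLE = {
    ("mobilization", "vessel_exposure"): (12.0, 4.5, -3.2),
    ("mobilization", "vessel_ligation"): (12.4, 5.1, -2.8),
}

# Surgical order of subsections within each basic step.
SUBSECTION_ORDER = {
    "mobilization": ["vessel_exposure", "vessel_ligation"],
}

def recognize_phase(frame):
    """Stand-in for the deep learning model: returns the recognized
    (basic step, subsection, divided section) for a video frame."""
    return ("mobilization", "vessel_exposure", "tool_approach")

def poi_position(frame, use_next_subsection: bool = False) -> Optional[tuple]:
    """Return the POI position for the recognized subsection, or for
    the immediately following subsection when requested."""
    basic, sub, _section = recognize_phase(frame)
    if use_next_subsection:
        order = SUBSECTION_ORDER[basic]
        idx = order.index(sub)
        if idx + 1 >= len(order):
            return None          # no following subsection in this basic step
        sub = order[idx + 1]
    return POI_TABLE.get((basic, sub))
```

In a real system `recognize_phase` would be a trained video classifier; the table lookup logic around it stays the same.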
- the image information obtaining step may include moving, on the UI, the position of the camera inserted into the virtual pneumoperitoneum model based on the determined position of the POI information, acquiring image information of the corresponding position from the camera, and displaying the acquired image information on the UI at that point in time.
- the POI information matching step may include acquiring the virtual pneumoperitoneum model for the patient, displaying the virtual pneumoperitoneum model on the UI (User Interface), inserting a camera (endoscope) into the model, and sequentially matching the POI information to the virtual pneumoperitoneum model while the camera is inserted.
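The sequential matching of POI information to the virtual pneumoperitoneum model while the endoscope camera is inserted could be organized as follows; the class and method names are hypothetical, not the patent's API:

```python
class VirtualPneumoperitoneumModel:
    """Minimal stand-in for the patient's 3D virtual model (illustrative only)."""

    def __init__(self):
        self.camera_inserted = False
        self.pois = []            # (surgical step, position) pairs, in order

    def insert_camera(self):
        # The endoscope camera is inserted before any POI is matched.
        self.camera_inserted = True

    def match_poi(self, step, position):
        # POIs are matched sequentially, only while the camera is inserted.
        assert self.camera_inserted, "insert the endoscope camera first"
        self.pois.append((step, position))

model = VirtualPneumoperitoneumModel()
model.insert_camera()
model.match_poi("vessel_exposure", (12.0, 4.5, -3.2))
model.match_poi("vessel_ligation", (12.4, 5.1, -2.8))
```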
- the POI information location determination step may include recognizing the real-time surgical step of the actual surgical image based on a deep learning model, and determining the position of the POI information for the recognized surgical step, or for the surgical step immediately following it.
- the image information obtaining step may include moving the position of the camera inserted into the virtual pneumoperitoneum model based on the determined position of the POI information, acquiring image information of the corresponding position from the camera, and displaying the acquired image information on the UI in real time.
- the image information may be displayed at the corresponding position of the virtual pneumoperitoneum model, and an indicator indicating a description of the image information may additionally be displayed.
- for a portion invisible to the naked eye in the actual surgical image, the image information may be rendered in real time and displayed on the UI.
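The acquisition step above (move the inserted camera to the POI position, capture image information there, display it on the UI) can be sketched as follows; the rendering itself is stubbed out, and all names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class EndoscopeCamera:
    """Illustrative virtual endoscope inside the 3D model (not the patent's API)."""
    position: tuple = (0.0, 0.0, 0.0)

    def move_to(self, poi_position):
        self.position = poi_position

    def capture(self):
        # A real implementation would render the 3D scene at self.position;
        # here we return a placeholder frame descriptor.
        return {"rendered_at": self.position}

def show_poi_view(camera, poi_position, ui):
    camera.move_to(poi_position)   # move camera to the determined POI position
    frame = camera.capture()       # acquire image information at that position
    ui.append(frame)               # display on the UI in real time
    return frame

ui_frames = []
frame = show_poi_view(EndoscopeCamera(), (12.0, 4.5, -3.2), ui_frames)
```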
- the virtual pneumoperitoneum model may include at least one of the actual organs, blood vessels, fat, and muscles in the patient's insufflated state.
- the POI information may be set for each surgical operation in a pre-stored surgical image of the same surgery as the real surgical image.
- the apparatus may include a processor that sequentially matches the POI information to the patient's virtual pneumoperitoneum model used in the virtual surgery simulation environment, recognizes the real-time surgical step of the actual surgical image, determines the position of the POI information for the recognized surgical step, and, on the UI, obtains and displays image information captured by moving the camera (Endoscope) inserted into the virtual pneumoperitoneum model to the same position as the determined POI position.
- the present invention recognizes the surgical steps in the surgical image, obtains the position of the POI information for each surgical step from a 3D-based virtual surgical simulation image, and provides it together with the surgical image, so that even a specialist with little experience in the given surgery can receive an effective surgical guide in real time, which can significantly increase the success rate of the surgery and reduce its risk.
- the present invention provides a 3D-based virtual surgical simulation environment based on the virtual pneumoperitoneum model in which the patient's actual insufflated state is predicted, and matches the 3D-based virtual surgery simulation environment with the actual surgical image, which has the effect of providing an excellent guide to the specialist performing the surgery.
- FIG. 1 is a diagram schematically illustrating an apparatus 10 for matching a real surgical image and a 3D-based virtual surgical simulation environment according to the present invention.
- FIG. 2 is a diagram explaining a process for matching a real surgery image and a 3D-based virtual surgery simulated environment according to the present invention.
- FIG. 3 is an exemplary view showing blood vessel matching in a virtual pneumoperitoneum model according to the present invention.
- FIG. 4 is an exemplary view of a UI showing a 3D-based virtual surgery simulation environment according to the present invention.
- FIG. 5 is a diagram explaining moving the position of the camera inserted into the virtual pneumoperitoneum model to the position of the POI information of the surgical step in the actual surgical image according to the present invention.
- FIG. 6 is an exemplary diagram explaining that an image captured by a camera inserted into a virtual pneumoperitoneum model according to the present invention is displayed on a UI in real time together with an actual surgical image.
- FIG. 7 is an exemplary diagram explaining that an image captured by a camera inserted into a virtual pneumoperitoneum model according to the present invention is displayed overlaid on an actual surgical image on a UI.
- FIG. 8 is a flowchart illustrating a process of matching a real surgical image and a 3D-based virtual surgical simulation environment according to the present invention.
- FIG. 9 is an exemplary diagram for explaining a process of determining a location of POI information by a processor according to the present invention.
- the apparatus 10 may provide a surgeon, in real time, with information about the inside of the patient's body that is invisible to the naked eye during the actual operation, as it is provided in the 3D-based virtual surgery simulation.
- the device 10 recognizes the surgical steps in the surgical image, obtains the position of the POI information for each surgical step from a 3D-based virtual surgical simulation image, and provides it together with the surgical image, so that even specialists with little experience in the given surgery can receive an excellent surgical guide in real time, which can significantly increase the success rate of the surgery and reduce its risk.
- the apparatus 10 provides a 3D-based virtual surgical simulation environment based on the virtual pneumoperitoneum model in which the patient's actual insufflated state is predicted, and matches the 3D-based virtual surgery simulation environment with the actual surgical image, which can have the effect of providing excellent guidance to the specialist performing the surgery.
- the device 10 may be in the form of a computer; here, a computer includes any of various devices capable of providing results to a user by performing computational processing.
- a computer includes not only desktop PCs and notebooks but also smartphones, tablet PCs, cellular phones, PCS phones (Personal Communication Service phones), synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminals, Palm PCs (Palm Personal Computers), and PDAs (Personal Digital Assistants).
- if a Head Mounted Display (HMD) device includes a computing function, the HMD device may also serve as a computer.
- the computer may correspond to a server that receives a request from a client and performs information processing.
- the device 10 may include an acquisition unit 110 , a memory 120 , a display unit 130 and a processor 140 .
- the device 10 may include fewer or more components than those shown in FIG. 1 .
- the acquisition unit 110 may include one or more modules that enable wireless communication between the device 10 and an external device (not shown), between the device 10 and an external server (not shown), or between the device 10 and a communication network (not shown).
- the acquisition unit 110 may include one or more modules that connect the device 10 to one or more networks.
- the acquisition unit 110 may acquire the virtual pneumoperitoneum model used for the 3D-based virtual surgery simulation from the external server (not shown) or the memory 120.
- the external device may be medical imaging equipment that captures medical image data (hereinafter referred to as abdominal 3D image data).
- the medical image data may include all medical images capable of realizing the patient's body as a 3D model.
- the medical image data may include at least one of a computed tomography (CT) image, a magnetic resonance imaging (MRI) image, and a positron emission tomography (PET) image.
- the external server may be a server that stores, for a plurality of patients, virtual pneumoperitoneum models for each patient, medical data for each patient, and surgical images for each patient.
- the medical data for each patient may include data on at least one of the patient's age, gender, height, weight, body mass index, and presence or absence of childbirth.
- a communication network may transmit and receive various information between the device 10, an external device (not shown), and an external server (not shown).
- Various types of communication networks may be used as the communication network; for example, wireless communication methods such as WLAN (Wireless LAN), Wi-Fi, WiBro, WiMAX, and HSDPA (High Speed Downlink Packet Access) may be used,
- or wired communication methods such as Ethernet, xDSL (ADSL, VDSL), HFC (Hybrid Fiber Coax), FTTC (Fiber To The Curb), and FTTH (Fiber To The Home) may be used.
- the communication network is not limited to the communication methods presented above, and may include all other types of communication methods that are widely known or will be developed in the future in addition to the above communication methods.
- the memory 120 may store data supporting various functions of the device 10 .
- the memory 120 may store a plurality of application programs (application programs or applications) running in the device 10 , data for operation of the device 10 , and commands. At least some of these applications may exist for basic functions of the device 10 . Meanwhile, the application program may be stored in the memory 120, installed on the device 10, and driven by the processor 140 to perform an operation (or function) of the device 10.
- the memory 120 may include a plurality of processes for matching an actual surgical image with a 3D-based virtual surgical simulation environment according to the present invention.
- the plurality of processes will be described later when the operation of the processor 140 is described.
- the memory 120 may store virtual pneumoperitoneum models for each patient, surgical images, and the like.
- the virtual pneumoperitoneum model may be generated by the processor 140 and stored, or obtained from the external server (not shown) and stored.
- the virtual pneumoperitoneum model may include at least one of the actual organs, blood vessels, fat, and muscles in the patient's insufflated state.
- the display unit 130 may implement a touch screen by forming a mutual layer structure with the touch sensor or by being integrated with it. Such a touch screen may provide an input interface between the device 10 and a user.
- the display unit 130 may display the actual surgical image and the virtual relief model.
- the processor 140 may control general operations of the device 10 in addition to operations related to the application program.
- the processor 140 may provide or process appropriate information or functions to a user by processing signals, data, information, etc. input or output through the components described above or by running an application program stored in the memory 120.
- the processor 140 may control at least some of the components discussed in conjunction with FIG. 1 in order to drive an application program stored in the memory 120 . Furthermore, the processor 140 may combine and operate at least two or more of the components included in the device 10 to drive the application program.
- the processor 140 may receive, from an expert, POI (Point of Interest) information for each surgical operation in a pre-stored surgical image of the same surgery as the actual surgical image, and set it.
- the POI information may include information on at least one of blood vessels, organs, and skin, which should be checked as important during surgery, and may be set according to an expert's input.
- the POI information may include at least one of: the type of indicator (for example, an arrow, an asterisk, or a circle) representing the POI in at least one of the real-time surgical image and the virtual pneumoperitoneum model; the displayed position (coordinates) of the POI information in at least one of the real-time surgical image and the virtual pneumoperitoneum model; and detailed information on the POI information (notes for surgery at the corresponding surgical step set by the expert, precautions related to the surgery, notifications, and the like, displayed as text or images).
- the POI information on the blood vessel may include at least one of a branch point of a blood vessel, a branch point of an artery, and a branch point of a vein.
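One possible encoding of the POI information described above (indicator type, displayed coordinates, expert notes, and the vessel branch kind) is a simple record; the field names and sample values are assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class POIInfo:
    """Illustrative POI record (field names are assumptions)."""
    indicator: str                       # e.g. "arrow", "asterisk", "circle"
    position: tuple                      # displayed (x, y, z) coordinates
    detail: str                          # expert's notes/precautions, shown as text
    vessel_branch: Optional[str] = None  # "vessel", "artery", or "vein" branch point

poi = POIInfo(
    indicator="arrow",
    position=(12.0, 4.5, -3.2),
    detail="Check the arterial branch point before ligation.",
    vessel_branch="artery",
)
```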
- the expert can define a surgical step based on pre-stored surgical images according to the type of surgery, and can designate POI information that is a point of interest for each surgical stage.
- the processor 140 may receive and set POI information for each surgical operation in the pre-stored surgical image from the expert based on a first process among a plurality of processes.
- the processor 140 may divide the surgical image into one or more basic steps based on a surgical object in order to divide the surgical image into the surgical steps according to the expert's input.
- the surgical object may be an organ or lesion on which a surgical operation is performed.
- the processor 140 may, according to the expert's input, divide the surgical image into organ units according to the purpose of the surgical operation for the surgical object, and may segment the surgical image by surgery type according to that purpose.
- the processor 140 may divide directions included in the surgical image according to the purpose of the surgical operation for the surgical object according to the expert's input.
- the processor 140 may divide the plurality of unit images included in the basic step into main sections according to the expert's input.
- the processor 140 may, according to the expert's input, determine the positions of the surgical tools according to specific criteria, definitions, or goals for how the surgery on the surgical object is to be performed, and may accordingly divide the plurality of unit images included in the basic step into the main sections.
- the processor 140 may divide the surgical operation corresponding to the surgical purpose of the surgical object in the main section into one or more subsections according to the expert's input.
- the processor 140 may set the POI information based on the divided subsection according to the expert's input.
- the processor 140 may divide the operation into subsections having one or more hierarchies based on a target anatomy or a target object on which an operation is performed by a surgical tool.
- the processor 140 may divide the surgical operation into tissue transformation or basic operations according to the purpose of the surgical operation performed on a specific target anatomy or a specific target object.
- the processor 140 may determine the unit images included in each subsection as specific unit motions and divide them into key movements.
- the target tissue is an anatomical part that is manipulated during surgery.
- the target object is a material used in the surgical field that is necessary for surgery, and may include at least one of a metal clip, a plastic clip, a suture, a needle, a gauze, and a drain.
- the processor 140 may divide one or more unit images included in each of the subsections into divided sections for each operation by a surgical tool according to the expert's input.
- the processor 140 may determine one or more unit images included in each of the subsections as a specific unit motion according to the expert's input, divide them into key movements, and determine a single operation of a surgical instrument as a unit operation, dividing it into the divided sections.
- the processor 140 may divide the motion into a first single motion according to the spatial coordinate movement of the surgical tool in the surgical image according to the surgical purpose based on the expert's input.
- the processor 140 may divide into a second single motion according to the motion of the joint within the spatial coordinates of the surgical tool in the surgical image according to the surgical purpose based on the expert's input.
- the processor 140 may divide the surgical image into a third single motion according to the movement of the surgical tool in the surgical image according to the surgical purpose based on the expert's input.
- the processor 140 may set POI information for each surgical operation based on the divided sections according to the expert's input.
- the processor 140 may subdivide the surgical steps of the surgical image according to the expert's input and set POI information during surgery for each subsection among the subdivided surgical steps.
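As an illustrative aid (not part of the patent's disclosure), the hierarchy described above — basic steps segmented by surgical object, main sections, subsections per target anatomy or target object, and divided sections per tool operation, with POI information set at the finer levels — can be sketched as a data structure. All class, field, and example names here are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DividedSection:          # one unit operation of a surgical tool
    name: str
    poi: Optional[str] = None  # POI information set by the expert, e.g. a branch point

@dataclass
class SubSection:              # operation on a target anatomy / target object
    name: str
    divided_sections: List[DividedSection] = field(default_factory=list)

@dataclass
class MainSection:             # grouped by criteria/goals for operating on the object
    name: str
    subsections: List[SubSection] = field(default_factory=list)

@dataclass
class BasicStep:               # segmented by surgical object (organ or lesion)
    surgical_object: str
    main_sections: List[MainSection] = field(default_factory=list)

def collect_pois(step: BasicStep):
    """Gather all expert-set POI entries of a basic step in stage order."""
    return [ds.poi
            for ms in step.main_sections
            for ss in ms.subsections
            for ds in ss.divided_sections
            if ds.poi is not None]

step = BasicStep("stomach", [MainSection("dissection", [
    SubSection("vessel exposure", [DividedSection("clip artery", poi="artery branch point"),
                                   DividedSection("cut")])])])
print(collect_pois(step))  # ['artery branch point']
```

Walking the tree in this nested order yields the POI information in the same stage order used later for sequential matching.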
- the processor 140 may match the POI information to the patient's virtual relief (pneumoperitoneum) model used in the 3D-based virtual surgery simulation environment.
- the virtual relief model may be displayed on a user interface (UI).
- the processor 140 may match the POI information to a virtual relief model of the patient based on a second process among a plurality of processes.
- the processor 140 may acquire or generate the virtual relief model of the patient through the memory 120 or an external server (not shown).
- the virtual relief model may be generated by the processor 140 to predict an actual pneumoperitoneum state of the patient based on the patient's state data, a plurality of landmark data, and body data.
- condition data may include data on at least one of the patient's age, gender, height, weight, body mass index, and childbirth.
- the plurality of landmark data may be displayed on the patient's abdominal 3D image data.
- the plurality of cross-sectional image data may be cross-sections at the positions where the plurality of landmark data are displayed.
- the body data may include at least one of the height-to-width ratio of the patient's body in the plurality of cross-sectional image data, the skin circumference, a direction and distance with respect to the anterior-posterior direction of the body, a fat area, and a muscle area.
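A minimal sketch of assembling these inputs — state data, landmark data, and per-cross-section body data — for the pneumoperitoneum-prediction model. The dictionary keys and the division of responsibilities are assumptions for illustration, not the patent's schema:

```python
def build_model_inputs(state, landmarks, cross_sections):
    """Collect state data, landmark data, and body data as model inputs.

    `state` carries the patient attributes named in the description;
    `cross_sections` holds one measurement dict per landmark cross-section.
    """
    body_data = []
    for sec in cross_sections:
        body_data.append({
            "height_width_ratio": sec["height"] / sec["width"],
            "skin_circumference": sec["skin_circumference"],
            "fat_area": sec["fat_area"],
            "muscle_area": sec["muscle_area"],
        })
    return {
        "state": {k: state[k] for k in
                  ("age", "gender", "height", "weight", "bmi", "childbirth")},
        "landmarks": landmarks,   # positions marked on the abdominal 3D image data
        "body_data": body_data,
    }
```

The returned dictionary would then be fed to whatever model predicts the patient's actual pneumoperitoneum state.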
- the processor 140 may implement the same blood vessel as the state of the blood vessel in the patient's actual relief state when generating the virtual relief model.
- the processor 140 may use an EAP (early arterial phase) image to segment/reconstruct an artery on the CT image, and may use a PP (portal phase) image to segment/reconstruct a vein on the CT image.
- the position of the patient may be different when the EAP and the PP are photographed, so matching may be required.
- during the segmentation/reconstruction of the vein, the processor 140 may additionally segment/reconstruct the main part of the artery, and adjust the position of the artery so that the main parts of the artery in the two images coincide.
- the processor 140 may match the vein and the artery based on the main part 301 of the artery, either automatically or by receiving an expert's manipulation.
- the main part 301 of the artery may be POI information and may be a branch point of the artery.
- accordingly, even the shape of the blood vessels in the virtual relief model can be implemented identically to the patient's actual relief state.
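Since the patient may have moved between the EAP and PP acquisitions, the artery's main part is reconstructed in both scans and used to bring the vein into the artery's frame. A minimal sketch, under the simplifying assumption that a rigid translation of centroids suffices (a real pipeline would use proper image registration):

```python
import numpy as np

def align_by_artery_main_part(artery_pts_eap, artery_pts_pp, vein_pts_pp):
    """Translate the PP (vein) scan so its artery main part matches the EAP scan.

    The offset between the centroids of the artery main part in the two scans
    is applied to the vein point cloud, aligning vein and artery.
    """
    offset = np.mean(artery_pts_eap, axis=0) - np.mean(artery_pts_pp, axis=0)
    return vein_pts_pp + offset

eap = np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]])   # artery main part, EAP scan
pp = eap + np.array([0.5, 0.0, -0.5])                # patient shifted between scans
vein = np.array([[3.0, 3.0, 3.0]])                   # vein point from the PP scan
print(align_by_artery_main_part(eap, pp, vein))      # [[2.5 3.  3.5]]
```

After this shift the artery main parts coincide, so vein and artery can be rendered in one consistent coordinate frame.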
- the processor 140 can help prevent accidents during surgery by providing the expert with image information about the POI information for each surgical stage (e.g., the bifurcation point of a blood vessel), obtained from a camera inserted into the virtual relief model.
- the processor 140 may display the virtual relief model on the UI (User Interface).
- the UI 400 may include a main screen area 410 and a preview screen area 420 .
- the main screen area 410 may be an area where an actual surgical image captured in real time through a camera (Endoscope) inserted into the patient's body during an actual surgery is displayed.
- the preview screen area 420 may include a first area 421 in which the virtual relief model is displayed in a plan view, and a second area 422 in which an internal image of the virtual relief model, captured through the camera (Endoscope) inserted through the reference trocar of the virtual relief model, is displayed.
- the processor 140 may simultaneously output, on the UI 400, the first area 421 viewing the surface of the relief model from above and the second area 422 viewing the inside of the relief model through the inserted camera (endoscope).
- the first region 421 may be a screen viewing the surface of the relief model from above. Also, the first area 421 may display information on a tool inserted through at least one trocar of the virtual relief model.
- the second area 422 may be an area in which an internal image of the virtual relief model captured by the camera inserted through the reference trocar is displayed.
- a state in which a tool inserted into the at least one trocar enters the inside of the virtual relief model may be photographed through the camera and displayed in real time.
- the processor 140 may insert a camera (endoscope) into a preset location of the virtual relief model.
- the processor 140 may insert the camera through a trocar inserted at a position spaced apart from the lower end of the navel of the virtual relief model at a predetermined interval.
- the processor 140 may sequentially match the POI information to the virtual relief model in a state in which the camera is inserted.
- the processor 140 may sequentially match the POI information for each surgical stage based on image information obtained through the camera to the virtual relief model in a state in which the camera is inserted.
- the processor 140 may recognize the real-time surgical stage of the actual surgical image and determine the location of the POI information for each recognized surgical stage.
- the processor 140 may recognize the real-time surgical stage of the actual surgical image based on a third process among a plurality of processes and determine the position of the POI information for each recognized surgical stage.
- the processor 140 may recognize a real-time surgical step of the actual surgical image based on a deep learning model.
- the deep learning model may include, but is not limited to, a convolutional neural network (CNN), and may be formed of neural networks having various structures.
- the processor 140 may recognize, based on the deep learning model, that the current viewpoint is the third step among real-time surgical steps (first to Nth steps) of the actual surgical image.
- when the processor 140 recognizes the real-time surgical step corresponding to the current point of view, it may search, among the POI information set for each surgical step of the pre-stored surgical image, the POI information corresponding to the recognized real-time surgical step or the POI information corresponding to the surgical step immediately following it, extract the retrieved POI information, and display the portion corresponding to the extracted POI information in the virtual relief model together with the actual surgical image.
- a plurality of surgical steps may be included in a pre-stored surgical image in which the same surgery is performed, and POI information may be set for each (or part) of the plurality of surgical steps.
- the processor 140 may distinguish each real-time surgical step from the actual surgical image.
- the processor 140 can recognize the current or previous surgical step from the actual surgical image.
- the processor 140 may determine whether POI information is set for a surgical stage corresponding to the recognized surgical stage in the pre-stored surgical image.
- the processor 140 may match information about the location of the determined POI information on the virtual relief model.
- the processor 140 may match the position information of the determined POI information to the corresponding POI position on the second area 422, where the screen viewed through the camera (endoscope) is displayed.
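The recognize-then-look-up behaviour above can be sketched as follows; the `classify_phase` callable stands in for the CNN-based phase-recognition model, and the stage-to-POI table stands in for the expert-set POI information (all names are illustrative, not the patent's API):

```python
def poi_for_frame(classify_phase, frame, poi_by_stage):
    """Recognize the current surgical stage and return the POI to display.

    Falls back to the next stage's POI when the current stage has none set,
    mirroring the 'current or immediately following stage' behaviour.
    """
    stage = classify_phase(frame)          # e.g. a CNN phase-recognition model
    if stage in poi_by_stage:
        return stage, poi_by_stage[stage]
    return stage, poi_by_stage.get(stage + 1)  # POI of the next surgical stage

poi_table = {3: "vein branch point", 5: "artery branch point"}
print(poi_for_frame(lambda f: 3, object(), poi_table))  # (3, 'vein branch point')
print(poi_for_frame(lambda f: 4, object(), poi_table))  # (4, 'artery branch point')
```

The returned POI is what would then be matched onto the virtual relief model and shown in the second area 422.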
- the processor 140 may recognize a basic step corresponding to a real-time surgical step of the actual surgical image based on a deep learning model.
- the processor 140 may recognize a first subsection corresponding to the real-time surgical step among a plurality of subsections included in the basic step.
- the processor 140 may determine a time point at which the image information is needed for one divided section corresponding to the real-time surgical step among a plurality of divided sections of the first subsection.
- the processor 140 may determine the location of the POI information for each of the first subsections or the location of the POI information for each of the second subsections immediately following the first subsection.
- the processor 140 may match information about the location of the recognized POI information on the virtual relief model.
- the processor 140 may move the position of the camera (Endoscope) inserted into the virtual relief model to the same position as the determined POI information through the UI 400, and acquire and display the captured image information.
- the processor 140 may acquire and display, on the UI 400 and based on a fourth process among the plurality of processes, image information photographed from the camera (Endoscope) moved to the same position as the determined POI information.
- the processor 140 may move the position of the camera inserted into the virtual relief model based on the position of the determined POI information on the UI 400 .
- the processor 140 may obtain image information about the corresponding location from the camera.
- the processor 140 may display the obtained image information on the UI 400 in real time.
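The step of moving the virtual endoscope to the determined POI position and displaying what it captures can be sketched with a toy camera/UI interface (purely illustrative; the patent does not specify this API):

```python
class VirtualEndoscope:
    """Toy stand-in for a camera inserted into the virtual relief model."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)

    def move_to(self, position):
        self.position = position

    def capture(self):
        # a real system would render the model interior from self.position
        return f"frame@{self.position}"

def show_poi_view(ui_display, camera, poi_position):
    """Move the camera to the POI position and display the captured image."""
    camera.move_to(poi_position)
    ui_display(camera.capture())

frames = []
cam = VirtualEndoscope()
show_poi_view(frames.append, cam, (1.0, 2.0, 3.0))
print(frames)  # ['frame@(1.0, 2.0, 3.0)']
```

In the described system, `ui_display` would correspond to updating the UI 400 in real time with the captured image information.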
- the processor 140 may obtain an image of a part that cannot be seen in an actual surgical image through the camera inserted into the virtual relief model.
- the processor 140 may display the image information obtained from the camera at the corresponding position of the virtual relief model on the UI 400, and may additionally display an indicator describing the image information (e.g., "blood vessels").
- the processor 140 may render and display the image information on the UI 400 in real time for a portion invisible to the naked eye in the actual surgical image.
- the processor 140 may render blood vessels located behind organs on the main screen area 410 of FIG. 6 based on image information acquired from the camera inserted into the virtual relief model (the location and shape of the blood vessels are indicated by dotted lines, and the corresponding part is indicated by an arrow).
- FIG. 8 is a flowchart illustrating a process of matching a real surgical image and a 3D-based virtual surgical simulation environment according to the present invention.
- the operation of the processor 140 may be performed by the device 10 .
- the processor 140 may set POI (Point of Interest) information for each surgical operation in a pre-stored surgical image in which the same operation as the actual surgical image was performed (S801).
- the processor 140 may divide the surgical image into one or more basic steps based on surgical objects to divide the surgical image into the surgical steps according to the expert's input.
- the processor 140 may divide the plurality of unit images included in the basic step into main sections according to the expert's input.
- the processor 140 may determine the locations of surgical tools according to specific criteria, definitions, or goals for how surgery is to be performed on the surgical object based on the expert's input, and accordingly divide the plurality of unit images included in the basic step into main sections.
- the processor 140 may divide the surgical operation corresponding to the surgical purpose of the surgical object in the main section into one or more subsections according to the expert's input.
- the processor 140 may divide one or more unit images included in each of the subsections into divided sections for each operation by a surgical tool according to the expert's input.
- the processor 140 may set the POI information based on the divided subsection according to the expert's input.
- the processor 140 may match the POI information to the patient's virtual relief model used in the 3D-based virtual surgery simulation environment (S802).
- the processor 140 may acquire the virtual relief model of the patient from the memory 120 or an external server (not shown), and display the virtual relief model on the UI (User Interface).
- the virtual relief model may be displayed on a user interface (UI), and may include at least one of actual organs, blood vessels, fat, and muscles in the patient's relief state.
- the processor 140 may insert a camera (endoscope) into a preset position of the virtual relief model, and sequentially match the POI information to the virtual relief model in a state where the camera is inserted.
- the processor 140 may recognize the real-time surgical stage of the actual surgical image and determine the location of POI information for each recognized surgical stage (S803).
- the processor 140 may recognize the real-time surgical stage of the actual surgical image based on a deep learning model, and determine the position of the POI information for the recognized surgical stage or the position of the POI information for the surgical stage immediately following the recognized stage.
- the processor 140 may match information about the location of the determined POI information on the virtual relief model.
- the processor 140 may move the position of the camera (Endoscope) inserted into the virtual relief model to the same position as the determined POI information through the UI, and acquire and display the captured image information (S804).
- the processor 140 may move the position of the camera inserted into the virtual relief model based on the determined position of the POI information on the UI and obtain image information about the corresponding position from the camera.
- the processor 140 may display the obtained image information on the UI in real time.
- the processor 140 may display the image information at a corresponding position of the virtual relief model, and may additionally display an indicator indicating a description of the image information.
- the processor 140 may render and display the image information on the UI in real time for a portion invisible to the naked eye in the actual surgical image.
- Although FIG. 8 describes steps S801 to S804 as being executed sequentially, this is merely illustrative of the technical idea of this embodiment; those skilled in the art will appreciate that the order described in FIG. 8 may be changed, or one or more of steps S801 to S804 may be executed in parallel, without departing from the essential characteristics of this embodiment. FIG. 8 is therefore not limited to a time-series order.
- FIG. 9 is an exemplary diagram for explaining a process of determining the location of POI information by the processor 140 according to the present invention.
- the processor 140 may recognize a basic step corresponding to a real-time surgical step of the actual surgical image.
- the processor 140 may recognize a first subsection corresponding to the fourth step, which is the real-time surgical step, among the plurality of subsections included in the basic step.
- the processor 140 may determine a time point at which image information obtained through the virtual relief model is required for a third divided section corresponding to the real-time surgical step among the plurality of divided sections of the first subsection.
- the point in time may be a point in time immediately before a point where the POI information is located during surgery.
- the processor 140 may determine the location of the POI information for each of the first subsections or the location of the POI information for each of the second subsections immediately following the first subsection.
- the POI information may be a branch point of a vein.
- the processor 140 may display image information obtained from the camera inserted into the virtual relief model on the UI in real time based on the position of the determined POI information for each of the first subsections.
- that is, the processor 140 may display, on the UI in real time, the image information obtained based on the determined position of the POI information for each first subsection, in the third divided section at which the image information is required.
- the processor 140 can increase the success rate of surgery through necessary images in real time by providing the image information obtained through the virtual relief model based on the POI information required in real time during actual surgery to the expert.
- the processor 140 may display, on the UI in real time, image information obtained from the camera inserted into the virtual relief model based on the determined position of the POI information for each second subsection immediately following the first subsection. That is, the processor 140 may display, on the UI in real time, the image information obtained based on the determined position of the POI information for each second subsection, in the third divided section at which the image information is needed.
- the processor 140 can increase the success rate of surgery by providing the expert, during the actual surgery, with the image information obtained through the virtual relief model based on the POI information required for the next surgical procedure.
- the method according to the present invention described above may be implemented as a program (or application) to be executed in combination with a computer, which is hardware, and stored in a medium.
- the computer may be the device 10 described above.
- the above-described program may include code coded in a computer language, such as C, C++, C#, JAVA, or machine language, that can be read by a processor (CPU) of the computer through a device interface of the computer, so that the computer reads the program and executes the methods implemented as the program.
- these codes may include functional codes related to functions defining the functions necessary for executing the methods, and may include control codes related to the execution procedures necessary for the computer's processor to execute those functions according to a predetermined procedure.
- these codes may further include memory-reference-related codes indicating which location (address) of the computer's internal or external memory should be referenced for the additional information or media required for the computer's processor to execute the functions.
- the codes may further include communication-related codes indicating how the computer's communication module should communicate with any other remote computer or server, and what kind of information or media should be transmitted/received during communication.
- Steps of a method or algorithm described in connection with an embodiment of the present invention may be implemented directly in hardware, implemented in a software module executed by hardware, or implemented by a combination thereof.
- a software module may reside in random access memory (RAM), read only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art to which the present invention pertains.
Claims (13)
- A method of matching an actual surgical image with a 3D-based virtual surgery simulation environment, performed by a device, the method comprising: setting POI (Point of Interest) information for each surgical stage in a pre-stored surgical image in which the same surgery as the actual surgical image was performed; matching the POI information to a virtual relief model of a patient that is displayed on a UI (User Interface) and used in the 3D-based virtual surgery simulation environment; recognizing a real-time surgical stage of the actual surgical image and determining a position of the POI information for each recognized surgical stage; and acquiring and displaying, through the UI, image information captured by moving a position of a camera (Endoscope) inserted into the virtual relief model to the same position as the determined POI information.
- The method of claim 1, wherein the setting of the POI information comprises: dividing the surgical image into one or more basic steps based on a surgical object in order to divide the surgical image into the surgical stages; dividing a surgical operation on a target anatomy or a target object corresponding to a surgical purpose for the surgical object into one or more subsections (SUB SECTION); and dividing one or more unit images included in each of the subsections into divided sections for each operation by a surgical tool.
- The method of claim 2, wherein the setting of the POI information sets the POI information based on the divided subsections.
- The method of claim 3, wherein the determining of the position of the POI information comprises: recognizing a basic step corresponding to a real-time surgical stage of the actual surgical image based on a deep learning model; recognizing a first subsection corresponding to the real-time surgical stage among a plurality of subsections included in the basic step; determining, for one divided section corresponding to the real-time surgical stage among a plurality of divided sections of the first subsection, a time point at which the image information is needed; determining a position of the POI information for each first subsection or a position of the POI information for each second subsection immediately following the first subsection; and matching information about the position of the recognized POI information onto the virtual relief model.
- The method of claim 4, wherein the acquiring of the image information comprises: moving the position of the camera inserted into the virtual relief model based on the determined position of the POI information on the UI; acquiring image information about the corresponding position from the camera; and displaying the acquired image information on the UI at the time point.
- The method of claim 1, wherein the matching of the POI information comprises: acquiring the virtual relief model for the patient; displaying the virtual relief model on the UI (User Interface); inserting a camera (endoscope) into a preset position of the virtual relief model; and sequentially matching the POI information to the virtual relief model in a state in which the camera is inserted.
- The method of claim 1, wherein the determining of the position of the POI information comprises: recognizing a real-time surgical stage of the actual surgical image based on a deep learning model; determining a position of the POI information for each recognized surgical stage or a position of the POI information for each surgical stage immediately following the recognized surgical stage; and matching information about the determined position of the POI information onto the virtual relief model.
- The method of claim 7, wherein the acquiring of the image information comprises: moving the position of the camera inserted into the virtual relief model based on the determined position of the POI information on the UI; acquiring image information about the corresponding position from the camera; and displaying the acquired image information on the UI in real time.
- The method of claim 8, wherein the displaying on the UI displays the image information at the corresponding position of the virtual relief model, and additionally displays an indicator describing the image information.
- The method of claim 9, wherein the displaying on the UI renders and displays the image information on the UI in real time for a portion invisible to the naked eye in the actual surgical image.
- The method of claim 1, wherein the virtual relief model includes at least one of actual organs, blood vessels, fat, and muscles in the patient's relief state.
- A computer program stored in a computer-readable recording medium, combined with a device that is hardware, to execute the method of matching an actual surgical image with a 3D-based virtual surgery simulation environment of any one of claims 1 to 11.
- An apparatus for matching an actual surgical image with a 3D-based virtual surgery simulation environment, the apparatus comprising: an acquisition unit that acquires the actual surgical image and a pre-stored surgical image in which the same surgery as the actual surgical image was performed; a display unit that displays the actual surgical image and the virtual relief model; and a processor that sets POI information for each surgical stage in the pre-stored surgical image, sequentially matches the POI information to a virtual relief model of a patient that is displayed on a UI (User Interface) and used in the 3D-based virtual surgery simulation environment, recognizes a real-time surgical stage of the actual surgical image, determines a position of the POI information for each recognized surgical stage, and acquires and displays, through the UI, image information captured by moving a position of a camera (Endoscope) inserted into the virtual relief model to the same position as the determined POI information.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22849794.7A EP4364685A1 (en) | 2021-07-29 | 2022-07-20 | Device and method for matching actual surgery image and 3d-based virtual simulation surgery image on basis of poi definition and phase recognition |
CN202280052613.2A CN117881355A (zh) | 2021-07-29 | 2022-07-20 | 整合基于兴趣点定义及步骤识别的实际手术图像与基于3d的虚拟模拟手术图像的装置及方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2021-0099678 | 2021-07-29 | ||
KR1020210099678A KR102628325B1 (ko) | 2021-07-29 | 2021-07-29 | POI 정의 및 Phase 인식 기반의 실제 수술 영상과 3D 기반의 가상 모의 수술 영상을 정합하는 장치 및 방법 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023008818A1 true WO2023008818A1 (ko) | 2023-02-02 |
Family
ID=85038046
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2022/010608 WO2023008818A1 (ko) | 2021-07-29 | 2022-07-20 | Poi 정의 및 phase 인식 기반의 실제 수술 영상과 3d 기반의 가상 모의 수술 영상을 정합하는 장치 및 방법 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11986248B2 (ko) |
EP (1) | EP4364685A1 (ko) |
KR (2) | KR102628325B1 (ko) |
CN (1) | CN117881355A (ko) |
WO (1) | WO2023008818A1 (ko) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120111871A (ko) * | 2011-03-29 | 2012-10-11 | 삼성전자주식회사 | 3차원적 모델을 이용한 신체 장기의 영상 생성 방법 및 장치 |
KR20150043245A (ko) * | 2012-08-14 | 2015-04-22 | 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 | 다중 비전 시스템의 정합을 위한 시스템 및 방법 |
KR20190099999A (ko) * | 2018-02-20 | 2019-08-28 | (주)휴톰 | 수술 시뮬레이션 정보 구축 방법, 장치 및 프로그램 |
KR102008891B1 (ko) * | 2018-05-29 | 2019-10-23 | (주)휴톰 | 수술보조 영상 표시방법, 프로그램 및 수술보조 영상 표시장치 |
KR20210034178A (ko) * | 2019-09-20 | 2021-03-30 | 울산대학교 산학협력단 | 수술 영상 제공 시스템 및 그 방법 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7607440B2 (en) * | 2001-06-07 | 2009-10-27 | Intuitive Surgical, Inc. | Methods and apparatus for surgical planning |
EP3426179B1 (en) * | 2016-03-12 | 2023-03-22 | Philipp K. Lang | Devices for surgery |
US11011077B2 (en) | 2017-06-29 | 2021-05-18 | Verb Surgical Inc. | Virtual reality training, simulation, and collaboration in a robotic surgical system |
JP2021510110A (ja) * | 2018-01-10 | 2021-04-15 | コヴィディエン リミテッド パートナーシップ | 外科用ポートの配置のためのガイダンス |
KR102276862B1 (ko) | 2018-03-06 | 2021-07-13 | (주)휴톰 | 수술영상 재생제어 방법, 장치 및 프로그램 |
JP7188970B2 (ja) | 2018-10-11 | 2022-12-13 | ザイオソフト株式会社 | ロボット手術支援装置、ロボット手術支援装置の作動方法、及びプログラム |
US11382696B2 (en) * | 2019-10-29 | 2022-07-12 | Verb Surgical Inc. | Virtual reality system for simulating surgical workflows with patient models |
-
2021
- 2021-07-29 US US17/389,133 patent/US11986248B2/en active Active
- 2021-07-29 KR KR1020210099678A patent/KR102628325B1/ko active IP Right Grant
-
2022
- 2022-07-20 WO PCT/KR2022/010608 patent/WO2023008818A1/ko active Application Filing
- 2022-07-20 EP EP22849794.7A patent/EP4364685A1/en active Pending
- 2022-07-20 CN CN202280052613.2A patent/CN117881355A/zh active Pending
-
2024
- 2024-01-18 KR KR1020240007971A patent/KR20240014549A/ko active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20230031396A1 (en) | 2023-02-02 |
EP4364685A1 (en) | 2024-05-08 |
KR20230019281A (ko) | 2023-02-08 |
KR102628325B1 (ko) | 2024-01-24 |
CN117881355A (zh) | 2024-04-12 |
KR20240014549A (ko) | 2024-02-01 |
US11986248B2 (en) | 2024-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018155894A1 (ko) | 영상 정합 장치 및 영상 정합 방법 | |
KR102014359B1 (ko) | 수술영상 기반 카메라 위치 제공 방법 및 장치 | |
WO2015099427A1 (ko) | 의료용 바늘의 삽입 경로의 생성 방법 | |
WO2019132244A1 (ko) | 수술 시뮬레이션 정보 생성방법 및 프로그램 | |
WO2019083227A1 (en) | MEDICAL IMAGE PROCESSING METHOD, AND MEDICAL IMAGE PROCESSING APPARATUS IMPLEMENTING THE METHOD | |
WO2019132165A1 (ko) | 수술결과에 대한 피드백 제공방법 및 프로그램 | |
WO2019164275A1 (ko) | 수술도구 및 카메라의 위치 인식 방법 및 장치 | |
WO2018174507A1 (ko) | 가상현실을 이용한 신경질환 진단 장치 및 방법 | |
WO2021157851A1 (ko) | 초음파 진단 장치 및 그 동작 방법 | |
WO2019209052A1 (en) | Medical imaging apparatus and method of controlling the same | |
WO2023008818A1 (ko) | Poi 정의 및 phase 인식 기반의 실제 수술 영상과 3d 기반의 가상 모의 수술 영상을 정합하는 장치 및 방법 | |
WO2019132166A1 (ko) | 수술보조 영상 표시방법 및 프로그램 | |
WO2016093453A1 (en) | Ultrasound diagnostic apparatus and method of operating the same | |
WO2022124705A1 (ko) | 의료 영상 기반 홀로그램 제공 장치 및 방법 | |
WO2021206518A1 (ko) | 수술 후 수술과정 분석 방법 및 시스템 | |
WO2019244896A1 (ja) | 情報処理システム、情報処理装置及び情報処理方法 | |
WO2016006765A1 (ko) | 엑스선 장치 | |
WO2020159276A1 (ko) | 수술 분석 장치, 수술영상 분석 및 인식 시스템, 방법 및 프로그램 | |
WO2023136695A1 (ko) | 환자의 가상 폐 모델을 생성하는 장치 및 방법 | |
WO2019164273A1 (ko) | 수술영상을 기초로 수술시간을 예측하는 방법 및 장치 | |
WO2021206517A1 (ko) | 수술 중 혈관 네비게이션 방법 및 시스템 | |
WO2023018138A1 (ko) | 환자의 가상 기복 모델을 생성하는 장치 및 방법 | |
WO2012015093A1 (ko) | 원격진료방법 및 전자기기 | |
WO2021096054A1 (ko) | 자동 의료그림 생성시스템, 이를 이용한 자동 의료그림 생성 방법 및 머신 러닝 기반의 자동 의료그림 생성 시스템 | |
WO2023003389A1 (ko) | 환자의 3차원 가상 기복 모델 상에 트로카의 삽입 위치를 결정하는 장치 및 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22849794 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280052613.2 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022849794 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2022849794 Country of ref document: EP Effective date: 20240129 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |