WO2019132614A1 - Method and apparatus for segmenting surgical images - Google Patents
Method and apparatus for segmenting surgical images
- Publication number
- WO2019132614A1 (PCT/KR2018/016913)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- classification unit
- surgical
- surgical image
- computer
- recognizing
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/361—Image-producing devices, e.g. surgical cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/363—Use of fiducial points
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
Definitions
- the present invention relates to a surgical image segmentation method and apparatus.
- Deep learning is defined as a set of machine learning algorithms that attempt a high level of abstraction (summarizing key content or functions from large volumes of data or complex data) through a combination of several nonlinear transformation techniques. Broadly speaking, deep learning can be viewed as a field of machine learning that teaches computers to think the way humans do.
- according to one aspect, a surgical image segmentation method comprises: obtaining, by a computer, a surgical image; recognizing one or more objects included in each of one or more frames of the surgical image; determining the position and motion of an imaging device capturing the surgical image and of each of the recognized one or more objects; and, based on the determination result, segmenting the surgical image into one or more first classification unit groups in which an operation of a predetermined first classification unit can be recognized.
- the step of dividing into the first classification unit groups may include dividing the surgical image, based on the result of determining the position and motion of each of the imaging device and the one or more objects, into one or more second classification unit groups in which the position and motion of each of the imaging device and the one or more objects can be recognized as an operation of a predetermined second classification unit.
- the step of dividing into the second classification unit groups may include dividing the surgical image into one or more third classification unit groups in which an operation of a predetermined third classification unit can be recognized, based on the one or more second classification unit operations recognized for each of the imaging device and the one or more objects, wherein each third classification unit group includes one or more of the second classification unit groups.
- the step of recognizing the operation of the first classification unit may include recognizing the operation of the first classification unit based on a positional relationship between a body part included in the surgical image and the one or more objects, and on the motion of the one or more objects.
- the step of recognizing the operation of the first classification unit may include acquiring modeling information corresponding to the body part, performing a surgical simulation including the modeling information and one or more virtual objects, in which the one or more virtual objects move according to the movement of the one or more objects, and recognizing the operation of the first classification unit using the simulation result.
- the step of dividing into the first classification unit groups may include dividing the surgical image into one or more fourth classification unit groups in which an operation of a predetermined fourth classification unit can be recognized, based on the one or more first classification unit operations recognized for each of the first classification unit groups, wherein each fourth classification unit group includes one or more of the first classification unit groups.
- the first classification unit may include information on one or more predetermined operations, and each of the predetermined operations may be assigned a code that identifies it.
- the method may further include recognizing an event occurring in the surgical image and dividing the surgical image into one or more event groups including the recognized event.
- an apparatus for segmenting a surgical image comprises a memory storing one or more instructions and a processor executing the one or more instructions stored in the memory. By executing the one or more instructions, the processor acquires a surgical image, recognizes one or more objects included in each of one or more frames of the surgical image, determines the position and motion of an imaging device capturing the surgical image and of each of the recognized one or more objects, and, based on the determination result, divides the surgical image into one or more first classification unit groups in which an operation of a predetermined first classification unit can be recognized.
- to solve the above-described problems, a computer program is provided that is stored in a computer-readable recording medium and, in combination with a computer as hardware, performs the surgical image segmentation method according to the disclosed embodiments.
- FIG. 1 is a diagram illustrating a robot surgery system in accordance with the disclosed embodiment.
- FIG. 2 is a flowchart illustrating a method of segmenting a surgical image according to an embodiment.
- FIG. 3 is a flowchart illustrating a method of spatially segmenting a surgical image according to an embodiment.
- FIG. 4 is a diagram showing an example of a surgical image.
- FIG. 5 is a flowchart illustrating a method of temporally dividing an operation image according to an embodiment.
- FIG. 6 is a flowchart illustrating an example of segmenting and recognizing a surgical image step by step.
- FIG. 7 is a diagram showing an example of a method of recognizing a surgical operation step by step.
- the term "part" or "module" refers to a software or hardware component, such as an FPGA or an ASIC, and a "part" or "module" performs certain roles. However, "part" or "module" is not limited to software or hardware. A "part" or "module" may be configured to reside on an addressable storage medium and configured to run on one or more processors. Thus, by way of example, a "part" or "module" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided within "parts" or "modules" may be combined into a smaller number of components and "parts" or "modules", or further separated into additional components and "parts" or "modules".
- spatially relative terms can be used to easily describe a correlation between one element and other elements.
- spatially relative terms should be understood as including, in addition to the directions shown in the drawings, the different orientations of components during use or operation. For example, when an element shown in the figures is inverted, an element described as "below" or "beneath" another element may be placed "above" that element.
- the exemplary term "below" can therefore include both downward and upward directions.
- components can also be oriented in other directions, so spatially relative terms are to be interpreted according to orientation.
- image may refer to multi-dimensional data composed of discrete image elements (e.g., pixels in a two-dimensional image and voxels in a 3D image).
- an image may include a medical image of an object obtained by, for example, a CT imaging apparatus.
- an " object" may be a person or an animal, or part or all of a person or an animal.
- the subject may comprise at least one of the following: liver, heart, uterus, brain, breast, organs such as the abdomen, and blood vessels.
- the term "user” may be a doctor, a nurse, a clinical pathologist, a medical imaging specialist, or the like, and may be a technician repairing a medical device.
- " medical image data " is a medical image captured by a medical image capturing apparatus, and includes all medical images capable of realizing a body of a subject as a three-dimensional model.
- medical image data may include a computed tomography (CT) image, a magnetic resonance imaging (MRI) image, a positron emission tomography (PET) image, and the like.
- " refers to a model generated based on medical image data in accordance with an actual patient's body.
- the " virtual body model " may be generated by modeling the medical image data in three dimensions as it is, or may be corrected after modeling as in actual surgery.
- " As used herein, the term " detailed operation " means a minimum unit of a surgical operation divided according to a specific criterion.
- the terms " computer " and " device " encompass all of the various devices that can perform computational processing to provide results to a user.
- the computer may be a smart phone, a tablet PC, a cellular phone, a personal communication service phone (PCS phone), a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a palm personal computer (palm PC), a personal digital assistant (PDA), and the like.
- when a head mounted display (HMD) device includes a computing function, the HMD device can be a computer.
- the computer may correspond to a server that receives a request from a client and performs information processing.
- FIG. 1 is a diagram illustrating a robot surgery system in accordance with the disclosed embodiment.
- referring to FIG. 1, there is shown a simplified schematic representation of a system capable of performing robotic surgery in accordance with the disclosed embodiments.
- the robotic surgery system includes a medical imaging apparatus 10, a server 20, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34.
- the medical imaging equipment 10 may be omitted from the robotic surgery system according to the disclosed embodiment.
- the surgical robot 34 includes a photographing device 36 and a surgical tool 38.
- robotic surgery is performed by the user controlling the surgical robot 34 using the control unit 30.
- robot surgery may be performed automatically by the control unit 30 without user control.
- the server 20 is a computing device including at least one processor and a communication unit.
- the control unit 30 includes a computing device including at least one processor and a communication unit. In one embodiment, the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
- the photographing apparatus 36 includes at least one image sensor. That is, the photographing device 36 includes at least one camera device, and is used for photographing a body part of the object, that is, a surgical part. In one embodiment, the imaging device 36 includes at least one camera coupled with a surgical arm of the surgical robot 34.
- the image photographed by the photographing device 36 is displayed on the display 32.
- the surgical robot 34 includes one or more surgical tools 38 that can perform cutting, clipping, anchoring, grabbing, etc., of the surgical site.
- the surgical tool 38 is used in combination with the surgical arm of the surgical robot 34.
- the control unit 30 receives information necessary for surgery from the server 20, or generates information necessary for surgery and provides the information to the user. For example, the control unit 30 displays on the display 32 information necessary for surgery, which is generated or received.
- the user operates the control unit 30 while viewing the display 32 to perform the robot surgery by controlling the movement of the surgical robot 34.
- the server 20 generates information necessary for robot surgery using the medical image data of the object (patient) photographed beforehand from the medical imaging apparatus 10, and provides the generated information to the control unit 30.
- the control unit 30 provides the information received from the server 20 to the user by displaying the information on the display 32 or controls the surgical robot 34 using the information received from the server 20.
- the means that can be used in the medical imaging equipment 10 is not limited, and various other medical imaging acquiring means such as CT, X-Ray, PET, MRI and the like may be used.
- the surgical image obtained in the photographing device 36 is transmitted to the control section 30.
- control unit 30 may segment the surgical image obtained during the operation in real time.
- control unit 30 transmits a surgical image to the server 20 during or after surgery.
- the server 20 can divide and analyze the surgical image.
- the " computer " performs the surgical image segmentation method according to the embodiment disclosed herein.
- the " computer " may mean both the server 20 and the control unit 30.
- the surgical image may be segmented by various criteria.
- a surgical image may be segmented based on the type of object included in the image.
- the division method based on the kind of object requires a step in which the computer recognizes each object.
- the objects recognized in the surgical image include the human body, objects introduced from the outside, and objects generated internally.
- the human body includes body parts imaged by medical imaging (e.g., CT) performed before surgery and body parts that are not imaged.
- body parts photographed by medical imaging include organs, blood vessels, bones, tendons, etc., and such body parts can be recognized based on a 3D modeling image generated from the medical image.
- the position, size, and shape of each body part are recognized in advance by a 3D analysis method based on a medical image.
- the computer defines an algorithm that can grasp, in real time, the position of each body part corresponding to the surgical image and, based on this information, can determine the position, size, and shape of each body part included in the surgical image.
- body parts that are not captured by medical imaging include the omentum and the like; since these do not appear in the medical image, they need to be recognized in real time during the operation.
- the computer can determine the location and size of the omentum using image recognition methods, and predict the location of the vessel in the presence of blood vessels within the omentum.
- objects introduced from the outside include, for example, surgical tools, gauze, clips, and the like. Since these have predetermined morphological characteristics, the computer can recognize them in real time through image analysis during surgery.
- internally generated objects include, for example, bleeding from a body part. The computer can likewise recognize these in real time through image analysis during surgery.
- the movements of the organs and omentum included in the body parts, and the generation of internal objects, are generally caused by the movement of objects introduced from the outside.
- a surgical image can be segmented based on the motion of each object.
- the surgical image may be segmented based on the motion, i.e., action, of the externally introduced object.
- the computer judges the type of each object recognized in the surgical image.
- the computer determines the motion of each object and recognizes the action corresponding to that motion, that is, a predefined operation or a predefined series of operations.
- the computer recognizes the type of each action, and also recognizes the cause of each action.
- the computer can divide the surgical image based on the recognized action, and it can recognize from each detailed operation to the type of the whole operation through the stepwise division.
- the computer determines the type of predefined operation corresponding to the surgical image from the judgment of the action.
- from the type of surgery, information about the entire surgical procedure can be obtained. If multiple surgical processes exist for the same kind of surgery, one surgical process may be selected based on the doctor's choice, or based on the actions recognized up to a certain point in time.
- the computer can recognize and predict the surgical stage based on the acquired surgical procedure. For example, if a particular step in a series of surgical procedures is recognized, the steps following it can be predicted, or candidates for possible next steps can be narrowed down. This can greatly reduce the error rate of surgical image recognition caused by occlusion and the like. Further, when the surgical image deviates from the predictable surgical stage by more than a predetermined error range, it may be recognized that a surgical error situation has occurred.
- the computer can make a judgment on each action based on the recognition of each action. For example, a computer can recognize the necessity and effectiveness of each action.
- the computer can make a judgment as to whether each action was necessary or unnecessary.
- the computer can determine whether each action was performed efficiently, even if each action was required. This is used to provide an operative report, eliminate unnecessary operations in the surgical procedure, and streamline inefficient operations.
- the surgical image is thus largely divided into components including body parts (organs and the omentum), objects introduced from the outside, objects generated internally, actions, the type of surgery, and the necessity and efficiency of each action. That is, instead of recognizing the surgical image as a whole, the computer divides it into component units that cover the elements of the surgical image as fully as possible while minimizing mutual overlap; by recognizing the image based on these divided component units, the surgical image can be recognized more specifically and more easily.
- FIG. 2 is a flowchart illustrating a method of segmenting a surgical image according to an embodiment.
- step S110 the computer acquires a surgical image.
- a surgical image is acquired from the imaging device 36 shown in FIG.
- the surgical image includes one or more frames.
- the surgical image comprises a portion of a body part of a subject, i.e., a surgical site.
- the surgical image comprises one or more objects.
- the one or more objects included in the surgical image include one or more surgical tools 38.
- one or more objects included in the surgical image may include tools and consumables used in surgery, such as gauze or clip.
- step S120 the computer recognizes one or more objects included in each of one or more frames of the surgical image obtained in step S110.
- the computer obtains information about the positions of the imaging device 36 and the one or more surgical tools 38 from the surgical robot 34 and, based on the acquired information, can determine and recognize the position of each. In addition, the computer can determine the kind of each object based on the information obtained from the surgical robot 34.
- the computer may perform image analysis of the surgical image to recognize one or more objects.
- the computer may also perform image analysis of the surgical image to determine the type of one or more recognized objects.
- the computer can perform learning based on a surgical image labeled with the location and type of one or more objects.
- the types of learning methods are not limited, and various machine learning methods such as supervised learning, unsupervised learning, and reinforcement learning can be used.
- the computer can perform image analysis based on the learning results and determine the location and type of one or more objects.
- the computer may use the image analysis results for the surgical image and the information obtained from the surgical robot 34 to determine the location and type of one or more objects.
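- As an illustration of this fusion of image analysis results with robot-reported information, the following Python sketch shows one way the two sources could be combined per frame. It is a minimal sketch only; names such as `detector` and `robot_kinematics` are hypothetical, since the disclosure does not specify an implementation.

```python
# Minimal sketch only, assuming hypothetical `detector` and `robot_kinematics`
# interfaces; the disclosure does not specify an implementation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    kind: str                        # e.g. "grasper", "clip", "gauze"
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) in frame coordinates
    confidence: float

def _inside(point, bbox) -> bool:
    x, y, w, h = bbox
    return x <= point[0] <= x + w and y <= point[1] <= y + h

def recognize_objects(frame, robot_kinematics, detector) -> List[DetectedObject]:
    """Fuse image-based detections with robot-reported tool positions."""
    detections = detector(frame)   # learned detector: frame -> [DetectedObject]
    for det in detections:
        # If the robot reports a tool tip inside this detection, trust its type.
        for tool in robot_kinematics.tools:
            if _inside(tool.projected_tip, det.bbox):
                det.kind = tool.name
                det.confidence = max(det.confidence, 0.99)
    return detections
```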
- step S130 the computer determines the position and movement of each of the one or more objects recognized in step S120. Further, the computer determines the position and movement of the photographing device 36 for photographing the surgical image.
- " determining position and motion " is understood as a concept including recognizing a state in which there is no change in position and motion. Accordingly, even when the computer judges or recognizes only one of the position and the motion of the photographing apparatus or the object, it can be understood that the other one (i.e., the motion or the position) is also determined to be inferable.
- the computer can determine which portion of the object is being photographed by determining the position and motion of the imaging device 36.
- the computer may generate a virtual body model that includes the surgical site of the object based on the surgical site image of the object photographed from the medical imaging equipment 10.
- the virtual body model can generate a 3D modeling image including the surgical site of the object.
- the computer can determine the positional relationship between the camera and the virtual body model by matching the virtual body model with the actual object and reflecting the position of the camera to the virtual body model information.
- the computer can determine the position and movement of the camera and, based on the virtual body model, determine which part of the object's body the camera is capturing.
- the computer can determine the position and movement of one or more objects (e.g., surgical tool 38) based on information obtained from the surgical robot 34.
- the computer may determine the positional relationship between the one or more objects and the virtual body model by reflecting the position and motion of one or more objects on the virtual body model information.
- the computer performs image analysis of the surgical image taken by the imaging device 36, and can thereby determine information on the body part included in the surgical image, that is, the position and shape of the body part captured by the imaging device 36.
- the computer can use image analysis methods to determine the location and movement of one or more objects.
- the location of the imaging device 36 and one or more objects may be represented by two-dimensional or three-dimensional coordinates.
- the motion of the photographing device 36 and one or more objects can be expressed by the direction, distance, speed, and the like of the motion on two-dimensional or three-dimensional coordinates.
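- For illustration, the sketch below computes such motion descriptors (direction, distance, speed) from tracked per-frame coordinates. It is an assumed example, not part of the disclosure.

```python
# Illustrative only: motion of a tracked object expressed as direction,
# total distance, and mean speed, as in the description above.
import numpy as np

def motion_descriptor(positions, fps):
    """`positions` is an (N, D) sequence of 2D or 3D per-frame coordinates."""
    p = np.asarray(positions, dtype=float)
    deltas = np.diff(p, axis=0)                      # frame-to-frame displacement
    distance = np.linalg.norm(deltas, axis=1).sum()  # total path length
    net = p[-1] - p[0]                               # net displacement vector
    direction = net / (np.linalg.norm(net) + 1e-9)
    speed = distance / (len(p) - 1) * fps            # mean speed, units per second
    return direction, distance, speed

# Example: a tool tip tracked over four frames of 30 fps video.
direction, distance, speed = motion_descriptor([(0, 0), (1, 0), (2, 1), (3, 1)], fps=30)
```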
- in step S140, based on the determination result of step S130, the computer segments the surgical image acquired from the imaging device 36 into one or more first classification unit groups in which the operation of a predetermined first classification unit can be recognized.
- each group may comprise one or more frames.
- the first classification unit may be a concept that is spatially divided, or may be a concept that is temporally divided. The concept of each division will be described in detail below with reference to the drawings.
- the surgical operation may be stepwise divided to have a hierarchy of steps.
- the surgical operation can be divided into large, middle, and small classes and the like; for example, it can be divided into the type of surgery, segments according to each surgery type, subsegments, and components, but is not limited thereto.
- the computer divides the surgical image into the smallest classification units and, based on these, successively divides it into larger classification units.
- the purpose of the computer may be segmentation of the surgical image itself, or it may be recognizing the surgical operation based on each of the segmented portions.
- the computer divides and recognizes the surgical operation, and may store only the recognition result, or may divide and store the segments of the surgical image corresponding to each recognized surgical operation.
- the computer may store the result of recognizing the surgical operation, and may also store information indicating an interval of the image corresponding to each divided surgical operation.
- the operations of the predetermined first classification unit are predefined surgical operations, each having a preset name and a predetermined code assigned to identify it.
- the computer obtains information about the type, location, and movement of the imaging device 36 and one or more objects in step S130.
- the computer recognizes the operation of the predetermined first classification unit included in the surgical image based on the obtained information.
- the operations of the first classification unit may include cutting, catching, moving, and the like.
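- A hypothetical encoding of this name-and-code scheme might look as follows; the actual operation names and code values are not fixed by the disclosure.

```python
# Hypothetical code table: the disclosure assigns an identifying code to each
# predetermined operation but does not fix the names or code values.
FIRST_UNIT_CODES = {
    "catching": "C01",   # i.e. grasping
    "cutting":  "C02",
    "moving":   "C03",
}

def encode_operations(recognized_ops):
    """Map recognized first classification unit operation names to codes."""
    return [FIRST_UNIT_CODES[op] for op in recognized_ops]

assert encode_operations(["catching", "cutting"]) == ["C01", "C02"]
```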
- the computer may learn the operation of each classification unit using a surgical image labeled with the name or code of each operation.
- the method used by the computer for learning is not limited, and can be learned by the above-described machine learning method.
- the computer may be trained to output the surgical operation corresponding to each surgical image, with the surgical image itself as the input.
- alternatively, the computer may be trained with the positions and motions of the imaging device 36 and the one or more objects determined in step S130 as inputs and the corresponding surgical operation as output, but the training scheme is not limited thereto.
- FIG. 3 is a flowchart illustrating a method of spatially segmenting a surgical image according to an embodiment.
- in one embodiment, the steps shown in FIG. 3 are included in step S140 of FIG. 2.
- step S210 the computer determines the position and movement of each of the photographing apparatus and one or more objects included in the surgical image.
- the method described in connection with FIG. 2 may be used as a method by which the computer determines the position and movement of each of the imaging device and one or more objects.
- in step S220, based on the result of determining the position and motion of each of the photographing apparatus 36 and the one or more objects, the computer divides the surgical image into one or more second classification unit groups in which the position and motion of each of the photographing apparatus 36 and the one or more objects can be recognized as an operation of a predetermined second classification unit.
- the second classification unit is a classification unit that defines the operation of each of the photographing apparatus 36 and one or more objects included in the surgical image.
- the operations of the second classification unit may include movement, rotation, and redirection of the camera, and gripping, cutting, moving, and clipping operations of the surgical tools.
- the computer can determine the position of one or more objects in the acquired surgical image, and determine the motion of each of the one or more objects by separately analyzing only the portion containing the recognized one or more objects.
- the computer recognizes the joint positions of the surgical arm corresponding to each surgical tool, the coordinates of the end of each surgical tool, and the like, and tracks these to follow the motion of each surgical tool.
- the computer can divide only a portion of the recognized region of one or more objects, and process the divided region alone, without separately processing the entire surgical image, thereby individually determining the operation of each of the one or more objects.
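- The following sketch illustrates this per-region processing, assuming frames are array-like images and that a trained `classify_motion` clip classifier exists; the disclosure describes only the idea of processing the divided region alone.

```python
# Sketch of per-object region processing, assuming frames are array-like
# (H, W, C) images and `classify_motion` is a trained clip classifier.
def classify_object_motions(frames, tracked_boxes, classify_motion):
    """Crop each tracked object's region from every frame and classify the
    motion from the cropped clip alone, instead of the whole image."""
    results = {}
    for obj_id, boxes in tracked_boxes.items():   # {obj_id: [(x, y, w, h), ...]}
        clip = []
        for frame, (x, y, w, h) in zip(frames, boxes):
            clip.append(frame[y:y + h, x:x + w])  # process only this region
        results[obj_id] = classify_motion(clip)   # e.g. "gripping", "cutting"
    return results
```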
- referring to FIG. 4, an example of a surgical image is shown.
- each of the one or more objects may include one or more surgical tools 310 and 320 and one or more clips 330.
- the computer divides out the regions 302 and 304 at the locations where the surgical tools 310 and 320 are recognized in each frame of the surgical image 300, and may process the divided regions to determine the operation of each surgical tool 310 and 320.
- as the computer recognizes the movement of the surgical tools 310 and 320, the regions 302 and 304 in which they are recognized also move accordingly; the computer divides out the regions 302 and 304 at the moved positions and recognizes the operation of each surgical tool by processing the divided images.
- the surgical image 300 may include one or more body parts.
- the surgical image 300 may include a blood vessel 340 and an organ (350, e.g., stomach).
- the computer can recognize the motion of one or more objects based on the positional relationship between the body part and one or more objects included in the surgical image 300 and the motion of the one or more objects.
- the computer may recognize that the surgical tool 310 moves toward the blood vessel 340 and performs an operation of cutting the region of the blood vessel clipped by the clip 330.
- the computer can determine the positional relationship between the surgical tool 310 and the blood vessel 340 in the surgical image 300 through image processing.
- the computer recognizes, through image processing of the surgical image 300, the operation of the surgical tool 310 cutting the blood vessel 340, and from the result of the cutting can judge that the operation was actually performed.
- the computer may use the virtual body model to determine the operation of each surgical tool 310 and 320.
- a computer can perform a surgical simulation using a virtual body model and a virtual object.
- the surgical simulation can be performed using the virtual body model in the same manner as the actual surgical situation or the surgical image.
- in the simulation, the one or more virtual objects move according to, or follow, the motion of the one or more objects included in the actual surgical image.
- the computer recognizes or determines the movement of one or more objects included in the surgical image, receives the recognized or determined information, and moves the virtual object based on the information.
- the computer may determine the operation of one or more objects included in the surgical image 300, receive information specifying that operation (e.g., a name or code corresponding to an operation of the second classification unit), and determine the motion of the virtual objects based on that information.
- the computer can determine the correlation of the virtual body model with motion or motion of one or more virtual objects and determine the operation of one or more objects included in the surgical image 300 from the correlation.
- for example, the computer determines through the virtual body model simulation that the surgical tool 310 has performed a cutting motion at the position where the blood vessel 340 is clipped, and recognizes that the surgical tool 310 has performed an operation of cutting the clipped position of the blood vessel 340.
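- One way such a simulation-based judgment could work is sketched below: the recognized tool-tip path is replayed against the modeled anatomy, and proximity is used to infer what the motion acted on. The point representation of the vessel and the contact threshold are assumptions for illustration only.

```python
# Assumption-laden sketch: the modeled vessel is represented as 3D points and
# a distance threshold stands in for contact detection in the simulation.
import numpy as np

def infer_action_target(tool_tip_path, vessel_points, contact_mm=2.0):
    """Replay the recognized tool motion in the virtual body model and infer
    from proximity what anatomy the motion acted on."""
    vessel = np.asarray(vessel_points, dtype=float)
    for tip in np.asarray(tool_tip_path, dtype=float):
        if np.min(np.linalg.norm(vessel - tip, axis=1)) < contact_mm:
            return "acts_on_vessel"   # e.g. cutting at the clipped position
    return "no_anatomy_contact"
```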
- in step S230, based on the second classification unit operations recognized from the second classification unit groups divided in step S220, the computer divides the surgical image into one or more third classification unit groups in which an operation of a predetermined third classification unit can be recognized.
- the third classification unit group includes one or more second classification unit groups.
- the third classification unit is a classification unit for recognizing the operation as a whole, including the operation of the imaging device and the operations of the one or more objects included in the surgical image.
- the third classification unit group may be a classification unit for recognizing what the combined operations of one or more objects included in the same frame mean, and for dividing the surgical image accordingly.
- the operation of each surgical tool can be recognized, but it may be difficult to know the cause of the operation or the target of the operation.
- the third classification unit determines the surgical operation corresponding to the surgical image using the second classification unit operations recognized from the one or more objects and the position where each operation occurs.
- for example, the surgical tool 320 may perform an operation of holding and lifting a specific part, and the surgical tool 310 may perform an operation of cutting it. Judged individually, the cause or purpose of each operation is difficult to know, but considered together they allow the cause and purpose of the overall surgical operation to be judged.
- the computer may be trained to recognize the operation of the third classification unit from the operation of one or more second classification units.
- the learning data may be individually labeled for each classification unit, or may be labeled for only a part of the classification units of the surgical image.
- the computer can analyze the surgical image and generate necessary learning data.
- FIG. 5 is a flowchart illustrating a method of temporally dividing an operation image according to an embodiment.
- step S410 the computer recognizes the operation of the first classification unit.
- the first classification unit may refer to a unit of a method for temporally segmenting a surgical image.
- the first classification unit may mean a minimum unit of a method of temporally dividing a surgical image, but is not limited thereto.
- the operations of the first classification unit may include surgical operations recognized, by the spatial segmentation method described in connection with FIG. 3, based on the motion of the one or more objects contained in each frame or in one or more frames.
- although the first classification unit is a unit of temporal division, the temporally divided surgical image can be spatially re-divided and analyzed through the second and third classification units.
- the operation of the first classification unit may be described as the set of the second classification unit operations, described in connection with FIG. 3, of the one or more objects included in the surgical image.
- in recognizing the operation of the first classification unit, the computer may further determine not only the position and motion of each of the one or more objects but also the cause of each position and motion.
- the computer may use each position and motion together with its cause to determine the surgical operation.
- step S420 the computer divides the surgical image into one or more fourth classification unit groups capable of recognizing the operation of the predetermined fourth classification unit based on the one or more first classification unit operations recognized in step S410.
- the fourth classification unit group includes one or more first classification unit groups.
- the fourth classification unit may refer to a unit of a method of temporally segmenting the surgical image.
- the fourth classification unit may refer to an upper unit successive to the first classification unit in the method of stepwise dividing the surgical image based on the temporal concept, but is not limited thereto.
- the computer may recognize the operation of the fourth classification unit, which includes the operation of one or more first classification units.
- the computer is trained to recognize the operation of the fourth classification unit based on the operation of one or more first classification units.
- the learning method that can be used by the computer is not limited, and the above-described machine learning method can be utilized.
- the learning data may be labeled for each classification unit, or may be labeled for only a part of the classification units of the surgical image.
- the computer can analyze the surgical image and generate necessary learning data.
- the computer can perform learning so as to recognize the operation of the fourth classification unit based on information indicating that operations of the first classification unit are performed in a predetermined order.
- the computer may divide the surgical image into fourth classification units based on the learning result and recognize the operation of the fourth classification unit corresponding to the divided portions.
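- A minimal sketch of this grouping step is shown below; the trained recognizer is stood in for by a hypothetical `recognize_fourth_unit` callable, since the disclosure leaves the model unspecified.

```python
# Minimal grouping sketch; `recognize_fourth_unit` stands in for the trained
# recognizer and returns a label once a sub-sequence forms a complete
# fourth classification unit operation, otherwise None.
def group_into_fourth_units(first_unit_ops, recognize_fourth_unit):
    """Emit (label, start, end) groups over the first-unit operation sequence."""
    groups, start = [], 0
    for end in range(1, len(first_unit_ops) + 1):
        label = recognize_fourth_unit(first_unit_ops[start:end])
        if label is not None:          # a complete higher-level operation found
            groups.append((label, start, end))
            start = end
    return groups

# e.g. recognize_fourth_unit might map ["catching", "cutting"] -> "vessel cutting"
```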
- referring to FIG. 6, an example in which a surgical image is segmented and recognized step by step is shown.
- the computer can sequentially recognize the operations of the first classification unit, the fourth classification unit, the fifth classification unit, and the sixth classification unit included in the surgical image according to steps S510, S520, S530, and S540 .
- in this example, the first classification unit is a component unit,
- the fourth classification unit is a subsegment unit,
- the fifth classification unit is a segment unit, and
- the sixth classification unit is an operation-type unit (a hypothetical encoding of this hierarchy is sketched below).
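```python
# Hypothetical encoding of the hierarchy of FIG. 6; only the level names and
# their order come from the description, the Python types are assumptions.
from enum import IntEnum

class ClassificationLevel(IntEnum):
    COMPONENT = 1        # first classification unit
    SUBSEGMENT = 4       # fourth classification unit
    SEGMENT = 5          # fifth classification unit
    OPERATION_TYPE = 6   # sixth classification unit

def next_level(level: ClassificationLevel) -> ClassificationLevel:
    """Return the next-higher level, saturating at OPERATION_TYPE."""
    order = list(ClassificationLevel)
    return order[min(order.index(level) + 1, len(order) - 1)]

assert next_level(ClassificationLevel.COMPONENT) is ClassificationLevel.SUBSEGMENT
```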
- referring to FIG. 7, an example of stepwise recognition of a surgical operation is shown.
- FIG. 7 schematically shows a method in which the surgical operation is divided into and recognized as operations of a first classification unit 610, which are then stepwise grouped and recognized as operations of a fourth classification unit 620, a fifth classification unit 630, and a sixth classification unit 640.
- each code shown in FIG. 7 may refer to a code that can identify actions included in each classification unit.
- the operation of the first classification unit may include catching, cutting, moving, etc.
- the operations of the fourth classification unit may include vessel resection, fat removal, organ resection, organ connection, suturing, and the like.
- the operation of the sixth classification unit may include gastric cancer surgery.
- for example, gastric cancer surgery can largely include laparotomy, gastrectomy, organ connection, and suturing. Each of these operations can be embodied more concretely as, for example, resection of blood vessels, resection of part of an organ, and connection of parts of other organs; each of those can in turn be embodied by simpler operations such as cutting blood vessels and removing obstacles such as fat; and these can be further specified by still simpler operations such as moving, grasping, and cutting.
- this hierarchy can also be used in reverse: the operation is divided into minimum detailed units, and the computer can be trained to recognize the higher-level operations step by step from the divided result.
- the surgical site is different for each patient, each disease differs in shape, and the operation patterns are different depending on the type of operation.
- with this learning model, it is possible to provide a surgical motion recognition model that can be applied regardless of the patient's physical condition or the type of surgery and, if necessary, to provide a surgical motion recognition model tailored to the patient's condition or the type of surgery.
- the computer can recognize an event that occurs in a surgical image.
- the event includes a surgical error situation, such as bleeding.
- the computer can recognize this through image processing of the surgical image.
- the computer may divide the surgical image into one or more event groups including recognized events.
- the divided event groups may be managed separately, included in a classification unit according to the disclosed embodiment, or may be utilized as an independent classification unit for analysis of a surgical operation.
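- As a deliberately simplified illustration of image-based event recognition, the sketch below flags candidate bleeding frames by the ratio of strongly red pixels; the disclosure states only that such events are recognized through image processing, not how, so the thresholds are arbitrary assumptions.

```python
# Deliberately simplified illustration: flag candidate bleeding frames by the
# ratio of strongly red pixels; threshold values are arbitrary assumptions.
import numpy as np

def detect_bleeding_frames(frames_rgb, red_ratio_threshold=0.35):
    """Return indices of frames that may belong to a bleeding event group."""
    event_frames = []
    for i, frame in enumerate(frames_rgb):
        f = np.asarray(frame, dtype=float)
        r, g, b = f[..., 0], f[..., 1], f[..., 2]
        red_ratio = np.mean((r > 120) & (r > 1.5 * g) & (r > 1.5 * b))
        if red_ratio > red_ratio_threshold:
            event_frames.append(i)
    return event_frames
```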
- the computer can determine the cause of the event based on the recognized event and the surgical operation before and after the event was recognized.
- the computer may generate learning data for analyzing the cause of the event by storing the operations of the predetermined classification unit before and after the occurrence of the event together with information on the event.
- the computer can perform learning using the generated learning data, and learn the correlation between the operation and the events of each classification unit.
- the computer can determine the cause of the event occurrence and provide feedback to the user.
- the computer may perform learning for optimization of surgical operations based on operation of a given classification unit.
- the computer can learn an optimized sequence and method for performing the operation of each classification unit according to the physical condition of the patient and the type of surgery.
- the computer may perform learning for optimization of the surgical operation based on the operation of the first classification unit.
- the computer may obtain one or more reference surgery information.
- the computer can perform learning based on the order of the operation operations included in the one or more reference operation information and determine the order of the optimized operation operations for each operation according to the learning results.
- since the operation of the first classification unit is a minimum-unit operation commonly applied in any surgery, when learning is performed based on the first classification unit, an optimized order of surgical operations can be obtained regardless of the type of surgery and the patient's physical condition. Likewise, an optimized learning model for a specific type of surgery and physical condition can be obtained through fine-tuning of the learned model.
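- The following toy sketch illustrates learning an operation order from reference surgeries using first-order transition counts; a real system would presumably use a sequence model, so this is illustrative only and the operation names are assumed.

```python
# Toy sketch: learn first-order transitions between first-unit operations
# across reference surgeries; a real system would use a sequence model.
from collections import Counter

def learn_transitions(reference_sequences):
    """Count how often operation `b` follows operation `a`."""
    counts = Counter()
    for seq in reference_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    return counts

def suggest_next(counts, current_op):
    """Greedily suggest the most common next operation, or None."""
    candidates = {b: n for (a, b), n in counts.items() if a == current_op}
    return max(candidates, key=candidates.get) if candidates else None

counts = learn_transitions([["catching", "cutting", "moving"],
                            ["catching", "cutting", "clipping"]])
assert suggest_next(counts, "catching") == "cutting"
```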
- FIG. 8 is a block diagram of an apparatus 700 according to an embodiment.
- the processor 710 may include one or more cores (not shown) and a connection path (e.g., a bus) for transmitting and receiving signals to and/or from a graphics processing unit (not shown) and/or other components.
- the processor 710 performs the surgical image segmentation method described with reference to FIGS. 1 to 7 by executing one or more instructions stored in the memory 720.
- by executing one or more instructions stored in the memory, the processor 710 acquires a surgical image, recognizes one or more objects included in each of one or more frames of the surgical image, determines the position and motion of the imaging device capturing the surgical image and of each recognized object, and, based on the determination result, divides the surgical image into one or more first classification unit groups in which the operation of a predetermined first classification unit can be recognized.
- the processor 710 may further include a random access memory (RAM, not shown) and a read-only memory (ROM, not shown) that temporarily and/or permanently store signals (or data).
- the processor 710 may be implemented as a system-on-chip (SoC) including at least one of a graphics processing unit, a RAM, and a ROM.
- the memory 720 may store programs (one or more instructions) for processing and control of the processor 710. Programs stored in the memory 720 can be divided into a plurality of modules according to functions.
- the surgical image segmentation method may be implemented as a program (or an application) to be executed in combination with a hardware computer and stored in a medium.
- the above-described program may include code coded in a computer language such as C, C++, JAVA, or machine language that the processor (CPU) of the computer can read through a device interface of the computer, in order for the computer to read the program and execute the methods implemented as the program.
- the code may include functional code related to functions defining what is necessary for executing the above methods, and may include control code related to the execution procedure necessary for the processor of the computer to execute those functions in a predetermined order.
- the code may further include memory-reference code indicating at which location (address) of the computer's internal or external memory the additional information or media needed for the processor to execute the functions should be referenced.
- further, when the processor of the computer needs to communicate with a remote computer or server to execute the functions, the code may include communication-related code for how to communicate using the communication module of the computer and for what information or media should be transmitted or received during communication.
- the storage medium means not a medium that stores data for a short moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and can be read by a device.
- examples of the medium to be stored include ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage, and the like, but are not limited thereto.
- the program may be stored in various recording media on various servers to which the computer can access, or on various recording media on the user's computer.
- the medium may be distributed to a network-connected computer system so that computer-readable codes may be stored in a distributed manner.
- the steps of a method or algorithm described in connection with the embodiments of the present invention may be embodied directly in hardware, in software modules executed in hardware, or in a combination of both.
- the software module may reside in a random access memory (RAM), a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium known in the art to which the invention pertains.
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Surgery (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Biomedical Technology (AREA)
- Veterinary Medicine (AREA)
- Heart & Thoracic Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Molecular Biology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Pathology (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Primary Health Care (AREA)
- Robotics (AREA)
- Epidemiology (AREA)
- Radiology & Medical Imaging (AREA)
- Multimedia (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Image Analysis (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- General Business, Economics & Management (AREA)
- Business, Economics & Management (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
Claims (11)
- A surgical image segmentation method comprising: obtaining, by a computer, a surgical image; recognizing one or more objects included in each of one or more frames of the surgical image; determining the position and motion of an imaging device capturing the surgical image and of each of the recognized one or more objects; and, based on the determination result, segmenting the surgical image into one or more first classification unit groups in which an operation of a predetermined first classification unit can be recognized.
- The method of claim 1, wherein the segmenting into the first classification unit groups comprises segmenting the surgical image, based on the result of determining the position and motion of each of the imaging device and the one or more objects, into one or more second classification unit groups in which the position and motion of each of the imaging device and the one or more objects can be recognized as an operation of a predetermined second classification unit.
- The method of claim 2, wherein the segmenting into the second classification unit groups comprises segmenting the surgical image into one or more third classification unit groups in which an operation of a predetermined third classification unit can be recognized, based on the one or more second classification unit operations recognized for each of the imaging device and the one or more objects, wherein each third classification unit group includes one or more of the second classification unit groups.
- The method of claim 1, further comprising recognizing the operation of the first classification unit from the surgical image, wherein the recognizing of the operation of the first classification unit comprises recognizing the operation of the first classification unit based on a positional relationship between a body part included in the surgical image and the one or more objects, and on the motion of the one or more objects.
- The method of claim 4, wherein the recognizing of the operation of the first classification unit comprises: obtaining modeling information corresponding to the body part; performing a surgical simulation including the modeling information and one or more virtual objects, wherein in the simulation the one or more virtual objects move according to the motion of the one or more objects; and recognizing the operation of the first classification unit using the simulation result.
- The method of claim 1, wherein the segmenting into the first classification unit groups comprises segmenting the surgical image into one or more fourth classification unit groups in which an operation of a predetermined fourth classification unit can be recognized, based on the one or more first classification unit operations recognized for each of the one or more first classification unit groups, wherein each fourth classification unit group includes one or more of the first classification unit groups.
- The method of claim 6, further comprising: recognizing the one or more first classification unit operations corresponding to each of the classified one or more first classification unit groups; and recognizing the one or more fourth classification unit operations corresponding to each of the classified one or more fourth classification unit groups based on the one or more first classification unit operations.
- The method of claim 1, wherein the first classification unit includes information on one or more predetermined operations, and each of the one or more predetermined operations is assigned a code capable of identifying it.
- The method of claim 1, further comprising: recognizing an event occurring in the surgical image; and segmenting the surgical image into one or more event groups including the recognized event.
- An apparatus for segmenting a surgical image, comprising: a memory storing one or more instructions; and a processor executing the one or more instructions stored in the memory, wherein by executing the one or more instructions the processor obtains a surgical image, recognizes one or more objects included in each of one or more frames of the surgical image, determines the position and motion of an imaging device capturing the surgical image and of each of the recognized one or more objects, and, based on the determination result, segments the surgical image into one or more first classification unit groups in which an operation of a predetermined first classification unit can be recognized.
- A computer program stored in a computer-readable recording medium, which, in combination with a computer as hardware, performs the method of claim 1.
Applications Claiming Priority (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2017-0182898 | 2017-12-28 | ||
KR10-2017-0182900 | 2017-12-28 | ||
KR20170182899 | 2017-12-28 | ||
KR20170182898 | 2017-12-28 | ||
KR20170182900 | 2017-12-28 | ||
KR10-2017-0182899 | 2017-12-28 | ||
KR10-2018-0019389 | 2018-02-19 | ||
KR20180019389 | 2018-02-19 | ||
KR10-2018-0022962 | 2018-02-26 | ||
- KR1020180022962A KR101926123B1 (ko) | 2017-12-28 | 2018-02-26 | Method and apparatus for segmenting surgical images
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019132614A1 true WO2019132614A1 (ko) | 2019-07-04 |
Family
ID=64671393
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- PCT/KR2018/016913 WO2019132614A1 (ko) | 2017-12-28 | 2018-12-28 | Method and apparatus for segmenting surgical images
Country Status (2)
Country | Link |
---|---|
KR (2) | KR101926123B1 (ko) |
WO (1) | WO2019132614A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11389132B2 (en) * | 2019-07-16 | 2022-07-19 | Fujifilm Corporation | Radiographic image processing apparatus, radiographic image processing method, and radiographic image processing program |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- KR102180921B1 (ko) * | 2019-10-18 | 2020-11-19 | 주식회사 엠티이지 | Apparatus and method for inserting annotations into AI-based surgical videos
- JP7442300B2 (ja) | 2019-11-21 | 2024-03-04 | 慶應義塾 | Playback control device and playback control program
- KR102222699B1 (ko) * | 2019-12-09 | 2021-03-03 | 김용휘 | Electronic device and method for segmenting video data based on a user's actions
- KR102544629B1 (ko) * | 2019-12-26 | 2023-06-16 | 주식회사 엠티이지 | Apparatus and method for inferring surgical actions based on image analysis of surgical videos
- KR102321157B1 (ko) | 2020-04-10 | 2021-11-04 | (주)휴톰 | Method and system for analyzing the surgical process after surgery
- KR102505016B1 (ko) * | 2020-08-03 | 2023-03-02 | (주)휴톰 | System and method for generating descriptive information of unit actions in a surgical image
- CN113157820B (zh) * | 2021-03-31 | 2023-07-21 | 生态环境部卫星环境应用中心 | Automated segmentation and classification method and device for buffer zones of meandering rivers
- WO2023022258A1 (ko) * | 2021-08-19 | 2023-02-23 | 한국로봇융합연구원 | Image-information-based artificial intelligence surgical guide system for laparoscopic robots
- KR20240048076A (ko) * | 2022-10-05 | 2024-04-15 | 주식회사 엠티이지 | Method and device for providing annotations using voice input
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- KR100611373B1 (ko) * | 2005-09-16 | 2006-08-11 | 주식회사 사이버메드 | Method of calibrating a surgical navigation device
- KR20110057978A (ko) * | 2009-11-25 | 2011-06-01 | 주식회사 프라이머스코즈 | Patient information display device for operating rooms controlled by a doctor's hand gestures, and control method therefor
- KR20120046439A (ko) * | 2010-11-02 | 2012-05-10 | 서울대학교병원 (분사무소) | Surgical simulation method using 3D modeling, and automatic surgical apparatus
- KR101302595B1 (ko) * | 2012-07-03 | 2013-08-30 | 한국과학기술연구원 | System and method for estimating the stage of surgical progress
- KR20160119307A (ko) * | 2015-04-02 | 2016-10-13 | 울산대학교 산학협력단 | Apparatus and method for three-dimensional modeling of internal body shapes, and three-dimensional model of internal body shapes
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2011036371A (ja) | 2009-08-10 | 2011-02-24 | Tohoku Otas Kk | Medical image recording device
2018
- 2018-02-26 KR KR1020180022962A patent/KR101926123B1/ko active IP Right Grant
- 2018-11-30 KR KR1020180152679A patent/KR20190080736A/ko active Application Filing
- 2018-12-28 WO PCT/KR2018/016913 patent/WO2019132614A1/ko active Application Filing
Also Published As
Publication number | Publication date |
---|---|
KR20190080736A (ko) | 2019-07-08 |
KR101926123B1 (ko) | 2018-12-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2019132614A1 (ko) | Method and apparatus for segmenting surgical images | |
- KR102014385B1 (ko) | Method and apparatus for learning surgical images and recognizing surgical actions based on learning | |
- WO2019132169A1 (ko) | Method, apparatus, and program for controlling playback of surgical images | |
- WO2019132168A1 (ko) | Surgical image data learning system | |
WO2019083227A1 (en) | MEDICAL IMAGE PROCESSING METHOD, AND MEDICAL IMAGE PROCESSING APPARATUS IMPLEMENTING THE METHOD | |
- WO2016074169A1 (zh) | Method and device for detecting a target object, and robot | |
- WO2016002986A1 (ko) | Gaze tracking apparatus and method, and recording medium for performing the same | |
- WO2019132165A1 (ko) | Method and program for providing feedback on surgical results | |
- WO2019164275A1 (ko) | Method and apparatus for recognizing the positions of surgical tools and a camera | |
WO2016072586A1 (en) | Medical image processing apparatus and method | |
- WO2019004530A1 (ko) | Method for removing an object to be processed from an image, and apparatus for performing the method | |
- WO2019132244A1 (ko) | Method and program for generating surgical simulation information | |
WO2014208950A1 (en) | Method and apparatus for managing medical data | |
WO2023182727A1 (en) | Image verification method, diagnostic system performing same, and computer-readable recording medium having the method recorded thereon | |
WO2014200230A1 (en) | Method and apparatus for image registration | |
- WO2021206518A1 (ko) | Method and system for analyzing the surgical process after surgery | |
- WO2020080734A1 (ko) | Face recognition method and face recognition apparatus | |
- WO2017090815A1 (ko) | Apparatus and method for measuring joint range of motion | |
- WO2015182979A1 (ko) | Image processing method and image processing apparatus | |
- WO2013042889A1 (ko) | Segmentation method in medical images and apparatus therefor | |
- WO2015167081A1 (ko) | Body part detection method and apparatus | |
- WO2019164276A1 (ko) | Method and apparatus for recognizing surgical actions | |
- WO2020246773A1 (ko) | Apparatus and method for aligning ultrasound images with three-dimensional medical images | |
WO2013168930A1 (en) | Ultrasonic diagnostic apparatus and control method thereof | |
- WO2023008818A1 (ko) | Apparatus and method for registering actual surgical images with 3D-based virtual simulated surgical images on the basis of POI definition and phase recognition | |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18894159; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18894159; Country of ref document: EP; Kind code of ref document: A1
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 03.02.2021)
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18894159; Country of ref document: EP; Kind code of ref document: A1