WO2021206517A1 - Intraoperative vascular navigation method and system - Google Patents

Intraoperative vascular navigation method and system

Info

Publication number
WO2021206517A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
surgical
blood vessel
navigation
blood vessels
Prior art date
Application number
PCT/KR2021/004532
Other languages
French (fr)
Korean (ko)
Inventor
김하진
허성환
박성현
Original Assignee
(주)휴톰 (Hutom Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)휴톰 (Hutom Co., Ltd.)
Publication of WO2021206517A1 publication Critical patent/WO2021206517A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/25 User interfaces for surgical systems
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37 Surgical systems with images on a monitor during operation
    • A61B 2034/101 Computer-aided simulation of surgical operations
    • A61B 2034/105 Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107 Visualisation of planned trajectories or target regions
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2065 Tracking using image or pattern recognition
    • A61B 2034/254 User interfaces for surgical systems being adapted depending on the stage of the surgical procedure
    • A61B 2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B 2090/367 Correlation of different images or relation of image positions in respect to the body creating a 3D dataset from 2D images using position information

Definitions

  • The present invention relates to a method and system for intraoperative vascular navigation. More specifically, it relates to a method and system that recognize the individual stages of an intraoperative image through a learning model and provide a 3D-modeled screen in which the patient's blood vessel image, extracted from medical image data (e.g., a CT image), is superimposed on the recognized stage-specific surgical image.
  • The problem to be solved by the present invention is to recognize the individual phases of an intraoperative image through a surgical stage learning model and to provide a 3D-modeled screen in which the patient's blood vessel image, extracted from medical image data (e.g., a CT image), is superimposed on the recognized stage-specific surgical image.
  • The intraoperative vascular navigation method performed by the system includes: constructing a three-dimensional blood vessel model from two-dimensional medical image data of an object using a blood vessel learning model; recognizing the surgical stage of the object in a captured surgical image using a surgical stage learning model; extracting the major blood vessels corresponding to the recognized surgical stage from the constructed three-dimensional blood vessel model and registering them to a navigation image; and providing the navigation image to a user. The navigation image includes blood vessels branching from the major blood vessels, and the branched blood vessels can be added or removed based on user input.
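A minimal runnable sketch of this four-step flow is shown below. Everything in it — the function names, the stage label, the stage-to-vessel lookup table, and the vessel names — is an illustrative assumption, not part of the disclosure:

```python
# Hypothetical skeleton of the four disclosed steps; the stage->vessel map,
# function names, and data shapes are illustrative assumptions.

def build_vessel_model(ct_slices):
    """Step 1: stand-in for the blood vessel learning model (2D CT -> 3D model)."""
    return {"celiac trunk": "mesh_0", "left gastric artery": "mesh_1",
            "splenic artery": "mesh_2"}

def recognize_stage(frame):
    """Step 2: stand-in for the surgical stage learning model."""
    return "stage_03_left_gastric_ligation"  # would come from a trained classifier

STAGE_TO_MAJOR = {  # assumed lookup: surgical stage -> major vessels
    "stage_03_left_gastric_ligation": ["left gastric artery", "celiac trunk"],
}

def navigate(ct_slices, frame, removed_branches=()):
    model = build_vessel_model(ct_slices)
    stage = recognize_stage(frame)
    majors = {v: model[v] for v in STAGE_TO_MAJOR.get(stage, [])}  # step 3: extract
    # step 4: the navigation image shows the majors plus any branches the user
    # has not removed; removed branches are tracked for re-adding later
    return {"stage": stage, "overlay": majors, "hidden_branches": list(removed_branches)}

print(navigate(ct_slices=None, frame=None, removed_branches=["short gastric branch"]))
```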
  • The navigation image to which the major blood vessels are registered may include an image in which the object is 3D-modeled from the medical image data, or the captured surgical image itself.
  • The surgical stage learning model may be machine-learned by defining, as labels, at least one stage required for the operation on the object using surgical data and by inputting training images for each defined label; the surgical data may include tracking data for the movement of a robot arm or data obtained from surgical image frames.
  • The blood vessel learning model may assign vessel types by sequentially applying vessel bifurcation points after modeling the veins and arteries from the medical image data.
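One plausible reading of "sequentially applying vessel bifurcation points" is a traversal that starts at a root vessel and assigns a type at each branch point it encounters. A toy sketch under that assumption (the vessel tree and the naming rule are invented for illustration):

```python
# Assign vessel labels by walking bifurcation points outward from the root.
from collections import deque

def assign_vessel_types(root_name, children_of):
    """BFS from the root artery; each bifurcation yields depth-indexed labels."""
    labels = {root_name: (root_name, 0)}
    queue = deque([(root_name, 0)])
    while queue:
        node, depth = queue.popleft()
        for i, child in enumerate(children_of.get(node, [])):
            labels[child] = (f"{node}/branch{i}", depth + 1)  # assumed naming rule
            queue.append((child, depth + 1))
    return labels

tree = {"aorta": ["celiac trunk"],
        "celiac trunk": ["left gastric", "splenic", "common hepatic"]}
for vessel, (label, depth) in assign_vessel_types("aorta", tree).items():
    print(f"{vessel:15s} depth={depth} label={label}")
```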
  • Registering the major blood vessels to the navigation image may include collecting position information of the surgical image screen and synchronizing (SYNC) the three-dimensionally modeled major blood vessels and object to the surgical image based on the surgical stage and the position information.
  • Providing the navigation image may include highlighting the extracted major blood vessels within the navigation image.
  • Highlighting the major blood vessels may include displaying, on the navigation image, a list of items for the extracted major blood vessels; when a specific item is selected from the list, the scene of the navigation image moves to the scene of the major blood vessel corresponding to the selected item, and that major blood vessel is highlighted within the moved scene.
  • Highlighting the major blood vessels may use at least one, or a combination of two or more, of: a first highlighting method that zooms in on the major blood vessel at a preset magnification; a second highlighting method that displays the major blood vessel in a preset color; a third highlighting method that blinks the outline of the major blood vessel; a fourth highlighting method that displays, on the major blood vessel, text identifying it; and a fifth highlighting method that displays the major blood vessel from multiple angles.
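A minimal sketch of how the five highlighting modes could be combined and dispatched; the mode table stands in for a real renderer and is purely illustrative:

```python
# Hypothetical highlight dispatcher for the five disclosed modes.
HIGHLIGHTS = {
    "zoom":  lambda v: f"zoom in on {v} at preset magnification",   # method 1
    "color": lambda v: f"tint {v} with a preset color",             # method 2
    "blink": lambda v: f"blink the outline of {v}",                 # method 3
    "text":  lambda v: f"draw label text over {v}",                 # method 4
    "multi": lambda v: f"show {v} from multiple angles",            # method 5
}

def highlight(vessel, modes):
    # any single mode, or a combination of two or more, may be applied
    return [HIGHLIGHTS[m](vessel) for m in modes]

print(highlight("left gastric artery", ["zoom", "blink"]))
```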
  • To solve the above problems, the present invention also provides a computer program, combined with a computer (i.e., hardware) and stored in a computer-readable recording medium, for performing the intraoperative vascular navigation method. The computer program performs: a process of constructing a three-dimensional blood vessel model from two-dimensional medical image data of an object using a blood vessel learning model; a process of recognizing the surgical stage of the object in a captured surgical image using a surgical stage learning model; a process of extracting the major blood vessels corresponding to the recognized surgical stage from the constructed three-dimensional blood vessel model and registering them to a navigation image; and a process of providing the navigation image to a user, wherein the navigation image includes blood vessels branching from the major blood vessels, and the branched blood vessels can be added or removed based on user input.
  • An intraoperative vascular navigation system for solving the above problems includes: a medical imaging device for capturing a surgical image; a display unit for providing a surgical navigation image to a user; and a control unit including one or more processors and one or more memories storing instructions that, when executed by the one or more processors, cause the processors to perform operations. The operations performed by the control unit include: constructing a three-dimensional blood vessel model from two-dimensional medical image data of an object using a blood vessel learning model; recognizing the surgical stage of the object in a captured surgical image using a surgical stage learning model; extracting the major blood vessels corresponding to the recognized surgical stage from the constructed three-dimensional blood vessel model and registering them to a navigation image; and providing the navigation image to the user, wherein the navigation image includes blood vessels branching from the major blood vessels, and the branched blood vessels can be added or removed based on user input.
  • According to these embodiments, various information can be provided to the user by registering the three-dimensionally modeled blood vessel image to the stage-specific surgical image recognized during surgery.
  • FIG. 1 is a view showing a robotic surgery system according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating a method for vascular navigation during surgery according to an embodiment of the present invention.
  • FIGS. 3A to 3C are diagrams for explaining a blood vessel segmentation method according to an embodiment of the present invention.
  • FIG. 4 is a view for explaining an example of modeling a branch of a blood vessel according to an embodiment of the present invention.
  • FIG. 5 is a flowchart for explaining a surgical stage learning model according to an embodiment of the present invention.
  • FIG. 6 is a diagram for explaining an example of registering a blood vessel navigation image according to an embodiment of the present invention.
  • FIG. 7 is a view for explaining an example of modeling blood vessel flow according to an embodiment of the present invention.
  • FIGS. 8A and 8B are diagrams for explaining an example of a blood vessel navigation image according to an embodiment of the present invention.
  • An "image" may mean multi-dimensional data composed of discrete image elements (e.g., pixels in a 2D image and voxels in a 3D image).
  • For example, an image may include a medical image of an object obtained by a CT imaging apparatus.
  • An "object" may be a human or an animal, or a part or all of a human or animal.
  • For example, the object may include at least one of organs such as the liver, heart, uterus, brain, breast, and abdomen, and blood vessels.
  • A "user" may be a medical professional such as a doctor, a nurse, a clinical pathologist, or a medical imaging specialist, or a technician who repairs medical devices, but is not limited thereto.
  • "Medical image data" is a medical image captured by medical imaging equipment and includes any medical image from which a three-dimensional model of the object's body can be constructed.
  • Medical image data may include computed tomography (CT) images, magnetic resonance imaging (MRI) images, positron emission tomography (PET) images, and the like.
  • The term "virtual body model" refers to a model generated to match the actual patient's body based on medical image data.
  • The "virtual body model" may be generated by modeling the medical image data in 3D as it is, or may be corrected after modeling to match the conditions of actual surgery.
  • "Virtual surgical data" refers to data covering rehearsal or simulation actions performed on a virtual body model. It may be image data of a rehearsal or simulation performed on a virtual body model in a virtual space, or data recording surgical operations performed on the virtual body model. "Virtual surgical data" may also include training data for training the surgical learning model.
  • "Actual surgical data" refers to data obtained as actual medical staff perform surgery. It may be image data of the surgical site captured during an actual surgical procedure, or data recording surgical operations performed during an actual procedure.
  • A surgical phase refers to a basic stage performed sequentially within the overall procedure of a specific type of operation.
  • The term "computer" includes any of various devices capable of performing arithmetic processing and providing results to a user.
  • Computers include desktop PCs and notebooks (laptops), as well as smartphones, tablet PCs, cellular phones, PCS phones (Personal Communication Service phones), synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminals, palm PCs (Palm Personal Computers), personal digital assistants (PDAs), and the like.
  • When a head-mounted display (HMD) device includes a computing function, the HMD device may also be a computer.
  • The computer may also be a server or a navigation system that receives a request from a client and performs information processing.
  • FIG. 1 is a view showing a robotic surgery system according to an embodiment of the present invention.
  • Referring to FIG. 1, a schematic diagram of a system capable of performing robotic surgery according to an embodiment is shown.
  • The robotic surgery system includes a medical imaging device 10, a navigation system 20, and, provided in the operating room, a control unit 30, an image capturing unit 36, a display 32, and a surgical robot 34.
  • Depending on the embodiment, the medical imaging device 10 may be omitted from the robotic surgery system.
  • The robotic surgery may be performed by the user controlling the surgical robot 34 using the control unit 30, or may be performed automatically by the control unit 30 without the user's manipulation.
  • The navigation system 20 is a computing device including at least one processor, a memory, and a communication unit.
  • The navigation system 20 may include a data-processing server outside the operating room and a graphics-processing terminal device in the operating room, or may include only the graphics-processing terminal device.
  • The control unit 30 is a computing device including at least one processor, a memory, and a communication unit.
  • The control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
  • The control unit 30 may be divided into a user console device and a control device.
  • The image capturing unit 36 includes at least one image sensor. That is, the image capturing unit 36 includes at least one camera device and is used to photograph the surgical site. In one embodiment, the image capturing unit 36 is used in combination with the surgical robot 34; for example, it may include at least one camera coupled to a surgical arm of the surgical robot 34.
  • The image captured by the image capturing unit 36 is displayed on the display 32.
  • The control unit 30 receives information necessary for surgery from the navigation system 20, or generates such information itself, and provides it to the user. For example, the control unit 30 displays the generated or received information on the display 32.
  • The user performs robotic surgery by controlling the movement of the surgical robot 34 through the control unit 30 while viewing the display 32.
  • The navigation system 20 generates information necessary for robotic surgery using medical image data of the object (patient) captured in advance by the medical imaging device 10, and provides the generated information to the control unit 30.
  • The control unit 30 may provide the information received from the navigation system 20 to the user by displaying it on the display 32, and may control the operation of the surgical robot 34 using the received information.
  • The imaging means usable as the medical imaging device 10 is not limited; for example, CT, X-ray, PET, MRI, and various other medical image acquisition means may be used.
  • In the following, each step is described as being performed by a "computer" for convenience of explanation, but the subject performing each step is not limited to a specific device; all or part of each step may be performed by the navigation system 20 or the control unit 30.
  • The surgical image captured by the medical imaging device 10 may be segmented according to various criteria. For example, the surgical image may be segmented based on the types of objects included in the image; a segmentation method based on object type requires the computer to recognize each object.
  • Objects recognized in a surgical image broadly include body parts, objects introduced from the outside, and objects generated internally.
  • Body parts include those that are imaged by medical imaging (e.g., CT) prior to surgery and those that are not.
  • Body parts captured by medical imaging include organs, blood vessels, bones, and tendons, and these may be recognized based on a 3D modeling image generated from the medical image.
  • The position, size, and shape of each body part are recognized in advance through 3D analysis of the medical image.
  • The computer defines an algorithm that can determine, in real time, the position of each body part appearing in the surgical image; based on this, information on the position, size, and shape of each body part included in the surgical image can be obtained without performing separate image recognition.
  • Body parts that are not captured by medical imaging, such as the omentum, must be recognized in real time during surgery.
  • The computer may determine the position and size of the omentum through image recognition, and if blood vessels run inside the omentum, their locations may also be predicted.
  • Objects introduced from the outside include, for example, surgical tools, gauze, and clips. Since these have preset morphological characteristics, the computer can recognize them in real time through image analysis during surgery.
  • Objects generated internally include, for example, bleeding occurring at a body part. These can be recognized in real time by the computer through image analysis during surgery.
  • The movement of organs or the omentum, and the generation of internal objects, are generally caused by the movement of objects introduced from the outside.
  • The surgical image may be divided into several surgical phases based on the movement of each object.
  • In particular, the surgical image may be segmented based on the movements of externally introduced objects, that is, based on actions.
  • The computer determines the type and the movement of each object recognized in the surgical image, so that actions can be recognized.
  • The computer can recognize the type of each action and, further, the cause of each action.
  • The computer can segment the surgical image based on the recognized actions and, through stepwise segmentation, can recognize everything from individual detailed surgical operations up to the type of the entire operation.
  • The computer extracts feature information from the captured surgical image using a surgical stage learning model trained by machine learning with a convolutional neural network and, based on the feature information, can divide the images by surgical stage or recognize which stage of the surgery is currently in progress.
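As an illustration only (the patent does not disclose an architecture), a per-frame convolutional stage classifier might be wired up as follows; the layer sizes and the 21-label head are assumptions, the latter borrowed from the roughly-21-stage example later in this document:

```python
# Minimal per-frame surgical-stage classifier sketch (assumed 21 stage labels).
import torch
import torch.nn as nn

class StageNet(nn.Module):
    def __init__(self, num_stages=21):
        super().__init__()
        self.features = nn.Sequential(            # small CNN feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_stages)

    def forward(self, frames):                    # frames: (B, 3, H, W)
        feats = self.features(frames).flatten(1)  # feature information per frame
        return self.classifier(feats)             # logits over surgical stages

net = StageNet()
logits = net(torch.randn(1, 3, 224, 224))         # one surgical-video frame
print("predicted stage index:", logits.argmax(dim=1).item())
```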
  • The computer may determine, from the determined actions, a predefined type of operation corresponding to the surgical image.
  • By determining the type of surgery, information on the entire surgical process may be acquired.
  • One surgical process may be selected according to a doctor's choice or based on the actions recognized up to a specific point in time.
  • The computer may recognize and predict the surgical stage based on the acquired surgical process. For example, when a specific step in a series of surgical steps is recognized, the subsequent steps may be predicted, or candidates for the possible next steps may be selected.
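Given a fixed surgical process, predicting the subsequent steps can be as simple as a lookup in the ordered stage list. A sketch with invented stage names:

```python
# Assumed ordered process for one surgery type; stage names are illustrative.
PROCESS = ["port placement", "liver retraction", "greater curvature dissection",
           "left gastric ligation", "duodenal transection", "anastomosis"]

def predict_next(recognized_stage, k=2):
    """Return up to k candidate stages expected after the recognized one."""
    i = PROCESS.index(recognized_stage)
    return PROCESS[i + 1 : i + 1 + k]

print(predict_next("greater curvature dissection"))  # candidates for what follows
```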
  • The computer extracts navigation information about the major vessels corresponding to each surgical stage and the vessels branching from those major vessels along the blood flow, thereby assisting the user in performing effective surgery.
  • The computer may determine whether bleeding has occurred due to a surgical error based on image recognition of the surgical stage.
  • The computer can determine the location, time, and magnitude of each bleeding event, and can also determine whether the surgery should be stopped because of bleeding. Accordingly, in one embodiment, the computer may be used to provide data on error and bleeding situations in a surgical result report, to exclude unnecessary operations or mistakes from the surgical process, and to streamline the surgical process.
  • The operations performed on the computer may include: constructing a 3D blood vessel model from the 2D medical image data of the object using the blood vessel learning model; recognizing the surgical stage of the object in the captured surgical image using the surgical stage learning model; extracting the major blood vessels corresponding to the recognized surgical stage from the constructed 3D blood vessel model and registering them to the navigation image; and providing the navigation image to the user.
  • The computer may further perform an operation of adding or removing a blood vessel branching from a major blood vessel based on user input.
  • The navigation image to which the major blood vessels are registered may be a three-dimensional object image modeled from the captured surgical image or from the two-dimensional medical image data of the object.
  • The computer collects position information of the captured surgical image screen and performs an operation of synchronizing (SYNC) the three-dimensionally modeled major blood vessels and object to the surgical image based on the recognized surgical stage and the position information.
  • FIG. 2 is a flowchart illustrating a method for vascular navigation during surgery according to an embodiment of the present invention.
  • Each of the steps shown in FIG. 2 is performed in time series by the navigation system 20 or the control unit 30 shown in FIG. 1.
  • Each step is described as being performed by a "computer" for convenience of explanation, but the subject performing each step is not limited to a specific device; all or part of each step may be performed by the navigation system 20 or the control unit 30.
  • In step S200, the computer according to an embodiment builds a 3D blood vessel model from the two-dimensional medical image data of the object using the blood vessel learning model.
  • The blood vessel learning model may find and model the major blood vessels, the vessels branching from them, and the blood flow relative to the major vessels, based on user-specific information and two-dimensional medical image data obtained from clinical data.
  • The blood vessel learning model will be described later with reference to FIGS. 3A to 3C and FIG. 4.
  • In step S210, the computer according to an embodiment recognizes the surgical stage of the object in the captured surgical image using the surgical stage learning model.
  • The surgical stage learning model may be machine-learned by defining, as labels, at least one stage required for the operation on the object using virtual surgical data, and by inputting training images for each defined label.
  • Recognizing the surgical stage of the object using the surgical stage learning model may mean recognizing the stage by applying the trained model to actual surgical data.
  • The actual surgical data may include tracking data for the movement of the robot arm or data obtained from surgical image frames.
  • In step S220, the computer according to an embodiment extracts the major blood vessels corresponding to the recognized surgical stage from the constructed 3D blood vessel model and registers them to the navigation image.
  • The navigation image to which the major blood vessels are registered may be an image of the object modeled in three dimensions using the captured surgical image or the medical image data.
  • In step S230, the computer according to an embodiment provides the registered navigation image to the user.
  • In this case, a blood vessel branching from a major blood vessel may be added or removed along the blood flow based on user input.
  • An example of how the blood vessel navigation image is provided to the user and manipulated will be described later with reference to FIGS. 7, 8A, and 8B.
  • FIGS. 3A to 3C are diagrams for explaining a blood vessel segmentation method according to an embodiment of the present invention.
  • Referring to FIG. 3A, the computer generates 3D image data 301 of the object (organs and blood vessels) through a process of converting the medical image data (CT images) into 3D objects. More specifically, after the veins and arteries are modeled from the medical image data, vessel types are assigned by sequentially applying the vessel bifurcation points.
  • Labeling that separates out the major blood vessels may be performed for each surgical stage, and machine learning may be performed on each labeled vessel.
  • The names of the organs and blood vessels appearing in the surgical image are listed, and the listed names are tagged onto the images so that each learning model can be machine-learned. Each organ and blood vessel can then be segmented using the machine-learned blood vessel learning model. Accordingly, as shown in FIG. 3B, the user can check the list 302 of blood vessels labeled with the names of organs and vessels, and further, as shown in FIG. 3C, the user can check the image 303 tagged with those names.
  • FIG. 4 is a diagram for explaining an example of modeling a branch of a blood vessel according to an embodiment of the present invention.
  • The computer may separate a blood vessel from the 3D image through the blood vessel learning model and then label the vessel name by generating a point for each branch point 401 of the vessel. Therefore, according to the intraoperative vascular navigation method of an embodiment, since the blood vessel learning model is machine-learned around branch points, blood vessels can be displayed so that they are added or removed at branch points when the navigation image is provided to the user.
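A sketch of the bookkeeping this implies: each bifurcation point becomes an anchor at which an entire downstream subtree can be shown or hidden. The branch table is invented for illustration:

```python
# Branch points as toggle anchors: hiding a bifurcation hides its subtree.
BRANCHES = {  # bifurcation point -> vessels distal to it (illustrative)
    "celiac trunk": ["left gastric", "splenic", "common hepatic"],
    "splenic": ["short gastric", "left gastroepiploic"],
}

def visible_vessels(roots, hidden_points):
    shown, stack = [], list(roots)
    while stack:
        v = stack.pop()
        shown.append(v)
        if v not in hidden_points:            # subtree removed at this branch point
            stack.extend(BRANCHES.get(v, []))
    return shown

print(visible_vessels(["celiac trunk"], hidden_points={"splenic"}))
```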
  • In this way, the computer can more accurately find and model the major blood vessels, the vessels branching from them, and the blood flow relative to the major vessels, based on user-specific information and two-dimensional medical image data obtained from clinical data.
  • Through such machine learning, the vascular anatomy that differs from person to person can be accurately recognized from the medical image data, and the locations of blood vessels can be accurately guided to the user during future surgery.
  • FIG. 5 is a flowchart for explaining a surgical stage learning model according to an embodiment of the present invention.
  • In step S500, the computer selects the type of surgery.
  • The type of surgery may be selected automatically by recognizing the object or objects in the input image, or may be entered directly by the user.
  • In step S510, the computer defines each stage of the operation as a label using actual surgical data.
  • Each surgical phase may be divided automatically by recognizing camera movement, instrument movement, and organs in the images, or the user may classify images that fit a standardized surgical process as training images and train the model in advance.
  • In one example, the entire surgical process can be defined as about 21 stages.
  • In step S520, the computer selects training images corresponding to the defined labels.
  • The training images may be selected by the computer automatically segmenting the input surgical images, or the user may select and input them in advance based on actual surgical data.
  • The computer may then perform machine learning using a convolutional neural network on the selected training images. For example, when the frames of each surgical image defined in this way are trained with an action-recognition model such as a SlowFast network, a highly accurate surgical stage learning model can be obtained.
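The SlowFast mention suggests fine-tuning a clip-level action-recognition model on the labeled stage clips. A hedged sketch assuming the publicly available PyTorchVideo hub model; the head layout (blocks[-1].proj with 2304 input features) should be verified against the installed version, and the random stand-in clip replaced with real labeled data:

```python
# Sketch: fine-tune a SlowFast video model to the defined stage labels.
import torch
import torch.nn as nn

NUM_STAGES = 21  # per the ~21-stage example above

model = torch.hub.load("facebookresearch/pytorchvideo",
                       model="slowfast_r50", pretrained=True)
model.blocks[-1].proj = nn.Linear(2304, NUM_STAGES)  # swap the Kinetics head

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a random clip; real inputs would be the
# labeled training images selected for each stage label.
slow = torch.randn(1, 3, 8, 224, 224)    # slow pathway: sparse frames
fast = torch.randn(1, 3, 32, 224, 224)   # fast pathway: dense frames
label = torch.tensor([3])                 # e.g. stage index 3

logits = model([slow, fast])
loss = loss_fn(logits, label)
loss.backward()
optimizer.step()
print("stage loss:", loss.item())
```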
  • The trained surgical stage learning model is used to recognize the surgical stage during surgery more accurately and automatically.
  • FIG. 6 is a diagram for explaining an example of matching blood vessel navigation images according to an embodiment of the present invention.
  • The computer may collect position information of the surgical image screen and synchronize (SYNC) the three-dimensionally modeled major blood vessels and object to the surgical image based on the surgical stage and the position information.
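Synchronization of this kind amounts to projecting the 3D vessel model into the current camera view. A bare-bones pinhole-projection sketch, where the intrinsics K and the pose (R, t) stand in for the collected screen position information and carry purely illustrative values:

```python
# Overlaying 3D vessel points onto the 2D surgical image via a pinhole camera.
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 50.0])                 # assumed camera pose

def project(points_3d):
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                     # camera -> image plane
    return uv[:, :2] / uv[:, 2:3]      # perspective divide -> pixel coordinates

vessel_pts = np.array([[0.0, 0.0, 0.0], [5.0, 2.0, 1.0]])    # model vertices
print(project(vessel_pts))            # pixel locations at which to draw the overlay
```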
  • The navigation image to which the blood vessels are registered may be an image of the object (organs and blood vessels) modeled in 3D using the captured surgical image or the medical image data.
  • For example, the computer extracts the major blood vessels and the object for the surgical stage recognized in the gastric cancer surgery image 601 from the 3D blood vessel model, and may then register the extracted major blood vessels with the three-dimensionally modeled image of the object and provide the result to the user.
  • FIG. 7 is a view for explaining an example of modeling blood vessel flow according to an embodiment of the present invention.
  • A three-dimensional blood vessel model according to an embodiment of the present invention may be modeled so that surrounding blood vessels can be navigated along the blood flow relative to the major blood vessels.
  • This is possible because the flow and depth of each vessel recognized from the medical image data can be classified around its branch points.
  • Referring to FIG. 7, the computer defines steps for displaying blood vessels by depth, so that guidance to the surrounding vessels can proceed around the selected main blood vessel 701, and determines the blood vessel flow 702 for each depth.
  • Accordingly, peripheral blood vessels may be added to or removed from the display according to the flow at each depth.
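A sketch of depth-stepped navigation around a selected main vessel; the downstream-flow table and the depth rule are assumptions for illustration:

```python
# Show surrounding vessels up to a chosen depth along the flow from the
# selected main vessel; increasing/decreasing depth adds/removes branches.
FLOW = {  # vessel -> downstream vessels (illustrative)
    "celiac trunk": ["left gastric", "splenic", "common hepatic"],
    "splenic": ["short gastric", "left gastroepiploic"],
}

def vessels_up_to_depth(main_vessel, max_depth):
    frontier, shown = [(main_vessel, 0)], []
    while frontier:
        vessel, depth = frontier.pop()
        shown.append((vessel, depth))
        if depth < max_depth:
            frontier += [(v, depth + 1) for v in FLOW.get(vessel, [])]
    return shown

print(vessels_up_to_depth("celiac trunk", max_depth=1))  # direct branches only
print(vessels_up_to_depth("celiac trunk", max_depth=2))  # adds sub-branches
```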
  • FIGS. 8A and 8B show examples of a blood vessel navigation image according to an embodiment of the present invention.
  • Referring to FIG. 8A, the computer may display only the major blood vessels 801 synchronized with the surgical image on the navigation image screen, thereby providing the vascular navigation image the user needs at that surgical stage.
  • The computer identifies a plurality of main parts of interest (e.g., major blood vessels) by analyzing the surgical requirements included in the information needed for surgery, and reconstructs the identified main parts. It then displays a list of these main parts on the blood vessel navigation image; when a specific main part is selected from the list, the scene (or viewpoint) of the navigation image moves to the scene (or viewpoint) of the selected main part, and that main part can be highlighted within the moved scene.
  • The highlighting, which makes the specific main part stand out within its scene, may use at least one, or a combination of two or more, of: a first highlighting method that zooms in on the specific main part at a preset magnification; a second highlighting method that displays it in a preset color; a third highlighting method that blinks its outline; a fourth highlighting method that displays identifying text on it; and a fifth highlighting method that displays it from multiple angles. That is, the computer can highlight and mark the parts of the virtual body model that need to be examined carefully during the surgical operation.
  • The parts of the virtual body model that need careful observation during the surgical operation may be the bifurcation point of a specific blood vessel or the starting point of a specific blood vessel.
  • For example, the starting position of a specific blood vessel entering a specific organ may be displayed using the highlighting methods described above.
  • The computer may define in advance the main parts of the virtual body model that need to be examined carefully during the operation, and show the user a list of the defined main parts.
  • Through this list, the user can identify the parts of the virtual body model that require careful attention before or during the operation, thereby compensating for the limited field of view inherent to minimally invasive surgery and minimizing damage to related organs and blood vessels as well as postoperative side effects.
  • A software module may reside in random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other type of computer-readable recording medium well known in the art to which the present invention pertains.

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Robotics (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Gynecology & Obstetrics (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An intraoperative vascular navigation method and system are disclosed. According to one embodiment, a method by which a system performs intraoperative vascular navigation comprises the steps of: constructing a three-dimensional vascular model from two-dimensional medical image data of an object by using a vascular learning model; recognizing the surgical stage of the object from a captured surgical image by using a surgical stage learning model; extracting, from the constructed three-dimensional vascular model, the major blood vessels corresponding to the recognized surgical stage and registering them to a navigation image; and providing the navigation image to a user, wherein the navigation image includes the blood vessels branching from the major blood vessels, and the branched blood vessels can be added or removed on the basis of user input.

Description

Intraoperative vascular navigation method and system
The present invention relates to a method and system for intraoperative vascular navigation. More specifically, it relates to a method and system that recognize the individual stages of an intraoperative image through a learning model and provide a 3D-modeled screen in which the patient's blood vessel image, extracted from medical image data (e.g., a CT image), is superimposed on the recognized stage-specific surgical image.
In general, when doctors establish a patient's surgical plan before surgery, they plan the operation by referring to two-dimensional medical images of the surgical site, such as the patient's CT (computed tomography) or MRI (magnetic resonance imaging) scans. In this case, it is usually difficult to match the location of a lesion inside the patient's organs, or the relationship between the lesion and nearby blood vessels, to the two-dimensional medical image, and there is no means of using the images as organ information during surgery, so there are many limitations in identifying the location of the lesion and the distribution of the surrounding blood vessels intraoperatively.
To address this problem, techniques for rendering CT or MRI images as three-dimensional images are being studied.
Even when a surgical plan is established using medical images rendered as three-dimensional augmented reality, a doctor using a surgical navigation system risks causing unwanted bleeding or damaging organs if the vascular anatomy, which differs from person to person, cannot be recognized accurately.
The problem to be solved by the present invention is to recognize the individual phases of an intraoperative image through a surgical stage learning model and to provide a 3D-modeled screen in which the patient's blood vessel image, extracted from medical image data (e.g., a CT image), is superimposed on the recognized stage-specific surgical image.
In addition, a patient-specific three-dimensional anatomical model can be constructed from the patient's medical image data using a blood vessel learning model, allowing the operator to add or exclude branching blood vessels as selected in the surgical image of a specific stage.
The problems to be solved by the present invention are not limited to those mentioned above, and other problems not mentioned will be clearly understood by those skilled in the art from the following description.
To solve the above problems, an intraoperative vascular navigation method performed by a system according to an embodiment of the present invention includes: constructing a three-dimensional blood vessel model from two-dimensional medical image data of an object using a blood vessel learning model; recognizing the surgical stage of the object in a captured surgical image using a surgical stage learning model; extracting the major blood vessels corresponding to the recognized surgical stage from the constructed three-dimensional blood vessel model and registering them to a navigation image; and providing the navigation image to a user, wherein the navigation image includes blood vessels branching from the major blood vessels, and the branched blood vessels can be added or removed based on user input.
In this case, the navigation image to which the major blood vessels are registered may include an image in which the object is 3D-modeled from the medical image data, or the captured surgical image itself.
In addition, the surgical stage learning model may be machine-learned by defining, as labels, at least one stage required for the operation on the object using surgical data and by inputting training images for each defined label; the surgical data may include tracking data for the movement of a robot arm or data obtained from surgical image frames.
In addition, the blood vessel learning model may assign vessel types by sequentially applying vessel bifurcation points after modeling the veins and arteries from the medical image data.
In addition, registering the major blood vessels to the navigation image may include collecting position information of the surgical image screen and synchronizing (SYNC) the three-dimensionally modeled major blood vessels and object to the surgical image based on the surgical stage and the position information.
In addition, providing the navigation image may include highlighting the extracted major blood vessels within the navigation image. Highlighting the major blood vessels may include displaying, on the navigation image, a list of items for the extracted major blood vessels; when a specific item is selected from the list, the scene of the navigation image moves to the scene of the major blood vessel corresponding to the selected item, and that major blood vessel is highlighted within the moved scene. The highlighting may use at least one, or a combination of two or more, of: a first highlighting method that zooms in on the major blood vessel at a preset magnification; a second highlighting method that displays it in a preset color; a third highlighting method that blinks its outline; a fourth highlighting method that displays identifying text on it; and a fifth highlighting method that displays it from multiple angles.
The present invention also provides a computer program, combined with a computer (i.e., hardware) and stored in a computer-readable recording medium, for performing the intraoperative vascular navigation method. The computer program performs: a process of constructing a three-dimensional blood vessel model from two-dimensional medical image data of an object using a blood vessel learning model; a process of recognizing the surgical stage of the object in a captured surgical image using a surgical stage learning model; a process of extracting the major blood vessels corresponding to the recognized surgical stage from the constructed three-dimensional blood vessel model and registering them to a navigation image; and a process of providing the navigation image to a user, wherein the navigation image includes blood vessels branching from the major blood vessels, and the branched blood vessels can be added or removed based on user input.
An intraoperative vascular navigation system according to an embodiment of the present invention includes: a medical imaging device for capturing a surgical image; a display unit for providing a surgical navigation image to a user; and a control unit including one or more processors and one or more memories storing instructions that, when executed by the one or more processors, cause the processors to perform operations. The operations include: constructing a three-dimensional blood vessel model from two-dimensional medical image data of an object using a blood vessel learning model; recognizing the surgical stage of the object in a captured surgical image using a surgical stage learning model; extracting the major blood vessels corresponding to the recognized surgical stage from the constructed three-dimensional blood vessel model and registering them to a navigation image; and providing the navigation image to the user, wherein the navigation image includes blood vessels branching from the major blood vessels, and the branched blood vessels can be added or removed based on user input.
According to an embodiment of the present invention, various information can be provided to the user by registering the three-dimensionally modeled blood vessel image to the stage-specific surgical image recognized during surgery.
In addition, by constructing a patient-specific three-dimensional anatomical model through machine learning and allowing branching blood vessels to be added or excluded according to the user's selection in the surgical image of a specific stage, the operation can proceed smoothly, and the user can accurately recognize the vascular anatomy that differs from person to person, reducing the likelihood of causing unwanted bleeding or damaging organs.
The effects of the present invention are not limited to those mentioned above, and other effects not mentioned will be clearly understood by those skilled in the art from the following description.
FIG. 1 is a view showing a robotic surgery system according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating an intraoperative vascular navigation method according to an embodiment of the present invention.
FIGS. 3A to 3C are diagrams for explaining a blood vessel segmentation method according to an embodiment of the present invention.
FIG. 4 is a diagram for explaining an example of modeling the branches of a blood vessel according to an embodiment of the present invention.
FIG. 5 is a flowchart for explaining a surgical stage learning model according to an embodiment of the present invention.
FIG. 6 is a diagram for explaining an example of registering a blood vessel navigation image according to an embodiment of the present invention.
FIG. 7 is a diagram for explaining an example of modeling blood vessel flow according to an embodiment of the present invention.
FIGS. 8A and 8B are diagrams for explaining examples of a blood vessel navigation image according to an embodiment of the present invention.
The advantages and features of the present invention, and the methods of achieving them, will become apparent with reference to the embodiments described below in detail in conjunction with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be implemented in various different forms; these embodiments are provided only so that the disclosure of the present invention is complete and so that the scope of the invention is fully conveyed to those of ordinary skill in the art to which the present invention pertains, and the present invention is defined only by the scope of the claims.
The terminology used herein is for describing the embodiments and is not intended to limit the present invention. As used herein, the singular also includes the plural unless the context clearly indicates otherwise. As used herein, "comprises" and/or "comprising" does not exclude the presence or addition of one or more components other than those stated. Like reference numerals refer to like elements throughout the specification, and "and/or" includes each of the mentioned elements and every combination of one or more of them. Although the terms "first," "second," and so on are used to describe various elements, these elements are of course not limited by those terms; the terms are used only to distinguish one element from another. Accordingly, a first element mentioned below may also be a second element within the technical spirit of the present invention.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the meanings commonly understood by those of ordinary skill in the art to which the present invention belongs. In addition, terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless expressly and specifically defined.
본 명세서에서 "영상"은 이산적인 영상 요소들(예를 들어, 2차원 영상에 있어서의 픽셀들 및 3D 영상에 있어서의 복셀들)로 구성된 다차원(multi-dimensional) 데이터를 의미할 수 있다. 예를 들어, 영상은 CT 촬영 장치에 의해 획득된 대상체의 의료 영상 등을 포함할 수 있다. In this specification, "image" may mean multi-dimensional data composed of discrete image elements (eg, pixels in a 2D image and voxels in a 3D image). For example, the image may include a medical image of the object obtained by the CT imaging apparatus.
본 명세서에서 "대상체(object)"는 사람 또는 동물, 또는 사람 또는 동물의 일부 또는 전부일수 있다. 예를 들어, 대상체는 간, 심장, 자궁, 뇌, 유방, 복부 등의 장기, 및 혈관 중 적어도 하나를 포함할 수 있다. As used herein, an “object” may be a human or an animal, or a part or all of a human or animal. For example, the object may include at least one of organs such as liver, heart, uterus, brain, breast, abdomen, and blood vessels.
본 명세서에서 "사용자"는 의료 전문가로서 의사, 간호사, 임상 병리사, 의료 영상 전문가 등이 될 수 있으며, 의료 장치를 수리하는 기술자가 될 수 있으나, 이에 한정되지 않는다.As used herein, a “user” may be a medical professional, such as a doctor, a nurse, a clinical pathologist, or a medical imaging specialist, and may be a technician repairing a medical device, but is not limited thereto.
본 명세서에서 "의료영상데이터"는 의료영상 촬영장비로 촬영되는 의료영상으로서, 대상체의 신체를 3차원 모델로 구현 가능한 모든 의료영상을 포함한다. "의료영상데이터"는 컴퓨터 단층촬영(Computed Tomography;CT)영상, 자기공명영상(Magnetic Resonance Imaging; MRI), 양전자 단층촬영(Positron Emission Tomography; PET) 영상 등을 포함할 수 있다.As used herein, "medical image data" is a medical image captured by a medical imaging device, and includes all medical images that can be implemented as a three-dimensional model of the body of an object. "Medical image data" may include a computed tomography (CT) image, a magnetic resonance imaging (MRI), a positron emission tomography (PET) image, and the like.
본 명세서에서 "가상신체모델"은 의료영상데이터를 기반으로 실제 환자의 신체에 부합하게 생성된 모델을 의미한다. "가상신체모델"은 의료영상데이터를 그대로 3차원으로 모델링하여 생성한 것일 수도 있고, 모델링 후에 실제 수술 시와 같게 보정한 것일 수도 있다.As used herein, the term "virtual body model" refers to a model generated to match the actual patient's body based on medical image data. The "virtual body model" may be generated by modeling medical image data in 3D as it is, or may be corrected after modeling to be the same as during actual surgery.
본 명세서에서 "가상수술데이터"는 가상신체모델에 대해 수행되는 리허설 또는 시뮬레이션 행위를 포함하는 데이터를 의미한다. "가상수술데이터"는 가상공간에서 가상신체모델에 대해 리허설 또는 시뮬레이션이 수행된 영상데이터일 수도 있고, 가상신체모델에 대해 수행된 수술동작에 대해 기록된 데이터일 수도 있다. 또한, "가상수술데이터"는 수술학습모델을 학습시키기 위한 학습데이터를 포함할 수도 있다.As used herein, “virtual surgical data” refers to data including rehearsal or simulation actions performed on a virtual body model. “Virtual surgical data” may be image data on which rehearsal or simulation is performed on a virtual body model in a virtual space, or data recorded on a surgical operation performed on the virtual body model. In addition, "virtual surgery data" may include learning data for learning the surgical learning model.
본 명세서에서 "실제수술데이터"는 실제 의료진이 수술을 수행함에 따라 획득되는 데이터를 의미한다. "수술데이터"는 실제 수술과정에서 수술부위를 촬영한 영상데이터일 수도 있고, 실제 수술과정에서 수행된 수술동작에 대해 기록된 데이터일 수도 있다.As used herein, “actual surgical data” refers to data obtained by actual medical staff performing surgery. "Surgery data" may be image data of a surgical site taken in an actual surgical procedure, or may be data recorded on a surgical operation performed in an actual surgical procedure.
본 명세서에서 수술단계(phase)는 특정한 수술유형의 전체 수술에서 순차적으로 수행되는 기본단계를 의미한다.In the present specification, a surgical phase refers to a basic phase that is sequentially performed in the entire operation of a specific type of operation.
본 명세서에서 "컴퓨터"는 연산처리를 수행하여 사용자에게 결과를 제공할 수 있는 다양한 장치들이 모두 포함된다. 예를 들어, 컴퓨터는 데스크 탑 PC, 노트북(Note Book) 뿐만 아니라 스마트폰(Smart phone), 태블릿 PC, 셀룰러폰(Cellular phone), 피씨에스폰(PCS phone; Personal Communication Service phone), 동기식/비동기식 IMT-2000(International Mobile Telecommunication-2000)의 이동 단말기, 팜 PC(Palm Personal Computer), 개인용 디지털 보조기(PDA; Personal Digital Assistant) 등도 해당될 수 있다. 또한, 헤드마운트 디스플레이(Head Mounted Display; HMD) 장치가 컴퓨팅 기능을 포함하는 경우, HMD장치가 컴퓨터가 될 수 있다. 또한, 컴퓨터는 클라이언트로부터 요청을 수신하여 정보처리를 수행하는 서버 또는 네비게이션 시스템이 될 수 있다. As used herein, the term “computer” includes various devices capable of providing a result to a user by performing arithmetic processing. For example, computers include desktop PCs, notebooks (Note Books) as well as smart phones, tablet PCs, cellular phones, PCS phones (Personal Communication Service phones), synchronous/asynchronous A mobile terminal of International Mobile Telecommunication-2000 (IMT-2000), a Palm Personal Computer (PC), a Personal Digital Assistant (PDA), and the like may also be applicable. Also, when a head mounted display (HMD) device includes a computing function, the HMD device may be a computer. In addition, the computer may be a server or a navigation system that receives a request from a client and performs information processing.
Hereinafter, embodiments of the present invention are described in detail with reference to the accompanying drawings.
FIG. 1 illustrates a robotic surgery system according to an embodiment of the present invention.
Referring to FIG. 1, a simplified schematic of a system capable of performing robotic surgery according to an embodiment is shown.
As shown in FIG. 1, the robotic surgery system includes medical imaging equipment 10, a navigation system 20, and, in the operating room, a controller 30, an imaging unit 36, a display 32, and a surgical robot 34. Depending on the embodiment, the medical imaging equipment 10 may be omitted from the robotic surgery system.
In an embodiment, robotic surgery may be performed by the user controlling the surgical robot 34 through the controller 30, or may be performed automatically by the controller 30 without the user's control (manipulation).
The navigation system 20 is a computing device including at least one processor, a memory, and a communication unit. In an embodiment, the navigation system 20 may include a data processing server outside the operating room and a graphics processing terminal inside the operating room, or may include only the graphics processing terminal.
The controller 30 includes a computing device with at least one processor, a memory, and a communication unit. In an embodiment, the controller 30 includes hardware and software interfaces for controlling the surgical robot 34. The controller 30 may also be divided into a user console device and a control device.
The imaging unit 36 includes at least one image sensor. That is, the imaging unit 36 includes at least one camera device and is used to photograph the surgical site. In an embodiment, the imaging unit 36 is used in combination with the surgical robot 34. For example, the imaging unit 36 may include at least one camera coupled to a surgical arm of the surgical robot 34.
In an embodiment, the image captured by the imaging unit 36 is displayed on the display 32.
The controller 30 receives information necessary for the surgery from the navigation system 20, or generates such information itself and provides it to the user. For example, the controller 30 displays the generated or received surgical information on the display 32.
For example, the user performs robotic surgery by manipulating the controller 30 while viewing the display 32, thereby controlling the movement of the surgical robot 34.
The navigation system 20 generates the information necessary for robotic surgery using medical image data of the object (patient) captured in advance by the medical imaging equipment 10, and provides the generated information to the controller 30.
The controller 30 may provide the information received from the navigation system 20 to the user by displaying it on the display 32, and may control the operation of the surgical robot 34 using the received information.
In an embodiment, the modality used as the medical imaging equipment 10 is not limited; for example, CT, X-ray, PET, MRI, and various other medical image acquisition means may be used.
Hereinafter, for convenience of description, each step is described as being performed by a "computer"; however, the entity performing each step is not limited to a specific device, and all or part of the steps may be performed by the navigation system 20 or the controller 30.
In an embodiment, the surgical image captured by the medical imaging equipment 10 may be segmented according to various criteria. As one example, the surgical image may be segmented based on the types of objects it contains. Segmentation based on object type requires the computer to recognize each object.
Objects recognized in a surgical image broadly include the human body, objects introduced from the outside, and objects generated internally. The human body includes body parts that are captured by the medical imaging (e.g., CT) performed before surgery and body parts that are not.
For example, body parts captured by medical imaging include organs, blood vessels, bones, and tendons; these body parts can be recognized based on the 3D modeling image generated from the medical image.
Specifically, the position, size, and shape of each body part are identified in advance by a 3D analysis method based on the medical image. The computer defines an algorithm that can determine, in real time, the position of the body part corresponding to the surgical image, and on that basis can obtain information on the position, size, and shape of each body part in the surgical image without performing separate image recognition.
Body parts not captured by medical imaging include the omentum and the like; since these do not appear in the medical image, they must be recognized in real time during surgery. For example, the computer may determine the position and size of the omentum through image recognition, and when blood vessels run inside the omentum, it may also predict their locations.
Objects introduced from the outside include, for example, surgical instruments, gauze, and clips. Since these have predetermined morphological characteristics, the computer can recognize them in real time through image analysis during surgery.
Objects generated internally include, for example, bleeding occurring at a body part. These, too, can be recognized in real time by the computer through image analysis during surgery.
The movement of organs or the omentum, and the causes of internally generated objects, are all attributable to the movement of objects introduced from the outside.
Therefore, in addition to recognizing each object, the surgical image can be divided into several surgical phases based on the movement of each object. In an embodiment, the surgical image may be segmented based on the movements, i.e., the actions, of externally introduced objects.
The computer determines the type of each object recognized in the surgical image and recognizes the movement of each object, that is, its action, based on the specific motions predefined for each object type, on sequences of such motions, and on the situations or results that arise from those motions.
The computer can recognize the type of each action and, further, the cause of each action. The computer can segment the surgical image based on the recognized actions and, through stepwise segmentation, recognize everything from each detailed surgical motion up to the type of the overall operation.
For example, based on a surgical phase learning model trained by machine learning with a convolutional neural network (CNN), the computer can extract feature information from the captured surgical image and, using that feature information, segment the image by surgical phase or recognize which surgical phase is currently under way.
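By way of illustration only — the specification discloses no source code — the following is a minimal sketch, in Python with PyTorch/torchvision as an assumed toolchain, of the kind of CNN frame classifier such a surgical phase learning model could be built on. The backbone choice, the 21-phase count, and all names are assumptions, not the disclosed implementation.

```python
# A minimal sketch (not the patented implementation) of CNN-based
# surgical-phase recognition: a pretrained ResNet backbone is re-headed
# to classify each video frame into one of N surgical phases.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_PHASES = 21  # e.g., the ~21 phases described for gastric cancer surgery

class PhaseClassifier(nn.Module):
    def __init__(self, num_phases: int = NUM_PHASES):
        super().__init__()
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Replace the ImageNet head with a surgical-phase head.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_phases)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) -> (batch, num_phases) logits
        return self.backbone(frames)

# Standard ImageNet-style preprocessing for each surgical video frame.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```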
Furthermore, from the determination of the actions, the computer can determine the predefined type of operation corresponding to the surgical image. Once the type of operation is determined, information on the entire surgical process can be obtained. When a plurality of surgical processes exist for the same type of operation, one surgical process may be selected according to the surgeon's choice or based on the actions recognized up to a specific point in time.
The computer can recognize and predict surgical phases based on the obtained surgical process. For example, when a specific phase in the sequence of surgical steps is recognized, the subsequent phases can be predicted, or candidates for the possible next phases can be narrowed down, as in the sketch below.
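As a toy illustration of narrowing the candidate next phases from a known surgical process, consider a hand-written transition graph; all phase names here are invented for the example and do not come from the specification.

```python
# A toy sketch: a predefined surgical process as a transition graph, where
# the recognized current phase narrows the candidates for the next phase.
SURGICAL_PROCESS = {
    "preparation": ["expose_organs"],
    "expose_organs": ["cut_vessels"],
    "cut_vessels": ["cut_lower_left_stomach", "cut_lower_right_stomach"],
    "cut_lower_left_stomach": ["cut_duodenum_junction"],
    "cut_lower_right_stomach": ["cut_duodenum_junction"],
}

def candidate_next_phases(current_phase: str) -> list:
    # An empty list means the process graph defines no successor.
    return SURGICAL_PROCESS.get(current_phase, [])

print(candidate_next_phases("cut_vessels"))
# ['cut_lower_left_stomach', 'cut_lower_right_stomach']
```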
Accordingly, the error rate of surgical-image recognition caused by the omentum and the like can be greatly reduced. In addition, when the surgical image deviates from the predictable surgical phase by more than a predetermined error range, it may be recognized that a surgical error situation has occurred. For example, if phase switching that departs from the established surgical process occurs frequently, a surgical error situation may be recognized.
Based on image recognition of the surgical phase, the computer can also extract navigation information on the main blood vessels corresponding to each surgical phase and on the vessels branching from those main vessels along the blood flow, and provide it to the user, thereby assisting an effective operation.
Based on image recognition of the surgical phase, the computer can also determine whether bleeding due to a surgical error has occurred.
Specifically, the computer can determine the location, time, and magnitude of each bleed, and can determine whether the operation must be halted because of the bleeding. Accordingly, in one embodiment, the computer may provide data on error situations and bleeding situations in a surgical result report, and may be used to eliminate unnecessary motions or mistakes during the procedure and to streamline the surgical process.
To provide intraoperative vascular navigation according to an embodiment, the computation performed by the computer may include: an operation of constructing a three-dimensional blood vessel model from two-dimensional medical image data of the object using a vessel learning model; an operation of recognizing, in the captured surgical image, the surgical phase of the object using a surgical phase learning model; an operation of extracting the main blood vessels corresponding to the recognized surgical phase from the constructed three-dimensional vessel model and registering them to a navigation image; and an operation of providing the navigation image to the user.
In providing the navigation image, the computer may further perform an operation of adding or removing, based on a user input, blood vessels branching from the main vessels.
The navigation image to which the main vessels are registered may be the captured surgical image or a three-dimensional object image modeled from the two-dimensional medical image data of the object. The computer may also collect position information of the captured surgical image screen and perform an operation of synchronizing (SYNC) the three-dimensionally modeled main vessels and the object to the surgical image based on the recognized surgical phase and the position information.
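Structurally, the operations above compose into a simple pipeline. The sketch below shows only that structure; every function body is a placeholder stub, not the patented algorithm, and all names are assumptions.

```python
# A structural sketch of the intraoperative navigation pipeline described
# above. Each function body is a stub; the real models are learned.
from dataclasses import dataclass, field

@dataclass
class VesselModel:
    vessels: dict = field(default_factory=dict)  # name -> 3D geometry (placeholder)

def build_3d_vessel_model(medical_image_data) -> VesselModel:
    return VesselModel()  # would apply the vessel learning model to 2D data

def recognize_phase(surgical_frame) -> str:
    return "unknown"  # would apply the surgical phase learning model

def extract_main_vessels(model: VesselModel, phase: str) -> list:
    return []  # would select the main vessels mapped to this phase

def register_to_navigation_image(main_vessels: list, navigation_image):
    return navigation_image  # would overlay/synchronize the vessels

def navigate(medical_image_data, surgical_frame):
    model = build_3d_vessel_model(medical_image_data)
    phase = recognize_phase(surgical_frame)
    vessels = extract_main_vessels(model, phase)
    return register_to_navigation_image(vessels, surgical_frame)
```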
Hereinafter, the intraoperative vascular navigation method is described in more detail with reference to the drawings.
FIG. 2 is a flowchart illustrating an intraoperative vascular navigation method according to an embodiment of the present invention.
Each step shown in FIG. 2 is performed in time series by the navigation system 20 or the controller 30 shown in FIG. 1. Hereinafter, for convenience of description, each step is described as being performed by a "computer"; however, the entity performing each step is not limited to a specific device, and all or part of the steps may be performed by the navigation system 20 or the controller 30.
In step S200, the computer according to an embodiment constructs a three-dimensional blood vessel model from the two-dimensional medical image data of the object using a vessel learning model.
Here, the vessel learning model may find and model the main vessels, the vessels branching from the main vessels, and the vessel flow, based on user-specific information obtained from clinical data and on the two-dimensional medical image data.
The vessel learning model is described below with reference to FIGS. 3A to 3C and FIG. 4.
Next, in step S210, the computer according to an embodiment recognizes the surgical phase of the object in the captured surgical image using a surgical phase learning model.
Here, the surgical phase learning model may be machine-learned by defining, as labels, at least one phase required for the surgery of the object using virtual surgical data, and by inputting training images for each defined label.
Recognizing the surgical phase of the object using the surgical phase learning model may consist of recognizing the surgical phase with the trained surgical phase learning model on the basis of actual surgical data.
The actual surgical data may include tracking data on the movement of a robot arm or data obtained from surgical image frames.
The surgical phase learning model is described below with reference to FIG. 5.
In step S220, the computer according to an embodiment extracts the main vessels corresponding to the recognized surgical phase from the constructed three-dimensional vessel model and registers them to a navigation image.
The navigation image to which the main vessels are registered may be the captured surgical image or an object image three-dimensionally modeled using the medical image data.
An example of registering a vascular navigation image according to an embodiment is described below with reference to FIG. 6.
In step S230, the computer according to an embodiment provides the registered navigation image to the user. Here, in the navigation image, vessels branching from the main vessels may be added or removed along the vessel flow based on a user input.
An example of a method of providing the vascular navigation image to the user and manipulating it is described below with reference to FIG. 7 and FIGS. 8A and 8B.
FIGS. 3A to 3C are diagrams for explaining a vessel segmentation method according to an embodiment of the present invention.
Referring to FIG. 3A, the computer according to an embodiment of the present invention generates 3D image data 301 of the object (organs and blood vessels) by converting the medical image data (a CT scan) into a 3D object. More specifically, veins and arteries are modeled based on the medical image data, and vessel types are then assigned sequentially at the vessel branch points.
At this time, labeling may be performed to separate the main vessels for each surgical phase, and machine learning may be performed on each labeled vessel.
Taking gastric cancer surgery as an example, the names of the organs and vessels appearing in the surgical image are listed, and the listed names are tagged onto the image so that each learning model can perform machine learning. Each organ and vessel can then be segmented using the machine-learned vessel learning model. Accordingly, as shown in FIG. 3B, the user can view a list 302 of vessels labeled with organ and vessel names and, further, as shown in FIG. 3C, an image 303 tagged with those names.
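One conventional way to turn segmented 2D slices into a 3D vessel object — offered here only as a generic technique under stated assumptions, not the disclosed method — is to stack per-slice vessel masks into a volume and extract a surface with marching cubes:

```python
# A minimal sketch, assuming a binary vessel mask predicted per CT slice,
# of stacking the slices into a volume and extracting a 3D surface mesh.
import numpy as np
from skimage import measure  # scikit-image, an assumed dependency

def masks_to_mesh(slice_masks: list):
    volume = np.stack(slice_masks).astype(np.float32)  # (slices, H, W)
    # Marching cubes yields a triangle mesh of the mask's 0.5 iso-surface.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    return verts, faces
```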
As an example, FIG. 4 is a diagram for explaining the modeling of vessel branching according to an embodiment of the present invention.
Referring to FIG. 4, the computer according to an embodiment of the present invention may separate the vessels from the 3D image through the vessel learning model and then generate a point at each branch point 401 of each vessel in order to label the vessel name. Accordingly, in the intraoperative vascular navigation method according to an embodiment, because the vessel learning model machine-learns the vessels with respect to their branch points, vessels can be displayed as added or removed at those branch points when the vascular navigation image is provided to the user.
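A simple way to realize "a point at each branch point" — again a generic sketch under assumptions, not the disclosed model — is to treat the skeletonized vessel as a graph and mark every node with three or more neighbors:

```python
# A toy sketch: in a skeletonized vessel graph, nodes with degree >= 3 are
# bifurcations; each gets a point that can carry a vessel-name label.
def find_branch_points(adjacency: dict) -> list:
    # adjacency: node -> iterable of directly connected skeleton nodes
    return [node for node, nbrs in adjacency.items() if len(nbrs) >= 3]

# Example with an invented mini-graph: node "b" is a branch point.
graph = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b"], "d": ["b"]}
labels = {node: f"vessel_branch_{i}"
          for i, node in enumerate(find_branch_points(graph))}
print(labels)  # {'b': 'vessel_branch_0'}
```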
In addition, the computer according to an embodiment of the present invention may find and model more accurately the main vessels, the vessels branching from the main vessels, and the vessel flow, based on user-specific information obtained from clinical data and on the two-dimensional medical image data.
Accordingly, through such machine learning, the locations of blood vessels, which differ from person to person, can be recognized accurately from the medical image data, and the vessel locations can be indicated to the user accurately during subsequent surgery.
FIG. 5 is a flowchart for explaining a surgical phase learning model according to an embodiment of the present invention.
Referring to FIG. 5, in step S500 the computer selects the type of surgery. The type of surgery may be selected automatically by recognizing an object in the input image, or may be entered directly by the user.
Next, in step S510, the computer defines the operation as phase-by-phase labels using actual surgical data.
Each surgical phase may be segmented automatically by recognizing camera movement, instrument movement, and organs in the images, or the user may classify images that match a standardized surgical process as training images and train the model in advance.
For example, in the case of gastric cancer surgery, the surgical phases may be defined as roughly 21 phases, such as a preparation phase, a phase of exposing the organs, a phase of cutting the blood vessels, a phase of cutting at the lower left of the stomach, a phase of cutting at the lower right of the stomach, and a phase of cutting the junction between the duodenum and the stomach.
Next, in step S520, the computer selects the training images corresponding to the defined labels. The training images may be selected by the computer automatically segmenting the input surgical image, or the user may select and input training images in advance based on actual surgical data.
Next, in step S530, the computer may perform machine learning using a convolutional neural network based on the selected training images. For example, when the frames of the surgical images of each phase defined in this way are learned with an action recognition model such as a SlowFast network, a highly accurate surgical phase learning model can be trained.
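For reference, a pretrained SlowFast backbone can be obtained from the pytorchvideo model zoo and re-headed for phase classification. This setup is an assumption based on that library's public API, not the training procedure used here; packing the input into slow/fast pathway tensors follows the pytorchvideo documentation.

```python
# A hedged sketch: load a SlowFast R50 backbone from pytorchvideo's model
# zoo and replace its classification head with one over 21 surgical phases.
import torch

model = torch.hub.load('facebookresearch/pytorchvideo',
                       'slowfast_r50', pretrained=True)
head = model.blocks[-1]  # ResNetBasicHead with a final projection layer
head.proj = torch.nn.Linear(head.proj.in_features, 21)
# Training then proceeds with per-phase labeled clips; the model expects a
# list of two tensors [slow_pathway, fast_pathway] as input.
```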
The surgical phase learning model trained in this way is used to recognize the surgical phase more accurately and automatically during surgery.
FIG. 6 is a diagram for explaining an example of registering a vascular navigation image according to an embodiment of the present invention.
According to an embodiment of the present invention, the computer may collect position information of the surgical image screen and synchronize (SYNC) the three-dimensionally modeled main vessels and the object to the surgical image based on the surgical phase and the position information.
Here, the navigation image to which the vessels are registered may be the captured surgical image or an image of the object (organs and blood vessels) three-dimensionally modeled using the medical image data.
For example, as in the image shown in screen 602 of FIG. 6, the computer may extract from the 3D vessel model the main vessels and the object of the surgical phase recognized in the gastric cancer surgery image 601, register the extracted main vessels with the three-dimensionally modeled object image, and provide the result to the user.
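One generic way to keep a 3D model synchronized with the live 2D view — sketched here under the assumption that camera intrinsics K and pose (R, t) can be estimated from the collected screen position information — is a standard pinhole projection of the model's vessel points onto the image:

```python
# A minimal sketch of projecting 3D vessel points into the surgical image
# plane so the overlay tracks the view; K, R, t are assumed to be estimated
# from the collected position information of the surgical image screen.
import numpy as np

def project_points(points_3d: np.ndarray, K: np.ndarray,
                   R: np.ndarray, t: np.ndarray) -> np.ndarray:
    cam = R @ points_3d.T + t.reshape(3, 1)   # model/world -> camera frame
    uv = (K @ cam).T                          # camera frame -> image plane
    return uv[:, :2] / uv[:, 2:3]             # perspective divide -> (N, 2)
```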
FIG. 7 is a diagram for explaining an example of modeling vessel flow according to an embodiment of the present invention.
Referring to FIG. 7, the three-dimensional vessel model according to an embodiment may be modeled so that the surrounding vessels can be navigated along the vessel flow with respect to the main vessels. This is possible because the three-dimensional vessel model according to an embodiment labels and learns each vessel branching from the main vessels, so that for each vessel recognized from the medical image data, flow and depth can be classified with respect to the branch points.
Accordingly, as shown in FIG. 7, the computer defines each step of the vessel display by depth so that guidance to the surrounding vessels can be performed around the selected main vessel 701, and determines the vessel flow for each depth (702). Then, according to the user's selection, the surrounding vessels can be added or removed along the flow, depth by depth, in the display.
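The depth-by-depth behavior can be pictured as a depth-limited traversal of a branch graph rooted at the selected main vessel. The sketch below uses an invented adjacency table; the vessel names are illustrative only.

```python
# A toy sketch of depth-limited traversal over a vessel branch graph, so
# that branches are shown or hidden per depth from a selected main vessel.
from collections import deque

VESSEL_GRAPH = {  # assumed adjacency: vessel -> downstream branches
    "celiac_trunk": ["common_hepatic", "splenic", "left_gastric"],
    "common_hepatic": ["proper_hepatic", "gastroduodenal"],
}

def vessels_up_to_depth(root: str, max_depth: int) -> list:
    seen, queue, shown = {root}, deque([(root, 0)]), []
    while queue:
        name, depth = queue.popleft()
        shown.append((name, depth))
        if depth < max_depth:
            for child in VESSEL_GRAPH.get(name, []):
                if child not in seen:
                    seen.add(child)
                    queue.append((child, depth + 1))
    return shown

print(vessels_up_to_depth("celiac_trunk", 1))
```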
FIGS. 8A and 8B show an example of a vascular navigation image according to an embodiment of the present invention.
As shown in FIG. 8A, the computer may display only the main vessels 801, synchronized to the surgical image, on the navigation image screen.
Also as shown in FIG. 8A, a user interface that, according to the user's selection, removes (802) or renders transparent (804) the organs or surrounding vessels in the initial registration image 802 can provide the vascular navigation image the user needs at the given surgical phase. In addition, vessels branching from the main vessels along the vessel flow may be added to the display step by step.
In addition, the computer may identify a plurality of points of interest (e.g., main blood vessels) through analysis of the surgical requirements included in the information necessary for the surgery, reconstruct the identified points of interest, and display a list of them on the vascular navigation image; when a specific point of interest is selected from the list, the computer may move the scene (or viewpoint) of the vascular navigation image to the scene (or viewpoint) of the selected point of interest and highlight that point of interest within the moved scene.
Here, the highlighting makes the specific point of interest stand out within its scene, and may use at least one of, or a combination of two or more of, the following: a first highlighting scheme that zooms in on the point of interest at a preset magnification; a second highlighting scheme that displays the point of interest in a preset color; a third highlighting scheme that blinks the outline of the point of interest; a fourth highlighting scheme that displays, on the point of interest, text indicating it; and a fifth highlighting scheme that displays the point of interest from a plurality of angles. That is, the computer can highlight the parts of the virtual body model that require careful attention during the surgical procedure.
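The five schemes enumerate naturally; the sketch below is a trivial dispatch skeleton with illustrative names, where the print call stands in for actual rendering.

```python
# A toy sketch enumerating the five highlighting schemes described above;
# names are illustrative assumptions, and printing stands in for rendering.
from enum import Enum, auto

class Highlight(Enum):
    ZOOM_IN = auto()        # zoom in on the POI at a preset magnification
    PRESET_COLOR = auto()   # render the POI in a preset color
    BLINK_OUTLINE = auto()  # blink the POI's outline
    TEXT_LABEL = auto()     # overlay text naming the POI
    MULTI_ANGLE = auto()    # show the POI from several angles

def apply_highlights(poi: str, modes: set) -> None:
    for mode in modes:  # schemes may be combined, per the description above
        print(f"highlight {poi} via {mode.name}")

apply_highlights("left_gastric_artery_origin",
                 {Highlight.ZOOM_IN, Highlight.TEXT_LABEL})
```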
Here, the parts of the virtual body model that require careful attention during the surgical procedure may be the branch point of a specific vessel and the starting point of a specific vessel. As one example, the position of the starting point of a specific vessel entering a specific organ may be displayed using the highlighting schemes described above.
The computer may predefine the main parts of the virtual body model that require careful attention during the surgical procedure, and show the user a list of items of the defined main parts.
Through this list, the user can identify, before or during surgery, the parts of the virtual body model that require careful attention; this compensates for the limitations of minimally invasive surgery, in which the field of view is restricted, and minimizes damage to the related organs and vessels as well as postoperative side effects.
The steps of the methods or algorithms described in relation to embodiments of the present invention may be implemented directly in hardware, as a software module executed by hardware, or by a combination of the two. The software module may reside in RAM (Random Access Memory), ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art to which the present invention pertains.
While embodiments of the present invention have been described above with reference to the accompanying drawings, those of ordinary skill in the art will understand that the present invention may be embodied in other specific forms without changing its technical spirit or essential features. The embodiments described above should therefore be understood as illustrative in all respects and not restrictive.

Claims (13)

  1. A method for intraoperative vascular navigation performed by a system, the method comprising:
    constructing a three-dimensional blood vessel model from two-dimensional medical image data of an object using a vessel learning model;
    recognizing, in a captured surgical image, a surgical phase of the object using a surgical phase learning model;
    extracting main blood vessels corresponding to the recognized surgical phase from the constructed three-dimensional vessel model and registering them to a navigation image; and
    providing the navigation image to a user,
    wherein the navigation image includes blood vessels branching from the main vessels, and
    wherein the branched vessels can be added or removed based on a user input.
  2. The method of claim 1, wherein the navigation image to which the main vessels are registered includes an image in which the object is three-dimensionally modeled using the medical image data, or the captured surgical image.
  3. The method of claim 1, wherein the surgical phase learning model is machine-learned by defining, as labels, at least one phase required for the surgery of the object using surgical data, and by inputting training images for each defined label.
  4. The method of claim 3, wherein the surgical data include tracking data on the movement of a robot arm or data obtained from surgical image frames.
  5. The method of claim 1, wherein the vessel learning model models veins and arteries based on the medical image data and then assigns vessel types by sequentially applying the vessel branch points.
  6. The method of claim 1, wherein registering the main vessels to the navigation image comprises:
    collecting position information of the surgical image screen; and
    synchronizing (SYNC) the three-dimensionally modeled main vessels and the object to the surgical image based on the surgical phase and the position information.
  7. The method of claim 1, wherein providing the navigation image comprises highlighting the extracted main vessels within the navigation image.
  8. The method of claim 7, wherein highlighting the main vessels comprises:
    displaying, on the navigation image, a list including items of the extracted main vessels;
    when a specific item is selected from the list, moving the scene of the navigation image to the scene of the main vessel corresponding to the selected item; and
    highlighting, within the moved scene, the main vessel corresponding to the selected item.
  9. The method of claim 8, wherein highlighting the main vessels uses at least one of, or a combination of two or more of: a first highlighting scheme that zooms in on the main vessel at a preset magnification; a second highlighting scheme that displays the main vessel in a preset color; a third highlighting scheme that blinks the outline of the main vessel; a fourth highlighting scheme that displays, on the main vessel, text indicating the main vessel; and a fifth highlighting scheme that displays the main vessel from a plurality of angles.
  10. A computer program stored in a computer-readable recording medium and coupled to a computer, which is hardware, to perform an intraoperative vascular navigation method, the computer program performing:
    a process of constructing a three-dimensional blood vessel model from two-dimensional medical image data of an object using a vessel learning model;
    a process of recognizing, in a captured surgical image, a surgical phase of the object using a surgical phase learning model;
    a process of extracting main blood vessels corresponding to the recognized surgical phase from the constructed three-dimensional vessel model and registering them to a navigation image; and
    a process of providing the navigation image to a user,
    wherein the navigation image includes blood vessels branching from the main vessels, and
    wherein the branched vessels can be added or removed based on a user input.
  11. An intraoperative vascular navigation system comprising:
    medical imaging equipment for capturing a surgical image;
    a display unit for providing a surgical navigation image to a user; and
    a controller including one or more processors and one or more memories storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations,
    wherein the operations performed by the controller include:
    an operation of constructing a three-dimensional blood vessel model from two-dimensional medical image data of an object using a vessel learning model;
    an operation of recognizing, in the captured surgical image, a surgical phase of the object using a surgical phase learning model;
    an operation of extracting main blood vessels corresponding to the recognized surgical phase from the constructed three-dimensional vessel model and registering them to a navigation image; and
    an operation of providing the navigation image to the user,
    wherein the navigation image includes blood vessels branching from the main vessels, and
    wherein the branched vessels can be added or removed based on a user input.
  12. The system of claim 11, wherein the navigation image to which the main vessels are registered includes the captured surgical image or a three-dimensional object image modeled from the two-dimensional medical image data of the object.
  13. The system of claim 11, wherein the operations performed by the controller further include:
    an operation of collecting position information of the captured surgical image screen; and
    an operation of synchronizing (SYNC) the three-dimensionally modeled main vessels and the object to the surgical image based on the recognized surgical phase and the position information.
PCT/KR2021/004532 2020-04-10 2021-04-09 Intraoperative vascular navigation method and system WO2021206517A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200043786A KR102457585B1 (en) 2020-04-10 2020-04-10 Method and system for navigating vascular during surgery
KR10-2020-0043786 2020-04-10

Publications (1)

Publication Number Publication Date
WO2021206517A1 true WO2021206517A1 (en) 2021-10-14

Family

ID=78022923

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/004532 WO2021206517A1 (en) 2020-04-10 2021-04-09 Intraoperative vascular navigation method and system

Country Status (2)

Country Link
KR (1) KR102457585B1 (en)
WO (1) WO2021206517A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187596A (en) * 2022-09-09 2022-10-14 中国医学科学院北京协和医院 Neural intelligent auxiliary recognition system for laparoscopic colorectal cancer surgery

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015500122A (en) * 2011-12-16 2015-01-05 コーニンクレッカ フィリップス エヌ ヴェ Automatic blood vessel identification by name
KR20150004538A (en) * 2013-07-03 2015-01-13 현대중공업 주식회사 System and method for setting measuring direction of surgical navigation
KR20150113929A (en) * 2014-03-31 2015-10-08 주식회사 코어메드 Method for Providing Training of Image Guided Surgery and Computer-readable Recording Medium for the same
KR20190004591A (en) * 2017-07-04 2019-01-14 경희대학교 산학협력단 Navigation system for liver disease using augmented reality technology and method for organ image display
WO2019164274A1 (en) * 2018-02-20 2019-08-29 (주)휴톰 Training data generation method and device


Also Published As

Publication number Publication date
KR102457585B1 (en) 2022-10-21
KR20210126243A (en) 2021-10-20

Similar Documents

Publication Publication Date Title
KR102014359B1 (en) Method and apparatus for providing camera location using surgical video
WO2016126056A1 (en) Medical information providing apparatus and medical information providing method
Gsaxner et al. The HoloLens in medicine: A systematic review and taxonomy
US10162935B2 (en) Efficient management of visible light still images and/or video
WO2018093124A2 (en) Customized surgical guide and customized surgical guide generating method and generating program
WO2016125978A1 (en) Method and apparatus for displaying medical image
JP5504028B2 (en) Observation support system, method and program
WO2019132165A1 (en) Method and program for providing feedback on surgical outcome
JP2015186567A (en) Medical image processing apparatus and medical image processing system
JP2010075403A (en) Information processing device and method of controlling the same, data processing system
WO2018097596A1 (en) Radiography guide system and method
WO2021206518A1 (en) Method and system for analyzing surgical procedure after surgery
WO2019132244A1 (en) Method for generating surgical simulation information and program
KR102146672B1 (en) Program and method for providing feedback about result of surgery
WO2022191575A1 (en) Simulation device and method based on face image matching
WO2021206517A1 (en) Intraoperative vascular navigation method and system
WO2010128818A2 (en) Medical image processing system and processing method
JP2014064722A (en) Virtual endoscopic image generation apparatus, virtual endoscopic image generation method, and virtual endoscopic image generation program
WO2019164273A1 (en) Method and device for predicting surgery time on basis of surgery image
EP4376402A1 (en) Information processing system, information processing method, and program
WO2019132166A1 (en) Method and program for displaying surgical assistant image
WO2022145988A1 (en) Apparatus and method for facial fracture reading using artificial intelligence
WO2020159276A1 (en) Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical image
WO2022108387A1 (en) Method and device for generating clinical record data
WO2018147674A1 (en) Apparatus and method for diagnosing medical condition on basis of medical image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21785265

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21785265

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27/03/2023)