WO2021206517A1 - Method and system for intraoperative vascular navigation - Google Patents

Method and system for intraoperative vascular navigation

Info

Publication number
WO2021206517A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
surgical
blood vessel
navigation
blood vessels
Prior art date
Application number
PCT/KR2021/004532
Other languages
English (en)
Korean (ko)
Inventor
김하진
허성환
박성현
Original Assignee
(주)휴톰
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)휴톰
Publication of WO2021206517A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/25: User interfaces for surgical systems
    • A61B 90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36: Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37: Surgical systems with images on a monitor during operation
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2065: Tracking using image or pattern recognition
    • A61B 2034/254: User interfaces for surgical systems adapted depending on the stage of the surgical procedure
    • A61B 2090/364: Correlation of different images or relation of image positions in respect to the body
    • A61B 2090/367: Correlation of different images or relation of image positions in respect to the body, creating a 3D dataset from 2D images using position information

Definitions

  • The present invention relates to a method and system for vascular navigation during surgery. More specifically, it recognizes the individual phases of an intraoperative image through a learning model and provides a 3D-modeled screen in which the patient's blood-vessel image, extracted from medical image data (e.g., a CT image), is superimposed on the recognized phase-specific surgical image.
  • The problem to be solved by the present invention is to recognize the individual phases of an intraoperative image through a surgical-stage learning model, and to provide a 3D-modeled screen in which the patient's blood-vessel image, extracted from medical image data (e.g., a CT image), is overlaid on the recognized phase-specific surgical image.
  • The intraoperative vascular navigation method performed by the system includes the steps of: constructing a three-dimensional blood-vessel model from two-dimensional medical image data of an object using a blood-vessel learning model; recognizing the surgical stage of the object in a photographed surgical image using a surgical-stage learning model; extracting the major blood vessels corresponding to the recognized surgical stage from the constructed 3D vessel model and matching them to a navigation image; and providing the navigation image to a user. The navigation image includes blood vessels branching from the major vessels, and the branched vessels can be added or removed based on user input.
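  • As an illustration only, the four claimed steps can be sketched as a small Python pipeline. Every name below (NavigationImage, run_navigation, build_3d, recognize, vessels_for_stage, toggle_branch) is a hypothetical placeholder, not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class NavigationImage:
    major_vessels: list                              # vessels matched for the current stage
    branches: list = field(default_factory=list)     # user-toggleable branch vessels

def run_navigation(ct_slices, surgical_frame, vessel_model, stage_model):
    """One iteration of the claimed navigation loop (illustrative sketch)."""
    vessel_tree = vessel_model.build_3d(ct_slices)   # step 1: 2D CT slices -> 3D vessel model
    stage = stage_model.recognize(surgical_frame)    # step 2: recognize the surgical stage
    majors = vessel_tree.vessels_for_stage(stage)    # step 3: extract stage-specific major vessels
    return NavigationImage(major_vessels=majors)     # step 4: matched image provided to the user

def toggle_branch(nav: NavigationImage, branch, show: bool):
    """Branch vessels can be added or removed on user input."""
    if show and branch not in nav.branches:
        nav.branches.append(branch)
    elif not show and branch in nav.branches:
        nav.branches.remove(branch)
```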
  • The navigation image to which the major blood vessels are matched may be an image in which the object is 3D-modeled using the medical image data or the photographed surgical image.
  • The surgical-stage learning model may be machine-learned by defining, using surgical data, at least one stage required for the operation on the object as a label and inputting training images for each defined label. The surgical data may include tracking data for the movement of a robot arm or data obtained from surgical image frames.
  • The blood-vessel learning model may assign vessel types by sequentially applying vessel bifurcation points after modeling the veins and arteries from the medical image data.
  • Matching the major blood vessels to the navigation image may include collecting position information of the surgical image screen and synchronizing (SYNC) the 3D-modeled major vessels and the object to the surgical image based on the surgical stage and the position information.
  • Providing the navigation image may include highlighting the extracted major blood vessels in the navigation image.
  • Highlighting the major blood vessels may include displaying, on the navigation image, a list containing items for the extracted major vessels; when a specific item is selected from the list, the scene of the navigation image moves to the scene of the major vessel corresponding to the selected item, and that vessel is highlighted within the moved scene.
  • Highlighting the major blood vessels may use at least one of, or a combination of two or more of: a first highlight method that zooms in on the major vessels at a preset magnification, a second that displays them in a preset color, a third that blinks their outline, a fourth that overlays text identifying them, and a fifth that displays them from a plurality of angles.
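  • The five methods combine naturally as bit flags. A minimal sketch follows, assuming nothing about the actual renderer:

```python
from enum import Flag, auto

class Highlight(Flag):
    ZOOM = auto()         # first method: zoom in at a preset magnification
    COLOR = auto()        # second: draw the vessel in a preset color
    BLINK_EDGE = auto()   # third: blink the vessel outline
    LABEL_TEXT = auto()   # fourth: overlay text naming the vessel
    MULTI_ANGLE = auto()  # fifth: show the vessel from several angles

def render_highlight(vessel_name: str, mode: Highlight) -> str:
    """Stub renderer: report which effects would be applied to the vessel."""
    effects = [m.name for m in Highlight if m in mode]
    return f"{vessel_name}: {' + '.join(effects)}"

# the claim allows combining two or more methods, e.g. zoom plus color
print(render_highlight("left gastric artery", Highlight.ZOOM | Highlight.COLOR))
```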
  • The present invention for solving the above problems is also a computer program, stored in a computer-readable recording medium and combined with a computer (i.e., hardware), for performing the intraoperative vascular navigation method. The computer program performs: a process of constructing a three-dimensional blood-vessel model from the two-dimensional medical image data of an object using a blood-vessel learning model; a process of recognizing the surgical stage of the object in a photographed surgical image using a surgical-stage learning model; a process of extracting the major blood vessels corresponding to the recognized surgical stage from the constructed 3D vessel model and matching them to a navigation image; and a process of providing the navigation image to a user. The navigation image includes blood vessels branching from the major vessels, and the branched vessels may be added or removed based on user input.
  • An intraoperative vascular navigation system for solving the above problems includes: a medical imaging device for capturing a surgical image; a display unit for providing the surgical navigation image to a user; and a control unit including one or more processors and at least one memory storing instructions that, when executed by the one or more processors, cause them to perform operations. The operations performed by the control unit include: constructing a three-dimensional blood-vessel model from the two-dimensional medical image data of an object using a blood-vessel learning model; recognizing the surgical stage of the object in the photographed surgical image using a surgical-stage learning model; extracting the major blood vessels corresponding to the recognized surgical stage from the constructed 3D vessel model and matching them to a navigation image; and providing the navigation image to a user. The navigation image includes blood vessels branching from the major vessels, and the branched vessels may be added or removed based on user input.
  • According to the disclosed embodiments, various information can be provided to the user by matching the 3D-modeled blood-vessel image to the stage-specific surgical image recognized during surgery.
  • FIG. 1 is a view showing a robotic surgery system according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating a method for vascular navigation during surgery according to an embodiment of the present invention.
  • FIGS. 3A to 3C are diagrams for explaining a blood vessel segmentation method according to an embodiment of the present invention.
  • FIG. 4 is a view for explaining an example of modeling a branch of a blood vessel according to an embodiment of the present invention.
  • FIG. 5 is a flowchart for explaining a surgical step learning model according to an embodiment of the present invention.
  • FIG. 6 is a diagram for explaining an example of matching blood vessel navigation images according to an embodiment of the present invention.
  • FIG. 7 is a view for explaining an example of modeling blood vessel flow according to an embodiment of the present invention.
  • FIGS. 8A and 8B are diagrams for explaining an example of a blood vessel navigation image according to an embodiment of the present invention.
  • An “image” may mean multi-dimensional data composed of discrete image elements (e.g., pixels in a 2D image and voxels in a 3D image).
  • For example, the image may include a medical image of the object obtained by a CT imaging apparatus.
  • An “object” may be a human or an animal, or a part or all of a human or animal.
  • The object may include at least one of organs, such as the liver, heart, uterus, brain, breast, and abdomen, and blood vessels.
  • A “user” may be a medical professional such as a doctor, a nurse, a clinical pathologist, or a medical imaging specialist, or may be a technician who repairs medical devices, but is not limited thereto.
  • “Medical image data” is a medical image captured by a medical imaging device and includes any medical image from which a three-dimensional model of the object's body can be built.
  • Medical image data may include computed tomography (CT) images, magnetic resonance imaging (MRI) images, positron emission tomography (PET) images, and the like.
  • The term “virtual body model” refers to a model generated to match the actual patient's body, based on medical image data.
  • The “virtual body model” may be generated by modeling the medical image data in 3D as it is, or it may be corrected after modeling so as to match the conditions of the actual surgery.
  • “Virtual surgical data” refers to data including rehearsal or simulation actions performed on a virtual body model. It may be image data of a rehearsal or simulation performed on the virtual body model in a virtual space, or data recording a surgical operation performed on that model. “Virtual surgical data” may also include training data for learning the surgical learning model.
  • “Actual surgical data” refers to data obtained while actual medical staff perform surgery. It may be image data of the surgical site captured during an actual surgical procedure, or data recording the surgical operations performed during that procedure.
  • A “surgical phase” refers to one of the basic phases performed sequentially within the overall procedure of a specific type of operation.
  • The term “computer” includes any device capable of performing arithmetic processing and providing results to a user.
  • Computers include not only desktop PCs and notebooks but also smartphones, tablet PCs, cellular phones, PCS (Personal Communication Service) phones, synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminals, palm PCs, and personal digital assistants (PDAs).
  • When a head-mounted display (HMD) device includes a computing function, the HMD device may itself be the computer.
  • The computer may also be a server or a navigation system that receives requests from a client and performs information processing.
  • FIG. 1 is a view showing a robotic surgery system according to an embodiment of the present invention.
  • Referring to FIG. 1, a schematic diagram of a system capable of performing robotic surgery according to an embodiment is shown.
  • The robotic surgery system includes a medical image capturing device 10, a navigation system 20, and a control unit 30 provided in the operating room, together with an image capturing unit 36, a display 32, and a surgical robot 34.
  • The medical imaging equipment 10 may be omitted from the robotic surgery system according to an embodiment.
  • Robotic surgery may be performed by the user controlling the surgical robot 34 through the control unit 30, or it may be performed automatically by the control unit 30 without the user's manipulation.
  • The navigation system 20 is a computing device including at least one processor, a memory, and a communication unit.
  • The navigation system 20 may include a data-processing server outside the operating room and a graphics-processing terminal device inside the operating room, or it may include only the graphics-processing terminal device.
  • The control unit 30 is a computing device including at least one processor, a memory, and a communication unit.
  • The control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
  • The control unit 30 may be divided into a user console device and a control device.
  • The image capturing unit 36 includes at least one image sensor. That is, it includes at least one camera device and is used to photograph the surgical site. In one embodiment, the image capturing unit 36 is used in combination with the surgical robot 34; for example, it may include at least one camera coupled to a surgical arm of the surgical robot 34.
  • The image captured by the image capturing unit 36 is displayed on the display 32.
  • The control unit 30 receives the information necessary for surgery from the navigation system 20, or generates such information itself, and provides it to the user. For example, the control unit 30 displays the generated or received information on the display 32.
  • The user performs robotic surgery by manipulating the control unit 30 while viewing the display 32, thereby controlling the movement of the surgical robot 34.
  • The navigation system 20 generates the information needed for robotic surgery from the medical image data of the object (patient) captured in advance by the medical imaging device 10, and provides the generated information to the control unit 30.
  • The control unit 30 may provide the information received from the navigation system 20 to the user by displaying it on the display 32, and may control the operation of the surgical robot 34 using the received information.
  • The means usable as the medical imaging equipment 10 is not limited; for example, CT, X-ray, PET, MRI, and various other medical image acquisition means may be used.
  • Each step is described below as being performed by a “computer” for convenience of explanation, but the subject performing each step is not limited to a specific device; all or part of the steps may be performed by the navigation system 20 or the control unit 30.
  • The surgical image captured by the medical imaging device 10 may be segmented according to various criteria. For example, it may be segmented based on the types of objects included in the image; this segmentation method requires the computer to recognize each object.
  • The objects recognized in a surgical image largely comprise parts of the human body, objects introduced from the outside, and objects created internally.
  • The human body includes body parts that are imaged by medical imaging (e.g., CT) prior to surgery and body parts that are not.
  • Body parts captured by medical imaging include organs, blood vessels, bones, tendons, and the like; these may be recognized based on a 3D modeling image generated from the medical image.
  • The position, size, shape, and so on of each such body part are recognized in advance by 3D analysis of the medical image.
  • The computer defines an algorithm that can determine, in real time, the position of each body part corresponding to the surgical image; based on this, information on the position, size, and shape of each body part included in the surgical image can be obtained without performing separate image recognition.
  • Body parts not captured by medical imaging, such as the omentum, must instead be recognized in real time during surgery.
  • The computer may determine the position and size of the omentum through image recognition, and if a blood vessel runs inside the omentum, the location of that vessel may also be predicted.
  • Objects introduced from the outside include, for example, surgical tools, gauze, and clips. Since these have preset morphological characteristics, the computer can recognize them in real time through image analysis during surgery.
  • Objects created internally include, for example, bleeding occurring at a body part. These can also be recognized in real time by the computer through image analysis during surgery.
  • The movement of organs or the omentum, and the causes of internally created objects, generally result from the movement of objects introduced from the outside.
  • The surgical image may be divided into several surgical phases based on the movement of each object.
  • In particular, the surgical image may be segmented based on the movements of externally introduced objects, that is, based on actions.
  • The computer determines the type of each object recognized in the surgical image and the movement of each object, so that actions can be recognized.
  • The computer can recognize the type of each action and, further, the cause of each action.
  • The computer can segment the surgical image based on the recognized actions and, through stepwise segmentation, recognize everything from each detailed surgical operation up to the type of the entire operation.
  • For example, the computer extracts feature information from the captured surgical image using a surgical-stage learning model trained by convolutional-neural-network (CNN) machine learning, and based on that feature information it can segment the images by surgical stage or recognize which stage of the operation is currently in progress.
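  • As a concrete illustration of per-frame recognition, the sketch below classifies one frame with a small convolutional network in PyTorch. The architecture and the 21-stage output size are assumptions for illustration, not the patent's model.

```python
import torch
import torch.nn as nn

class StageCNN(nn.Module):
    """Minimal per-frame surgical-stage classifier (illustrative only)."""
    def __init__(self, num_stages: int = 21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # global feature information
        )
        self.classifier = nn.Linear(32, num_stages)

    def forward(self, x):                            # x: (batch, 3, H, W) surgical frames
        return self.classifier(self.features(x).flatten(1))

model = StageCNN()
frame = torch.randn(1, 3, 224, 224)                  # one dummy surgical frame
stage_index = model(frame).argmax(dim=1)             # recognized stage for this frame
```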
  • From the determined actions, the computer may determine a predefined type of operation corresponding to the surgical image.
  • By determining the type of surgery, information on the entire surgical process may be acquired.
  • One surgical process may be selected according to a doctor's choice or based on the actions recognized up to a specific point in time.
  • The computer may recognize and predict surgical stages based on the acquired surgical process. For example, when a specific step in the series of surgical processes is recognized, the subsequent steps may be predicted, or candidates for the possible next steps may be selected.
  • Accordingly, the computer extracts navigation information about the main vessels corresponding to each surgical stage, and about the vessels branching from those main vessels along the blood flow, thereby assisting the user in performing effective surgery.
  • The computer may determine whether bleeding has occurred due to a surgical error, based on image recognition at each surgical stage.
  • The computer can determine the location, time, and magnitude of each bleeding event, and whether the operation should be stopped because of the bleeding. Accordingly, in an embodiment, the computer may provide data on error and bleeding situations in a surgical result report, helping to exclude unnecessary operations or mistakes from the surgical process and to streamline it.
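  • One plausible approach to flagging bleeding in a frame (an assumption for illustration, not the patent's stated method) is a color heuristic: mask strongly red pixels in HSV space and report the location and size of each large region.

```python
import cv2
import numpy as np

def detect_bleeding(frame_bgr: np.ndarray, min_area: int = 500):
    """Heuristic sketch: large, strongly red regions as bleeding candidates."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # red wraps around hue 0 in OpenCV's HSV, so combine two hue bands
    mask = (cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) |
            cv2.inRange(hsv, (170, 120, 70), (180, 255, 255)))
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    events = []
    for i in range(1, n):                            # label 0 is the background
        area = int(stats[i, cv2.CC_STAT_AREA])
        if area >= min_area:
            events.append({"centroid": tuple(centroids[i]), "area_px": area})
    return events                                    # location and magnitude per candidate
```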
  • The operations performed by the computer include: constructing a 3D blood-vessel model from the 2D medical image data of the object using the blood-vessel learning model; recognizing the surgical stage of the object in the photographed surgical image using the surgical-stage learning model; extracting the major blood vessels corresponding to the recognized surgical stage from the constructed 3D vessel model and matching them to the navigation image; and providing the navigation image to the user.
  • The operations may further include adding or removing a vessel branching from a main vessel based on user input.
  • The navigation image to which the major blood vessels are matched may be a 3D object image modeled from the photographed surgical image or from the two-dimensional medical image data of the object.
  • The computer collects position information of the captured surgical image screen and performs an operation of synchronizing (SYNC) the 3D-modeled major vessels and the object to the surgical image based on the recognized surgical stage and the position information.
  • FIG. 2 is a flowchart illustrating a method for vascular navigation during surgery according to an embodiment of the present invention.
  • Each of the steps shown in FIG. 2 is performed in time series by the navigation system 20 or the control unit 30 shown in FIG. 1.
  • Each step is described as being performed by a “computer” for convenience of explanation, but the subject performing each step is not limited to a specific device; all or part of the steps may be performed by the navigation system 20 or the control unit 30.
  • In step S200, the computer according to an embodiment builds a 3D blood-vessel model from the two-dimensional medical image data of the object using the blood-vessel learning model.
  • The blood-vessel learning model may find and model the major vessels, the branched vessels, and the blood flow centered on the main vessels, based on user-specific information and two-dimensional medical image data obtained from clinical data.
  • The blood-vessel learning model is described later with reference to FIGS. 3A to 3C and 4.
  • In step S210, the computer according to an embodiment recognizes the surgical stage of the object in the photographed surgical image using the surgical-stage learning model.
  • The surgical-stage learning model may be machine-learned by defining, using virtual surgery data, at least one stage required for the operation on the object as a label and inputting training images for each defined label.
  • Recognizing the surgical stage of the object using the surgical-stage learning model may mean recognizing the stage by applying the trained model to actual surgical data.
  • The actual surgical data may include tracking data for the movement of the robot arm or data obtained from surgical image frames.
  • In step S220, the computer according to an embodiment extracts the major blood vessels corresponding to the recognized surgical stage from the constructed 3D blood-vessel model and matches them to the navigation image.
  • The navigation image to which the major vessels are matched may be an object image 3D-modeled using the captured surgical image or the medical image data.
  • In step S230, the computer according to an embodiment provides the matched navigation image to the user.
  • At this time, vessels branching from a main vessel may be added or removed along the blood flow based on user input.
  • An example of providing the blood-vessel navigation image to the user and manipulating it is described later with reference to FIGS. 7, 8A, and 8B.
  • FIGS. 3A to 3C are diagrams for explaining a blood vessel segmentation method according to an embodiment of the present invention.
  • Referring to FIG. 3A, the computer generates 3D image data 301 of the object (organs and blood vessels) through a process of converting the medical image data (CT images) into a 3D object. More specifically, after the veins and arteries are modeled from the medical image data, vessel types are assigned sequentially at the vascular branch points.
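  • As an assumed illustration of the CT-to-3D step (the patent does not specify the algorithm), contrast-enhanced voxels can be thresholded and turned into a surface mesh with marching cubes using scikit-image:

```python
import numpy as np
from skimage import measure

def vessels_to_mesh(ct_volume: np.ndarray, hu_threshold: float = 200.0):
    """Sketch: binarize bright (contrast-filled) vessels in a CT volume and
    extract a triangle mesh of their surface via marching cubes."""
    vessel_mask = (ct_volume > hu_threshold).astype(np.float32)
    verts, faces, normals, values = measure.marching_cubes(vessel_mask, level=0.5)
    return verts, faces                              # 3D surface of the vessel tree

# toy volume: one bright "vessel" column inside a dark background
vol = np.zeros((32, 32, 32), dtype=np.float32)
vol[:, 15:17, 15:17] = 400.0                         # roughly arterial contrast in HU
verts, faces = vessels_to_mesh(vol)
```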
  • Labeling for separating the major blood vessels may be performed for each surgical stage, and machine learning may be performed on each labeled vessel.
  • The names of the organs and blood vessels appearing in surgical images are listed, and the listed names are tagged onto the images so that each learning model can be machine-learned. Each organ and vessel can then be segmented using the machine-learned blood-vessel learning model. Accordingly, as shown in FIG. 3B, the user can check the list 302 of vessels labeled with organ and vessel names, and further, as shown in FIG. 3C, the image 303 tagged with those names.
  • FIG. 4 is a diagram for explaining an example of modeling a branch of a blood vessel according to an embodiment of the present invention.
  • The computer may separate each blood vessel from the 3D image through the blood-vessel learning model, generate a point for each branch point 401 of the vessel, and then label the vessel name. Because the blood-vessel learning model is machine-learned around branch points, when the vascular navigation image is provided to the user, vessels can be displayed so that they are added or removed at a branch point.
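  • Because the model is learned around branch points, the vessel tree is naturally a graph keyed by bifurcations. A minimal sketch of that structure (names illustrative, not from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class VesselSegment:
    name: str                                        # labeled vessel name
    children: list = field(default_factory=list)     # segments past the next branch point
    visible: bool = True

def set_branch_visible(segment: VesselSegment, show: bool):
    """Add or remove a whole branch from the display, rooted at a branch point."""
    segment.visible = show
    for child in segment.children:
        set_branch_visible(child, show)

# toy tree: a main vessel bifurcating into two branches
left = VesselSegment("left branch")
right = VesselSegment("right branch")
main = VesselSegment("main vessel", children=[left, right])
set_branch_visible(left, False)                      # hiding a branch hides its sub-branches
```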
  • In this way, the computer can more accurately find and model the major vessels, the branched vessels, and the blood flow centered on the main vessels, based on user-specific information and two-dimensional medical image data obtained from clinical data.
  • Through such machine learning, the locations of blood vessels, which differ from person to person, can be accurately recognized from the medical image data, and those locations can be accurately guided to the user during future surgery.
  • FIG. 5 is a flowchart for explaining a surgical step learning model according to an embodiment of the present invention.
  • In step S500, the computer selects the type of surgery.
  • The type of surgery may be selected automatically by recognizing the object or other objects in the input image, or it may be entered directly by the user.
  • In step S510, the computer defines each stage of the operation as a label using the actual surgical data.
  • Each surgical stage may be divided automatically by recognizing camera movement, instrument movement, and organs in the images, and the user may classify images that fit a standardized surgical process as training images and train the model on them in advance.
  • For example, an operation can be defined as a sequence of about 21 stages.
  • In step S520, the computer selects the training images corresponding to the defined labels.
  • The training images may be selected by the computer automatically segmenting the input surgical images, or the user may select and input them in advance based on actual surgical data.
  • Thereafter, the computer may perform machine learning with a convolutional neural network on the selected training images. For example, when the frames of each surgical image defined in this way are learned using an action-recognition model such as the SlowFast network, a highly accurate surgical-stage learning model can be trained.
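  • As one concrete possibility, a pretrained SlowFast model from the pytorchvideo torch.hub repository can be fine-tuned for stage classification. The sketch below assumes that hub entry point and head attribute (blocks[-1].proj); verify both against the installed pytorchvideo version.

```python
import torch
import torch.nn as nn

NUM_STAGES = 21                                      # illustrative stage count

# load a Kinetics-pretrained SlowFast backbone (downloads weights)
model = torch.hub.load("facebookresearch/pytorchvideo", "slowfast_r50",
                       pretrained=True)
head = model.blocks[-1].proj                         # final projection layer
model.blocks[-1].proj = nn.Linear(head.in_features, NUM_STAGES)

# SlowFast consumes two pathways: a sparsely sampled "slow" clip and a
# densely sampled "fast" clip of the same video segment (alpha = 4 here)
slow = torch.randn(1, 3, 8, 256, 256)
fast = torch.randn(1, 3, 32, 256, 256)
logits = model([slow, fast])                         # shape: (1, NUM_STAGES)
stage_index = logits.argmax(dim=1)
```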
  • The trained surgical-stage learning model is used to recognize the surgical stage during surgery more accurately and automatically.
  • FIG. 6 is a diagram for explaining an example of matching blood vessel navigation images according to an embodiment of the present invention.
  • Referring to FIG. 6, the computer may collect position information of the surgical image screen and synchronize (SYNC) the 3D-modeled major vessels and the object to the surgical image based on the surgical stage and the position information.
  • The navigation image to which the vessels are matched may be an image of the object (organs and blood vessels) 3D-modeled using the photographed surgical image or the medical image data.
  • For example, the computer extracts from the 3D vessel model the major vessels and the object for the surgical stage recognized in the gastric cancer surgery image 601, and then matches the extracted major vessels and the 3D-modeled image of the object to the surgical image and provides the result to the user.
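  • Synchronizing the 3D model with the 2D endoscopic view amounts to projecting model points with the current camera pose. A minimal pinhole-camera sketch follows; the intrinsics and pose values are illustrative assumptions, and a real system would estimate them from the recognized stage and the collected screen position information.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Pinhole sketch: map 3D vessel-model points into 2D image pixels
    given camera intrinsics K and pose (rotation R, translation t)."""
    cam = points_3d @ R.T + t                        # world -> camera coordinates
    uv = cam @ K.T                                   # camera -> homogeneous image plane
    return uv[:, :2] / uv[:, 2:3]                    # divide by depth

K = np.array([[800.0, 0.0, 320.0],                   # assumed focal lengths and
              [0.0, 800.0, 240.0],                   # principal point for a 640x480 view
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 50.0])         # assumed camera pose
vessel_pts = np.array([[0.0, 0.0, 0.0], [5.0, 2.0, 1.0]])
pixels = project_points(vessel_pts, K, R, t)         # overlay these onto the frame
```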
  • FIG. 7 is a view for explaining an example of modeling blood vessel flow according to an embodiment of the present invention.
  • A three-dimensional vessel model according to an embodiment of the present invention may be modeled so that surrounding vessels can be navigated along the blood flow, centered on the main vessels.
  • This is possible because the flow and depth of each vessel recognized from the medical image data can be classified based on the branch points.
  • Referring to FIG. 7, the computer defines steps for displaying vessels by depth so that guidance to the surrounding vessels can proceed centered on the selected main vessel 701, and determines the blood flow for each depth (702).
  • Accordingly, peripheral vessels may be added to or removed from the display according to the flow at each depth.
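  • A depth-limited traversal captures this add/remove behavior. In the sketch below the vessel tree is nested dicts keyed by name; the structure and names are illustrative assumptions.

```python
tree = {
    "main vessel": {
        "left branch": {"left sub-branch": {}},
        "right branch": {},
    }
}

def vessels_up_to_depth(subtree: dict, max_depth: int, depth: int = 0) -> list:
    """Collect vessel names outward from the main vessel, stopping at a
    user-chosen depth so peripheral vessels can be added or removed."""
    if depth > max_depth:
        return []
    names = []
    for name, children in subtree.items():
        names.append(name)
        names += vessels_up_to_depth(children, max_depth, depth + 1)
    return names

print(vessels_up_to_depth(tree, max_depth=1))        # main vessel + first-level branches
```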
  • FIGS. 8A and 8B show an example of a blood vessel navigation image according to an embodiment of the present invention.
  • The computer may display only the major blood vessels 801 synchronized with the surgical image on the navigation image screen, thereby providing the vascular navigation image that the user needs at the current surgical stage.
  • For example, the computer identifies a plurality of main parts of interest (e.g., major blood vessels) by analyzing the surgical requirements included in the information needed for the surgery, and reconstructs the identified main parts. It then displays a list of these main parts on the vascular navigation image; when a specific main part is selected from the list, the scene (or viewpoint) of the navigation image moves to the scene (or viewpoint) of the selected part, and that part is highlighted within the moved scene.
  • The highlight display makes the specific main part stand out within its scene, using at least one of, or a combination of two or more of: a first highlight method that zooms in on the part at a preset magnification, a second that displays it in a preset color, a third that blinks its edge, a fourth that overlays text identifying it, and a fifth that displays it from a plurality of angles. That is, the computer can highlight the parts of the virtual body model that must be examined carefully during the surgical operation.
  • A part of the virtual body model that must be examined carefully during the operation may be, for example, a bifurcation point of a specific vessel or the starting point of a specific vessel.
  • For example, the position of the starting point where a specific vessel enters a specific organ may be displayed using the highlighting methods described above.
  • The computer may define in advance the main parts of the virtual body model that require careful attention during the operation and show the user a list of the defined main parts.
  • Through this list, the user can identify the parts of the virtual body model that require careful attention before or during the operation, compensating for the limited field of view in minimally invasive surgery and minimizing damage to the related organs and vessels as well as postoperative side effects.
  • A software module may reside in random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other type of computer-readable recording medium well known in the art to which the present invention pertains.

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Robotics (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Gynecology & Obstetrics (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A method and system for intraoperative vascular navigation are disclosed. A method by which a system performs intraoperative vascular navigation, according to one embodiment, comprises the steps of: constructing, from two-dimensional medical image data of an object, a three-dimensional vessel model using a vessel learning model; recognizing, from a photographed surgical image, the surgical stage of the object using a surgical-stage learning model; extracting, from the constructed three-dimensional vessel model, the main blood vessel corresponding to the recognized surgical stage, and matching it to a navigation image; and providing the navigation image to a user, wherein the navigation image includes blood vessels branching from the main blood vessel, and the branched vessels can be added or removed based on user input.
PCT/KR2021/004532 2020-04-10 2021-04-09 Method and system for intraoperative vascular navigation WO2021206517A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200043786 KR102457585B1 (ko) 2020-04-10 2020-04-10 Method and system for intraoperative vascular navigation
KR10-2020-0043786 2020-04-10

Publications (1)

Publication Number Publication Date
WO2021206517A1 (fr) 2021-10-14

Family

ID=78022923

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/004532 WO2021206517A1 (fr) 2020-04-10 2021-04-09 Method and system for intraoperative vascular navigation

Country Status (2)

Country Link
KR (1) KR102457585B1 (fr)
WO (1) WO2021206517A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187596A (zh) * 2022-09-09 2022-10-14 中国医学科学院北京协和医院 Neural intelligent auxiliary recognition system for laparoscopic colorectal cancer surgery

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20240082092A (ko) * 2022-12-01 2024-06-10 사회복지법인 삼성생명공익재단 Method and apparatus for three-dimensional liver image reconstruction


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015500122A (ja) * 2011-12-16 2015-01-05 コーニンクレッカ フィリップス エヌ ヴェ Automatic vessel identification by name
KR20150004538A (ko) * 2013-07-03 2015-01-13 현대중공업 주식회사 System and method for setting the measurement direction of surgical navigation
KR20150113929A (ko) * 2014-03-31 2015-10-08 주식회사 코어메드 Method for providing image-guided surgery training and recording medium therefor
KR20190004591A (ko) * 2017-07-04 2019-01-14 경희대학교 산학협력단 Liver lesion surgery navigation system using augmented reality and organ image display method
WO2019164274A1 (fr) * 2018-02-20 2019-08-29 (주)휴톰 Method and device for generating training data

Also Published As

Publication number Publication date
KR102457585B1 (ko) 2022-10-21
KR20210126243A (ko) 2021-10-20

Similar Documents

Publication Publication Date Title
KR102014359B1 (ko) Method and apparatus for providing a camera position based on a surgical image
WO2021206517A1 (fr) Method and system for intraoperative vascular navigation
WO2016126056A1 (fr) Apparatus and method for providing medical information
WO2018093124A2 (fr) Customized surgical guide and method and program for generating a customized surgical guide
US10162935B2 (en) Efficient management of visible light still images and/or video
WO2016125978A1 (fr) Method and apparatus for displaying a medical image
WO2019132165A1 (fr) Method and program for providing feedback on a surgical result
JP2010075403A (ja) Information processing apparatus, control method therefor, and data processing system
WO2011121986A1 (fr) Observation support system, method, and program
KR102146672B1 (ko) Method and program for providing feedback on surgical results
WO2021206518A1 (fr) Method and system for analyzing a surgical procedure after an operation
WO2019132244A1 (fr) Method and program for generating surgical simulation information
WO2022191575A1 (fr) Simulation device and method based on face image matching
WO2010128818A2 (fr) Method and system for processing medical images
JP2014064722A (ja) Virtual endoscopic image generation device, method, and program
KR102222509B1 (ko) Method for assisting judgment on a medical image and apparatus using the same
KR102213412B1 (ko) Method, apparatus, and program for generating a pneumoperitoneum model
WO2019164273A1 (fr) Method and device for predicting surgery time based on a surgical image
EP4376402A1 (fr) Information processing system, information processing method, and program
WO2019132166A1 (fr) Method and program for displaying a surgical assistant image
WO2022145988A1 (fr) Apparatus and method for reading facial fractures using artificial intelligence
WO2020159276A1 (fr) Surgical analysis apparatus, and system, method, and program for analyzing and recognizing a surgical image
WO2022108387A1 (fr) Method and device for generating clinical record data
WO2021096054A1 (fr) Automatic medical drawing generation system, automatic medical drawing generation method using the same, and machine-learning-based automatic medical drawing generation system
WO2023234626A1 (fr) Apparatus and method for generating a 3D model of an organ and blood vessels according to the type of surgery

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21785265

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21785265

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27/03/2023)