WO2023286926A1 - System and method for providing image for neurosurgical practice - Google Patents


Info

Publication number
WO2023286926A1
WO2023286926A1 (application PCT/KR2021/016309)
Authority
WO
WIPO (PCT)
Prior art keywords
image
skull
practice
scan data
neurosurgery
Prior art date
Application number
PCT/KR2021/016309
Other languages
French (fr)
Korean (ko)
Inventor
이민호
Original Assignee
가톨릭대학교 산학협력단 (Catholic University of Korea Industry-Academia Cooperation Foundation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 가톨릭대학교 산학협력단 (Catholic University of Korea Industry-Academia Cooperation Foundation)
Publication of WO2023286926A1 publication Critical patent/WO2023286926A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/25 User interfaces for surgical systems
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0033 Features or image-related aspects of imaging apparatus classified in A61B 5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B 5/0035 Imaging apparatus adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
    • A61B 5/004 Imaging apparatus adapted for image acquisition of a particular organ or body part
    • A61B 5/0042 Imaging apparatus adapted for image acquisition of the brain
    • A61B 5/0059 Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0062 Arrangements for scanning
    • A61B 5/0064 Body surface scanning
    • A61B 5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B 5/055 Measuring involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Processing of medical diagnostic data
    • A61B 6/5229 Combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B 6/5247 Combining images from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37 Surgical systems with images on a monitor during operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for processing medical images, e.g. editing
    • A61B 2034/101 Computer-aided simulation of surgical operations
    • A61B 2034/105 Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107 Visualisation of planned trajectories or target regions
    • A61B 2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B 2090/365 Augmented reality, i.e. correlating a live optical image with another image

Definitions

  • the present invention relates to a technology for providing images for neurosurgery practice, and more particularly, to a system and method capable of providing images for neurosurgery practice based on a 3D printer and augmented reality.
  • Neurosurgery is a more specialized field than general surgery and requires a longer training period and rich experience.
  • Neurosurgical procedures require manipulating surgical instruments through narrow corridors surrounded by vital structures, so the surgeon must be proficient with the tools and with the complex anatomy of the brain.
  • Anatomy is therefore an important subject in neurosurgical education, and apart from direct contact with patients, cadaver practice (human cadaver dissection) is the most effective way to learn it. However, cadaver practice carries cost and ethical risks, and because practice specimens are not standardized, quality control of cadavers is difficult.
  • unlike general or thoracic surgery, neurosurgery requires extensive practice because it deals with regions directly linked to life, where hard structures such as the skull coexist with soft structures such as the brain, blood vessels, and nerves.
  • a technical task of the present invention is to provide a system capable of providing images for neurosurgery education or practice based on a 3D printer and augmented reality.
  • a technical task of the present invention is to provide a method capable of providing images for neurosurgery education or practice based on a 3D printer and augmented reality.
  • a system for providing images for neurosurgery practice may include a 3D printer that generates 3D scan data by performing 3D scanning on a predetermined area of an object, an imaging device that acquires a 3D image of the inside of the predetermined area, and a matching unit that matches the 3D scan data with the 3D image and outputs the result.
  • in an embodiment, the 3D printer generates 3D scan data by 3D-scanning the skull of the object, the imaging device obtains a 3D image of the structures inside the skull, and the matching unit matches the 3D scan data of the skull with the 3D image of the intracranial structures to generate and output an augmented-reality-based image of the object's skull and its internal structures.
  • a method for providing images for neurosurgery practice may include a step (a) of generating 3D scan data by performing 3D scanning on a predetermined area of an object, a step (b) of acquiring a 3D image of the inside of the predetermined area, and a step (c) of matching the 3D scan data with the 3D image and outputting the result.
  • in step (a), the skull of the object is 3D-scanned to generate 3D scan data; in step (b), a 3D image of the structures inside the skull is obtained; and in step (c), the 3D scan data of the skull and the 3D image of the intracranial structures are matched to generate and output an augmented-reality-based image of the object's skull and its internal structures.
  • a system and method capable of providing images for neurosurgery practice or education based on a 3D printer and augmented reality are provided.
  • FIG. 1 is a diagram showing the configuration of an example of an image providing system for neurosurgery practice according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating a method of providing images for neurosurgery practice using the system for providing images for neurosurgery practice according to an embodiment of the present invention.
  • when a temporal precedence relationship is described with terms such as "after," "subsequently," "next," or "before," non-consecutive cases may also be included unless "immediately" or "directly" is used.
  • first, second, etc. are used to describe various components, these components are not limited by these terms. These terms are only used to distinguish one component from another. Therefore, the first component mentioned below may also be the second component within the technical spirit of the present invention.
  • first, second, A, B, (a), and (b) may be used. These terms are only used to distinguish the component from other components, and the nature, sequence, order, or number of the corresponding component is not limited by the term.
  • when an element is described as being "connected," "coupled," or "linked" to another element, the element may be directly connected or linked to that other element, but it should be understood that, unless explicitly stated otherwise, other components may be "interposed" between elements that are or can be indirectly connected.
  • At least one should be understood to include all combinations of one or more of the associated elements.
  • "at least one of the first, second, and third elements" means not only each of the first, second, or third element individually, but also any combination of two or more of the first, second, and third elements.
  • FIG. 1 is a diagram showing the configuration of an example of an image providing system for neurosurgery practice according to an embodiment of the present invention.
  • an image providing system 1 for neurosurgery practice may be implemented to provide images for neurosurgery practice based on a 3D printer and augmented reality.
  • the image providing system 1 for neurosurgery practice may include a 3D printer 100, an imaging device 200, a matching unit 300, and a display unit 400, but the configuration of the image providing system 1 is not limited thereto.
  • the 3D printer 100 may generate 3D scan data by performing 3D scanning on a predetermined area (e.g., a surgical site) of an object (e.g., a patient). For example, the 3D printer 100 may generate 3D scan data by 3D-scanning the skull of an object, and a model of the skull may then be produced through the 3D printer 100.
  • the 3D scan data of the skull (or the skull model) can be saved as a file in STL (Standard Triangle Language) format, implemented as a 3D model in the MITK program, and printed with a 3D printer (e.g., using the Cubicreator program), but is not limited thereto.
  • an ABS-A100 filament may be used for a bone model, but is not limited thereto.
  • the imaging device 200 may provide a 3D image of the inside of a predetermined area of the object.
  • as the imaging device 200, a conventional medical navigation system may be used, or a computing system on which a radiology program capable of performing the corresponding functions is installed may be used, but it is not limited thereto.
  • the imaging device 200 may provide a 3D image of a structure inside the skull of an object.
  • the imaging device 200 may capture an image of a predetermined area of an object in order to provide a 3D image.
  • the imaging device 200 may perform any one of X-ray tomosynthesis, computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI).
  • the imaging device 200 may also be a device that combines two or more of these imaging modalities.
  • a 3D image may be provided by a single imaging device 200 or by a combination of a plurality of imaging devices 200.
  • a CT scan image of the subject's skull and brain MRI may be used to provide a 3D image.
  • a CT scan image can be acquired with an inter-slice resolution of 1 mm and a symmetric matrix size of 512 × 512, with the axial CT images aligned to the AC-PC plane and perpendicular to the midline plane.
  • MRI sequences including standard T1WI, T2WI, T2 SPACE 3D, and time-of-flight (TOF) MRA with a slice thickness of 1 mm can be obtained; the MRI sequences mentioned are exemplary and not limited thereto.
  • standard T1WI and T2WI images may be images of the entire head
  • T2 SPACE 3D images and TOF MRA images may include images from the foramen fossa up to the small bones.
  • the acquired images can be saved as files in DICOM format and segmented into the skull and the main skull-base structures using programs such as MITK and BrainLab; the program used for image segmentation is not limited thereto.
  • structures of the skull can be extracted from CT scan images and segmented in the MITK program, and other structures can be segmented in BrainLab's segmentation program.
  • the optic nerve, trigeminal nerve, and facial nerve may be segmented on an MRI, and the facial nerve segmentation may be achieved by tracing through a CT scan from the internal auditory canal to the stylomastoid foramen;
  • the location of the greater superficial petrosal nerve (GSPN) may be determined according to its relative anatomical location.
  • the internal carotid artery and sigmoid sinus can be segmented using TOF images from MRI and CT scans, the semicircular canals and cochlea of the inner ear can be extracted from CT, and each structure can be segmented, extracted as a 3D image, and adjusted on the T1WI.
  • the matching unit 300 may match the 3D scan data generated by the 3D printer 100 with the 3D image provided by the imaging device 200.
  • the matching unit 300 is a means capable of realizing augmented reality; by matching the 3D scan data with the 3D image and outputting the result through the display unit 400, the image providing system 1 for neurosurgery practice according to an embodiment of the present invention can provide augmented-reality-based neurosurgery practice images.
  • FIG. 2 is a flowchart illustrating a method of providing images for neurosurgery practice using the system for providing images for neurosurgery practice according to an embodiment of the present invention.
  • the step-by-step operation shown in FIG. 2 may be performed by the neurosurgery practice image providing system 1 described with reference to FIG. 1 .
  • the 3D printer 100 may generate 3D scan data by performing 3D scanning on a predetermined area (e.g., a surgical site) of an object (e.g., a patient) (S200).
  • in step S200, the 3D printer 100 may generate 3D scan data by 3D-scanning the skull of the object. The 3D scan data generated in step S200 may then be provided to the matching unit 300.
  • the imaging device 200 may obtain a 3D image of the inside of a predetermined area of the object (S210).
  • the imaging device 200 may provide a 3D image of the internal structure of the object's skull to the matching unit 300.
  • the imaging device 200 may acquire a 3D image by, for example, photographing the inside of the skull of the object.
  • the matching unit 300 may match the 3D scan data generated and provided by the 3D printer 100 in step S200 with the 3D image provided in step S210 (S220).
  • the matching unit 300 matches, for example, the 3D scan data of the object's skull with the 3D image of the structures inside the skull, and an augmented-reality-based image of the object's skull and its internal structures may be generated and output through the display unit 400.
  • the present invention relates to a technology for providing images for neurosurgery practice, and can be specifically applied to a system technology capable of providing images for neurosurgery practice based on a 3D printer and augmented reality.
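The description above saves the skull scan as an STL file for 3D printing. As an illustrative sketch (not the MITK/Cubicreator workflow itself), the binary STL layout — an 80-byte header, a triangle count, and 50 bytes per triangle — can be written and read with Python's `struct` module; the file name and demo triangle below are hypothetical.

```python
import struct

def write_binary_stl(path, triangles):
    """Write triangles to a binary STL file.

    `triangles` is a list of ((nx, ny, nz), (v1, v2, v3)) tuples, each vertex
    an (x, y, z) float triple. Binary STL = 80-byte header, a uint32 triangle
    count, then 50 bytes per triangle (12 little-endian float32 values for the
    normal and three vertices, plus a uint16 attribute field).
    """
    with open(path, "wb") as f:
        f.write(b"skull model".ljust(80, b"\0"))    # 80-byte header
        f.write(struct.pack("<I", len(triangles)))
        for normal, verts in triangles:
            values = list(normal) + [c for v in verts for c in v]
            f.write(struct.pack("<12fH", *values, 0))

def read_binary_stl(path):
    """Read a binary STL file back into the same triangle structure."""
    with open(path, "rb") as f:
        f.read(80)                                   # skip header
        (count,) = struct.unpack("<I", f.read(4))
        triangles = []
        for _ in range(count):
            *floats, _attr = struct.unpack("<12fH", f.read(50))
            normal = tuple(floats[0:3])
            verts = (tuple(floats[3:6]), tuple(floats[6:9]), tuple(floats[9:12]))
            triangles.append((normal, verts))
        return triangles

# Round-trip a single demo triangle as a smoke test.
tri = [((0.0, 0.0, 1.0), ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))]
write_binary_stl("demo.stl", tri)
assert read_binary_stl("demo.stl") == tri
```

Coordinates chosen here are exactly representable in float32, so the round-trip compares equal; arbitrary doubles would be truncated to float32 precision on write.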
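Given the CT acquisition described above (1 mm inter-slice resolution, 512 × 512 matrix), voxel indices map to physical scanner coordinates via the voxel spacing. The 0.5 mm in-plane spacing, zero origin, and identity orientation below are assumptions for illustration; real DICOM headers carry the exact spacing, position, and orientation.

```python
import numpy as np

def voxel_to_world(ijk, spacing=(1.0, 0.5, 0.5), origin=(0.0, 0.0, 0.0)):
    """Map voxel indices (slice, row, col) to scanner coordinates in mm.

    Assumes an axis-aligned volume: world = origin + index * spacing.
    A full DICOM mapping would also apply the ImageOrientationPatient
    direction cosines, omitted here for brevity.
    """
    return np.asarray(origin) + np.asarray(ijk) * np.asarray(spacing)

# Slice 10, centre pixel of a 512 x 512 axial image.
offset = voxel_to_world((10, 256, 256))
assert offset.tolist() == [10.0, 128.0, 128.0]
```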
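The skull extraction from CT described above is performed in programs such as MITK and BrainLab; as a hedged illustration of the underlying idea only, bone can be separated from soft tissue by thresholding Hounsfield units, since cortical bone (several hundred HU) is far denser than brain tissue (roughly 20-40 HU). The 300 HU threshold and the synthetic volume are assumptions for demonstration, not values from the patent.

```python
import numpy as np

def segment_bone(volume_hu, threshold=300):
    """Return a binary mask of voxels at or above a bone-level HU threshold.

    Real segmentation pipelines add morphology, region growing, and manual
    editing; plain thresholding is only the starting point.
    """
    return volume_hu >= threshold

# Synthetic "head": a hollow bone shell (1000 HU) around soft tissue (30 HU).
shape = (64, 64, 64)
zz, yy, xx = np.indices(shape)
r = np.sqrt((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2)
volume = np.full(shape, -1000.0)           # air
volume[r < 30] = 30.0                      # brain-like soft tissue
volume[(r >= 25) & (r < 30)] = 1000.0      # skull shell

mask = segment_bone(volume)
assert mask[(r >= 25) & (r < 30)].all()    # shell labelled as bone
assert not mask[r < 25].any()              # interior excluded
```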
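The patent does not disclose how the matching unit 300 performs the registration between the skull scan data and the CT/MRI-derived 3D image. One standard approach, assuming corresponding landmark points are available in both coordinate systems, is least-squares rigid registration via the Kabsch (SVD) algorithm; the sketch below is that generic technique, not the claimed implementation.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid registration (Kabsch algorithm).

    Finds rotation R and translation t minimising ||R @ p + t - q|| over
    corresponding rows of two (N, 3) point arrays, e.g. landmarks on the
    3D-scanned skull surface and the same landmarks in the image volume.
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the optimal orthogonal matrix.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Synthetic check: transform some points, then recover the transform.
rng = np.random.default_rng(0)
pts = rng.normal(size=(10, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
moved = pts @ R_true.T + t_true
R, t = rigid_register(pts, moved)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

In practice the correspondences themselves must first be found (e.g. fiducial markers or iterative closest point), which is the harder part of an AR overlay pipeline.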

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Robotics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Optics & Photonics (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Gynecology & Obstetrics (AREA)
  • Neurology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present specification relates to a technology for providing an image for neurosurgical practice, and a system for providing an image for neurosurgical practice, according to an embodiment of the present invention, may comprise: a 3D printer for generating three-dimensional scan data by three-dimensionally scanning a predetermined area of an object; an imaging device for acquiring a three-dimensional image of the inside of the predetermined area of the object; and a matching means for matching and outputting the three-dimensional scan data and the three-dimensional image.

Description

System and Method for Providing Images for Neurosurgery Practice
The present invention relates to a technology for providing images for neurosurgery practice and, more particularly, to a system and method capable of providing images for neurosurgery practice based on a 3D printer and augmented reality.
Neurosurgery is a more specialized field than general surgery and requires a longer training period and extensive experience. Neurosurgical procedures require manipulating surgical instruments through narrow corridors surrounded by vital structures, so the surgeon must be proficient with the tools and with the complex anatomy of the brain.
Anatomy is therefore an important subject in neurosurgical education. Apart from direct contact with patients, cadaver practice (human cadaver dissection) is the most effective way to learn anatomy. However, cadaver practice carries cost and ethical risks, and because practice specimens are not standardized, quality control of cadavers is difficult.
Moreover, unlike general surgery or thoracic surgery, neurosurgery requires extensive practice because it deals with regions directly linked to life, where a hard structure (the skull) and soft structures (the brain, blood vessels, and nerves) coexist.
Recently, with the development of 3D printers and augmented reality systems, various attempts have been made to use them in many fields, and education or practice applying 3D printing and augmented reality is expected to become possible in the field of neurosurgery as well.
A technical object of the present invention is to provide a system capable of providing images for neurosurgery education or practice based on a 3D printer and augmented reality.
A technical object of the present invention is to provide a method capable of providing images for neurosurgery education or practice based on a 3D printer and augmented reality.
The technical objects of the present invention are not limited to the matters mentioned above; from the following description, a person of ordinary skill in the art to which the present invention belongs will clearly understand the other objects intended by the present invention.
To solve the above technical problems, a system for providing images for neurosurgery practice according to an embodiment of the present invention may include a 3D printer that generates 3D scan data by performing 3D scanning on a predetermined area of an object, an imaging device that acquires a 3D image of the inside of the predetermined area, and a matching unit that matches the 3D scan data with the 3D image and outputs the result.
According to an embodiment of the present invention, the 3D printer generates 3D scan data by 3D-scanning the object's skull, the imaging device acquires a 3D image of the structures inside the skull, and the matching unit matches the 3D scan data of the skull with the 3D image of the intracranial structures to generate and output an augmented-reality-based image of the object's skull and its internal structures.
A method for providing images for neurosurgery practice according to an embodiment of the present invention may include a step (a) of generating 3D scan data by performing 3D scanning on a predetermined area of an object, a step (b) of acquiring a 3D image of the inside of the predetermined area, and a step (c) of matching the 3D scan data with the 3D image and outputting the result.
According to an embodiment of the present invention, in step (a) the object's skull is 3D-scanned to generate 3D scan data; in step (b) a 3D image of the structures inside the skull is acquired; and in step (c) the 3D scan data of the skull and the 3D image of the intracranial structures are matched to generate and output an augmented-reality-based image of the object's skull and its internal structures.
Specific details according to various examples of the present invention, other than the means for solving the problems mentioned above, are included in the description and drawings below.
According to an embodiment of the present invention, a system and method are provided that can supply images for neurosurgery practice or education based on a 3D printer and augmented reality.
Accordingly, because neurosurgery-related education or practice can be carried out without cadaver practice, the cost and ethical problems associated with cadaver practice can be resolved.
In addition, whereas cadaver practice is inherently limited by the number of available specimens and therefore allows only limited education or practice, the image providing technology for neurosurgical education according to an embodiment of the present invention allows repeated education or practice, making medical skills easier to acquire and improving proficiency.
The effects of the present invention are not limited to those mentioned above; other effects not mentioned will be clearly understood by those skilled in the art from the description below.
Since the problems to be solved, the means for solving them, and the effects mentioned above do not specify essential features of the claims, the scope of the claims is not limited by the matters described in this summary.
The accompanying drawings are provided to aid understanding of embodiments of the present invention and, together with the detailed description, present the embodiments. However, the technical features of the embodiments are not limited to specific drawings, and the features disclosed in each drawing may be combined with one another to form new embodiments.
FIG. 1 is a diagram showing the configuration of an example of a system for providing images for neurosurgery practice according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a method of providing images for neurosurgery practice using the system for providing images for neurosurgery practice according to an embodiment of the present invention.
The advantages and features of the present invention, and the methods of achieving them, will become clear with reference to the embodiments described in detail below together with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be embodied in various different forms; these embodiments are provided only so that the disclosure of the present invention is complete and so that the scope of the invention is fully conveyed to those of ordinary skill in the art to which the present invention belongs, and the present invention is defined only by the scope of the claims.
The shapes, sizes, ratios, angles, numbers, and the like disclosed in the drawings for describing embodiments of the present invention are illustrative, so the present invention is not limited to what is shown. Like reference numerals designate like elements throughout the specification. In describing the present invention, detailed descriptions of related known technologies are omitted where they might unnecessarily obscure the subject matter of the present invention. Where "comprises," "has," "consists of," and the like are used in this specification, other elements may be added unless "only" is used. A component expressed in the singular includes the plural unless explicitly stated otherwise.
In interpreting a component, it is interpreted as including an error range even if there is no separate explicit description of one.
Where a positional relationship between two parts is described with, for example, "on," "above," "below," or "next to," one or more other parts may be located between the two parts unless "immediately" or "directly" is used.
Where a temporal relationship is described with "after," "subsequently," "next," or "before," non-consecutive cases may also be included unless "immediately" or "directly" is used.
Although the terms first, second, and so on are used to describe various components, these components are not limited by these terms. These terms are used only to distinguish one component from another. Accordingly, a first component mentioned below may also be a second component within the technical spirit of the present invention.
In describing the components of the present invention, terms such as first, second, A, B, (a), and (b) may be used. These terms serve only to distinguish a component from other components; the nature, sequence, order, or number of the component is not limited by the term. When a component is described as being "connected," "coupled," or "linked" to another component, it may be directly connected or linked to that component, but it should be understood that, unless explicitly stated otherwise, other components may be "interposed" between components that are or can be indirectly connected.
"적어도 하나"는 연관된 구성요소의 하나 이상의 모든 조합을 포함하는 것으로 이해되어야 할 것이다. 예를 들면, "제1, 제2, 및 제3 구성요소의 적어도 하나"의 의미는 제1, 제2, 또는 제3 구성요소뿐만 아니라, 제1, 제2, 및 제3 구성요소의 두 개 이상의 모든 구성요소의 조합을 포함한다고 할 수 있다. “At least one” should be understood to include all combinations of one or more of the associated elements. For example, "at least one of the first, second, and third elements" means not only the first, second, or third elements, but also two of the first, second, and third elements. It can be said to include a combination of all components of one or more.
본 명세서의 여러 실시예들의 각각 특징들이 부분적으로 또는 전체적으로 서로 결합 또는 조합 가능하고, 기술적으로 다양한 연동 및 구동이 가능하며, 각 실시예들이 서로에 대하여 독립적으로 실시 가능할 수도 있고 연관 관계로 함께 실시할 수도 있다.Each feature of the various embodiments of the present specification can be partially or entirely combined or combined with each other, technically various interlocking and driving are possible, and each embodiment can be implemented independently of each other or can be implemented together in an association relationship. may be
이하, 첨부된 도면 및 실시예를 통해 본 발명의 실시예를 살펴보면 다음과 같다. 도면에 도시된 구성요소들의 스케일은 설명의 편의를 위해 실제와 다른 스케일을 가지므로, 도면에 도시된 스케일에 한정되지 않는다.Hereinafter, looking at the embodiments of the present invention through the accompanying drawings and embodiments are as follows. Since the scales of the components shown in the drawings have different scales from actual ones for convenience of explanation, they are not limited to the scales shown in the drawings.
Hereinafter, a system and method for providing images for neurosurgical practice according to an embodiment of the present invention are described with reference to the accompanying drawings.
FIG. 1 is a diagram showing the configuration of an example of a system for providing images for neurosurgical practice according to an embodiment of the present invention.
Referring to FIG. 1, the system 1 for providing images for neurosurgical practice according to an embodiment of the present invention may be implemented to provide images for neurosurgical practice based on a 3D printer and augmented reality.
The system 1 for providing images for neurosurgical practice according to an embodiment of the present invention may include a 3D printer 100, an imaging device 200, a registration means 300, and a display means 400; the configuration of the system 1 is not limited thereto.
The 3D printer 100 may generate three-dimensional (3D) scan data by performing 3D scanning on a predetermined region (e.g., a surgical site) of an object (e.g., a patient). For example, the 3D printer 100 may generate 3D scan data by 3D-scanning the skull of the object. A model of the skull may thereby be fabricated through the 3D printer 100.
For example, the 3D scan data of the skull (or the skull model) may be saved as a file in the STL (Standard Triangle Language) format and, after being realized as a 3D model by the MITK program, may be handled in the 3D printer software (e.g., the Cubicreator program), but the invention is not limited thereto. For example, ABS-A100 filament may be used for the bone model, but the invention is not limited thereto.
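The passage above only names the STL container. For illustration, the binary STL layout it relies on (an 80-byte header, a uint32 facet count, then 50 bytes per facet) can be sketched with the Python standard library; the function names and the stream-based interface are assumptions made for this sketch, not part of the patent or of the MITK/Cubicreator tooling.

```python
import struct

def write_binary_stl(stream, triangles):
    """Write facets as binary STL: an 80-byte header, a uint32 facet count,
    then per facet a normal and three vertices (12 float32 values) plus a
    uint16 attribute byte count (50 bytes per facet in total)."""
    stream.write(b"skull model (illustrative)".ljust(80, b"\0"))
    stream.write(struct.pack("<I", len(triangles)))
    for normal, v1, v2, v3 in triangles:
        for vec in (normal, v1, v2, v3):
            stream.write(struct.pack("<3f", *vec))
        stream.write(struct.pack("<H", 0))  # attribute byte count, usually 0

def read_binary_stl(stream):
    """Read the facets back as (normal, v1, v2, v3) tuples."""
    stream.read(80)  # skip the header
    (count,) = struct.unpack("<I", stream.read(4))
    facets = []
    for _ in range(count):
        values = struct.unpack("<12fH", stream.read(50))
        facets.append((values[0:3], values[3:6], values[6:9], values[9:12]))
    return facets
```

Round-tripping a model through these two functions preserves every float32 coordinate exactly, which is one reason STL is a convenient interchange format between segmentation software and printer slicers.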
The imaging device 200 may provide a 3D image of the interior of the predetermined region of the object. For example, a conventional medical navigation system may be used as the imaging device 200, or a computing system on which a radiology program capable of performing this function is installed may be used, but the invention is not limited thereto.
For example, the imaging device 200 may provide a 3D image of the structures inside the skull of the object. To provide the 3D image, the imaging device 200 may, for example, capture images of the predetermined region of the object.
For example, the imaging device 200 may be any one of an X-ray-based tomosynthesis device, a computed tomography (CT) device, and a positron emission tomography (PET) device, or may be a device that performs magnetic resonance imaging (MRI). The imaging device 200 may also be a device that combines two or more of these imaging modalities.
For example, the 3D image may be provided by a single imaging device 200 or may be provided jointly by a plurality of imaging devices 200.
For example, a CT scan image of the object's skull and a brain MRI may be used to provide the 3D image.
For example, the CT scan images may be acquired with an inter-slice resolution of 1 mm using a symmetric matrix size of 512 x 512, and the axial CT images may be aligned with the AC-PC plane and perpendicular to the midline plane.
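As a hedged illustration of the acquisition geometry just described, mapping a voxel index to millimetre patient coordinates is a simple per-axis affine step for axis-aligned axial slices. The pixel spacing used below is an assumed value; the document states only the 512 x 512 matrix and the 1 mm inter-slice resolution.

```python
def voxel_to_patient(index, origin=(0.0, 0.0, 0.0), spacing=(0.5, 0.5, 1.0)):
    """Map a (column, row, slice) voxel index to patient coordinates in mm,
    assuming axis-aligned axial slices with no gantry tilt. The 1.0 in the
    third spacing component reflects the 1 mm inter-slice resolution; the
    0.5 mm in-plane spacing is an assumption for illustration."""
    return tuple(origin[i] + index[i] * spacing[i] for i in range(3))
```

With these assumed spacings, voxel (10, 20, 5) lands at (5.0, 10.0, 5.0) mm from the volume origin; a real DICOM series would supply the origin and spacing in its Image Plane attributes.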
For example, MRI sequences including standard T1WI, T2WI, T2 SPACE 3D, and time-of-flight (TOF) MRA with a slice thickness of 1 mm may be acquired; the MRI sequences mentioned here are illustrative, and the invention is not limited thereto.
For example, the standard T1WI and T2WI images may cover the entire head, while the T2 SPACE 3D images and the TOF MRA may cover the region from the foramen magnum up to above the small bones.
For example, the acquired images may be saved as files in the DICOM format and may be segmented into the skull and the main skull-base structures through, for example, the MITK program and the BrainLab program; the programs used for image segmentation are not limited thereto.
For example, the structures of the skull may be extracted from the CT scan images and segmented in the MITK program, while the other structures may be segmented in BrainLab's segmentation program.
For example, the optic nerve, the trigeminal nerve, and the facial nerve may be segmented on the MRI; the facial nerve may be segmented by tracing it on the CT scan from the internal auditory canal to the stylomastoid foramen; and the location of the greater superficial petrosal nerve (GSPN) may be determined according to its relative anatomical position.
For example, the internal carotid artery and the sigmoid sinus may be segmented using the TOF images of the MRI and the CT scans, the semicircular canals and the cochlea of the inner ear may be extracted from the CT, and each structure may be segmented, extracted as a 3D image, and adjusted on the T1WI.
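The segmentation paragraphs above leave the bone-extraction criterion to the MITK program. One common approach, given here as an assumption rather than the patent's stated method, is thresholding the CT data in Hounsfield units, since cortical bone typically lies well above roughly 300 HU while soft tissue stays near 0 to 100 HU. A minimal sketch on a toy slice:

```python
BONE_THRESHOLD_HU = 300  # typical lower bound for cortical bone (assumption)

def segment_bone(slice_hu, threshold=BONE_THRESHOLD_HU):
    """Return a binary mask (1 = bone) for a 2D CT slice in Hounsfield units."""
    return [[1 if hu >= threshold else 0 for hu in row] for row in slice_hu]

# Toy 3x3 "slice": air (~ -1000 HU), soft tissue (~ 40 HU), bone (~ 1000 HU)
slice_hu = [
    [-1000,   40, 1000],
    [   40, 1000, 1000],
    [-1000,   40,   40],
]
mask = segment_bone(slice_hu)
```

In practice the threshold would be followed by morphological cleanup and connected-component selection, which dedicated tools such as MITK perform interactively.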
The registration means 300 may register the 3D scan data generated by the 3D printer 100 with the 3D image provided by the imaging device 200.
For example, the registration means 300 is a means capable of realizing augmented reality: by registering the 3D scan data with the 3D image and outputting the result through the display means 400, the system 1 for providing images for neurosurgical practice according to an embodiment of the present invention can provide augmented-reality-based images for neurosurgical practice.
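The patent does not disclose the registration algorithm itself. Under that caveat, point-based rigid registration is a standard choice; the sketch below implements only its translation component (aligning landmark centroids) plus a registration-error metric, and leaves out rotation estimation (which would use, e.g., the Kabsch algorithm or ICP). All names here are invented for the sketch.

```python
import math

def centroid(points):
    """Arithmetic mean of a list of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align_by_centroid(moving, fixed):
    """Translate the `moving` landmarks so their centroid matches `fixed`'s.
    This is only the translation step of a rigid point-based registration;
    a full solution would also estimate a rotation."""
    cm, cf = centroid(moving), centroid(fixed)
    t = tuple(cf[i] - cm[i] for i in range(3))
    moved = [tuple(p[i] + t[i] for i in range(3)) for p in moving]
    return moved, t

def rms_error(a, b):
    """Root-mean-square distance between paired point sets."""
    return math.sqrt(
        sum(sum((p[i] - q[i]) ** 2 for i in range(3)) for p, q in zip(a, b)) / len(a)
    )
```

In a full pipeline, the residual RMS error after the rigid fit is what tells the operator whether the overlay of the internal 3D image on the printed skull can be trusted.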
The configuration of the system for providing images for neurosurgical practice according to an embodiment of the present invention, and the functions and operations of each component, have been described above. A method of providing images for neurosurgical practice using the system according to an embodiment of the present invention is described below.
FIG. 2 is a flowchart illustrating a method of providing images for neurosurgical practice using the system for providing images for neurosurgical practice according to an embodiment of the present invention.
The step-by-step operations shown in FIG. 2 may be performed by the system 1 for providing images for neurosurgical practice described with reference to FIG. 1.
Referring to FIG. 2, the 3D printer 100 may generate 3D scan data by performing 3D scanning on a predetermined region (e.g., a surgical site) of an object (e.g., a patient) (S200).
In step S200, the 3D printer 100 may generate the 3D scan data by 3D-scanning the skull of the object. The 3D scan data generated by the 3D printer 100 in step S200 may be provided to the registration means 300.
Thereafter, the imaging device 200 may acquire a 3D image of the interior of the predetermined region of the object (S210).
In step S210, the imaging device 200 may provide a 3D image of the structures inside the skull of the object to the registration means 300.
In step S210, the imaging device 200 may, for example, acquire the 3D image by imaging the interior of the skull of the object.
Thereafter, the registration means 300 may register the 3D scan data generated and provided by the 3D printer 100 in step S200 with the 3D image provided in step S210 (S220).
In step S220, the registration means 300 may, for example, register the 3D scan data of the object's skull with the 3D image of the structures inside the object's skull, generate an augmented-reality-based image of the object's skull and its internal structures, and output the image through the display means 400.
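Steps S200 to S220 form a linear pipeline. A minimal sketch of that data flow follows; every type and function name here is invented for illustration and does not come from the patent.

```python
from dataclasses import dataclass

@dataclass
class ScanData:       # S200 output: surface scan of the printed skull model
    points: list

@dataclass
class VolumeImage:    # S210 output: 3D image of the skull interior
    voxels: list

@dataclass
class FusedView:      # S220 output: registered pair ready for AR display
    surface: ScanData
    interior: VolumeImage

def step_s200(skull_surface_points):
    return ScanData(points=skull_surface_points)

def step_s210(interior_volume):
    return VolumeImage(voxels=interior_volume)

def step_s220(scan, image):
    # A real system would compute and apply a rigid transform here;
    # this sketch simply pairs the two data sets for display.
    return FusedView(surface=scan, interior=image)

view = step_s220(step_s200([(0, 0, 0)]), step_s210([[1]]))
```

Keeping each step as a separate function mirrors the flowchart of FIG. 2 and makes it easy to swap in a real scanner, imaging device, or registration routine behind the same interfaces.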
Accordingly, since neurosurgery-related education and practice can be carried out without cadaver practice, the cost and ethical problems associated with cadaver practice can be resolved.
In addition, because the number of specimens available for cadaver practice is inevitably limited, education and practice could previously be carried out only to a limited extent; with the image-providing technology for neurosurgical education according to an embodiment of the present invention, however, education and practice can be repeated, so that medical techniques are not only easier to acquire but proficiency can also be improved.
Although the embodiments of the present invention have been described above in detail with reference to the accompanying drawings, the present invention is not necessarily limited to these embodiments and may be variously modified without departing from the technical spirit of the present invention. Accordingly, the embodiments disclosed herein are intended not to limit but to describe the technical spirit of the present invention, and the scope of the technical spirit of the present invention is not limited by these embodiments. The embodiments described above should therefore be understood as illustrative in all respects and not restrictive. The scope of protection of the present invention should be construed according to the claims, and all technical ideas within a scope equivalent thereto should be construed as falling within the scope of the present invention.
The present invention relates to technology for providing images for neurosurgical practice and, more specifically, is applicable to system technology capable of providing images for neurosurgical practice based on a 3D printer and augmented reality.

Claims (4)

  1. A system for providing images for neurosurgical practice, the system comprising: a 3D printer that generates three-dimensional (3D) scan data by performing 3D scanning on a predetermined region of an object;
    an imaging device that acquires a 3D image of an interior of the predetermined region of the object; and
    registration means that registers the 3D scan data with the 3D image and outputs a result.
  2. The system for providing images for neurosurgical practice according to claim 1, wherein
    the 3D printer generates the 3D scan data by performing 3D scanning on a skull of the object,
    the imaging device acquires a 3D image of structures inside the skull of the object, and
    the registration means registers the 3D scan data of the skull of the object with the 3D image of the structures inside the skull of the object, and generates and outputs an image of the skull of the object and the structures inside the skull on an augmented-reality basis.
  3. A method of providing images for neurosurgical practice, the method comprising: (a) generating 3D scan data by performing 3D scanning on a predetermined region of an object;
    (b) acquiring a 3D image of an interior of the predetermined region of the object; and
    (c) registering the 3D scan data with the 3D image and outputting a result.
  4. The method of providing images for neurosurgical practice according to claim 3, wherein
    in step (a), the 3D scan data is generated by performing 3D scanning on a skull of the object,
    in step (b), a 3D image of structures inside the skull of the object is acquired, and
    in step (c), the 3D scan data of the skull of the object is registered with the 3D image of the structures inside the skull of the object, and an image of the skull of the object and the structures inside the skull is generated and output on an augmented-reality basis.
PCT/KR2021/016309 2021-07-16 2021-11-10 System and method for providing image for neurosurgical practice WO2023286926A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210093775A KR20230013220A (en) 2021-07-16 2021-07-16 System and method for providing video for neurosurgery practice
KR10-2021-0093775 2021-07-16

Publications (1)

Publication Number Publication Date
WO2023286926A1 true WO2023286926A1 (en) 2023-01-19

Family

ID=84919408

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/016309 WO2023286926A1 (en) 2021-07-16 2021-11-10 System and method for providing image for neurosurgical practice

Country Status (2)

Country Link
KR (1) KR20230013220A (en)
WO (1) WO2023286926A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101146796B1 (en) * 2010-11-12 2012-05-21 연세대학교 산학협력단 Apparatus and method for modeling cranioface and apparatus for simulating craniofacial surgery
KR20180116090A (en) * 2017-04-14 2018-10-24 (주)칼리온 Medical navigation system and the method thereof
KR20190096575A (en) * 2018-02-09 2019-08-20 고려대학교 산학협력단 Medical imaging system
KR20200056855A (en) * 2018-11-15 2020-05-25 서울여자대학교 산학협력단 Method, apparatus and program for generating a pneumoperitoneum model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEE JEONG-JIN: "Use Status and Prospects of 3D Printing Technology for Biomedical", THE MAGAZINE OF THE IEIE, vol. 43, no. 8, 25 August 2016 (2016-08-25), pages 20 - 29, XP093024351 *

Also Published As

Publication number Publication date
KR20230013220A (en) 2023-01-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21950268

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE