WO2023234626A1 - Apparatus and method for generating a 3D model of an organ and a blood vessel according to the type of surgery - Google Patents

Apparatus and method for generating a 3D model of an organ and a blood vessel according to the type of surgery

Info

Publication number
WO2023234626A1
Authority
WO
WIPO (PCT)
Prior art keywords
blood vessels
organs
surgery
processor
image
Prior art date
Application number
PCT/KR2023/007105
Other languages
English (en)
Korean (ko)
Inventor
정태영
김성재
Original Assignee
(주)휴톰
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)휴톰 filed Critical (주)휴톰
Publication of WO2023234626A1

Classifications

    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/37: Surgical systems with images on a monitor during operation
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/11: Region-based segmentation
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G16H 30/40: ICT specially adapted for the handling or processing of medical images, e.g. editing
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2090/3762: Surgical systems with images on a monitor during operation using computed tomography systems [CT]
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20081: Training; Learning

Definitions

  • the present disclosure relates to an apparatus and method for generating 3D models of organs and blood vessels depending on the type of surgery.
  • simulations for existing virtual mock surgeries have been developed based on anatomical structures regardless of the type of surgery. Accordingly, there is a problem in that simulations for existing virtual mock surgeries do not reflect the characteristics of the type of surgery.
  • the present disclosure was created to solve the above-mentioned problems, and seeks to provide a method, device, and program for automatically generating 3D models of organs and blood vessels according to the type of surgery.
  • the device for generating 3D models of organs and blood vessels according to the type of surgery according to the present disclosure includes a communication module, a memory, and at least one processor. The processor obtains one or more scanned images output by scanning the patient's body; when the type of specific surgery to be performed on the patient is input through a first user interface (UI), obtains information related to organs and blood vessels related to the input type of specific surgery; based on the acquired information, segments and identifies an image for each of the organs and blood vessels from the scanned image through an artificial intelligence (AI) model; uses the images for each of the identified organs and blood vessels to generate 3D modeling data for each of the organs and blood vessels; and matches the generated 3D modeling data to generate a 3D model for performing the specific surgery.
  • the scanned image includes one or more computed tomography (CT) images of the patient's body, and the processor may acquire the CT image from at least one of an external device that scans the patient's body or a cloud server connected to the external device.
  • the memory includes a database that stores information on organs and blood vessels corresponding to a plurality of surgery types, and the processor may obtain, from the database, information related to organs and blood vessels related to the input specific type of surgery.
  • the AI model includes a previously learned algorithm module for each of the plurality of surgery types, which identifies images of organs and blood vessels corresponding to each of the plurality of surgery types. When one of the plurality of surgery types is selected through a second UI, the processor may segment and identify images for each of the organs and blood vessels from the scanned image through the AI model using the selected algorithm module.
  • when generating the 3D modeling data, the processor provides a third UI for inspecting the position, size, and shape of the image for each of the organs and blood vessels identified from the scan image, and may modify the images for each of the organs and blood vessels according to the inspection results entered through the third UI.
  • the third UI may include at least one of a first tool for moving the position of the image for each of the identified organs and blood vessels, a second tool for deletion, or a third tool for creating an additional image.
  • the processor may adjust the position of the generated 3D modeling data based on the movement of the patient or the position of blood vessels commonly appearing on the scan image.
  • the method of generating a 3D model of organs and blood vessels according to the type of surgery according to the present disclosure includes: obtaining, by the processor, one or more scanned images output by scanning the patient's body; when the type of specific surgery to be performed on the patient is input through a first user interface (UI), acquiring, by the processor, information related to organs and blood vessels related to the input type of specific surgery; segmenting and identifying, by the processor, an image for each of the organs and blood vessels from the scanned image through an artificial intelligence (AI) model based on the acquired information; generating, by the processor, 3D modeling data for each of the identified organs and blood vessels using the images for each of the identified organs and blood vessels; and generating, by the processor, a 3D model for performing the specific surgery by matching the generated 3D modeling data.
  • a computer-readable recording medium recording a computer program for executing a method for implementing the present disclosure may be further provided.
  • a method, device, and program for automatically generating 3D models of organs and blood vessels according to the type of surgery can be provided.
  • FIG. 1 is a schematic diagram of a system for implementing a method for generating 3D models of organs and blood vessels according to the type of surgery, according to the present disclosure.
  • Figure 2 is a block diagram for explaining the configuration of a device that generates 3D models of organs and blood vessels according to the type of surgery according to the present disclosure.
  • FIG. 3 is a flowchart illustrating a method for generating 3D models of organs and blood vessels according to the type of surgery according to the present disclosure.
  • FIG. 4 is a flowchart illustrating a method for segmenting/identifying organs and blood vessels on a scanned image according to the present disclosure.
  • FIGS. 5 to 7 are diagrams for explaining a method of generating 3D models of organs and blood vessels according to the type of surgery according to the present disclosure.
  • terms such as "first" and "second" are used only to distinguish one component from another, and the components are not limited by these terms.
  • the identification code for each step is used for convenience of explanation; it does not indicate the order of the steps, and each step may be performed in an order different from the specified order unless a specific order is clearly stated in the context.
  • an "image" refers to multi-dimensional data composed of discrete image elements (e.g., pixels in a two-dimensional image and voxels in a 3D image).
  • the image may include a medical image of an object acquired by a CT imaging device.
  • an “object” may be a person (eg, a patient) or an animal, or a part or all of a person or an animal.
  • the object may include at least one of organs such as the liver, heart, uterus, brain, breast, abdomen, etc., blood vessels (eg, arteries or veins, etc.), fat tissue, etc.
  • target object may mean a part of a person that is actually the target of surgery.
  • a “user” is a medical professional and may be a doctor, nurse, clinical pathologist, medical imaging expert, etc., and may be a technician who repairs a medical device, but is not limited thereto.
  • "3D modeling data" refers to data that represents a specific object in 3D, and a "3D model" may refer to an element of a simulation created through the combination of one or more pieces of 3D modeling data.
  • the "3D modeling data" and/or "3D model" may be provided to the user through the display in two-dimensional (2D), three-dimensional (3D), or augmented reality form, so that the corresponding body parts appear as if they exist in real space.
  • the "first device” i.e., the device that generates 3D models of organs and blood vessels according to the type of surgery
  • the “first device” refers to various devices that can perform computational processing and provide results to the user. Included.
  • for example, the first device may be a desktop PC or a laptop, as well as a smartphone, a tablet PC, a cellular phone, a PCS (Personal Communication Service) phone, a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm PC (Palm Personal Computer), or a personal digital assistant (PDA).
  • when a Head Mounted Display (HMD) device includes computing functionality, the HMD device may be the first device.
  • the device may be implemented as a separate server that receives requests from clients and performs information processing.
  • FIG. 1 is a schematic diagram of a system 1000 for implementing a method for generating 3D models of organs and blood vessels depending on the type of surgery, according to the present disclosure.
  • the system 1000 for implementing a method of generating 3D models of organs and blood vessels according to the type of surgery includes a first device 100, a hospital server 200, a database 300, and an AI model 400.
  • the first device 100 is shown to be implemented in the form of a single desktop, but it is not limited thereto. As described above, the first device 100 may refer to various types of devices or a device group in which one or more types of devices are connected.
  • the first device 100, hospital server 200, database 300, and artificial intelligence (AI) model 400 included in the system 1000 perform communication through the network (W).
  • the network W may include a wired network and a wireless network.
  • the network may include various networks such as a local area network (LAN), a metropolitan area network (MAN), and a wide area network (WAN).
  • the network W may include the known World Wide Web (WWW).
  • the network (W) according to an embodiment of the present disclosure is not limited to the networks listed above, and may include at least some of a known wireless data network, a known telephone network, and a known wired and wireless television network.
  • the first device 100 may provide a method for generating 3D models of organs and blood vessels depending on the type of surgery.
  • the first device 100 may obtain a scanned image by scanning the patient's body and receive input from the user about the type of specific surgery to be performed on the patient.
  • the first device 100 may obtain information related to organs and blood vessels related to the type of specific surgery entered. Based on the acquired information, the first device 100 can segment and identify images for each organ and blood vessel on one or more scanned images through an AI model.
  • the first device 100 uses images of each identified organ and blood vessel to generate 3D modeling data for each organ and blood vessel, and matches the generated 3D modeling data to create a 3D model for the progress of a specific surgery. can do.
  • the hospital server 200 may store scanned images (e.g., computed tomography (CT) images) obtained by scanning the patient's body.
  • the hospital server 200 may transmit the saved scanned image to the first device 100, the database 300, or the AI model 400.
  • the hospital server 200 can protect personal information about the body by pseudonymizing or anonymizing the subject of the CT image. Additionally, the hospital server may encrypt and store information related to the age/gender/height/weight/parity of the patient who is the subject of the CT image input by the user.
  • the database 300 may store 3D modeling data and 3D models for various objects/surgical equipment generated by the first device 100. As another example, the database 300 may store information on organs and blood vessels corresponding to each type of surgery. Although FIG. 1 illustrates the case where the database 300 is implemented outside the first device 100, the database 300 may also be implemented as a component of the first device 100.
  • the AI model 400 is an artificial intelligence model learned to segment and identify images for each of the organs and blood vessels from one or more scanned images.
  • the AI model 400 may be trained to segment and identify images for each of the organs and blood vessels from one or more scanned images through a data set constructed from actual surgical images, CT images, and anatomy-related data. Learning methods may include, but are not limited to, supervised training/unsupervised training.
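As a hedged illustration of what the AI model's segmented output might look like downstream (the label values and structure names below are assumptions for the sketch, not taken from this disclosure), a labeled scan volume can be split into one binary mask per organ/blood vessel:

```python
# Hypothetical label-to-structure mapping; real class ids would come
# from the trained model's configuration.
LABELS = {1: "liver", 2: "spleen", 3: "AORTA"}

def split_masks(volume):
    """volume: nested lists (slices x rows x cols) of integer class labels,
    with 0 as background; returns one binary mask per known structure."""
    return {name: [[[1 if v == lab else 0 for v in row]
                    for row in sl] for sl in volume]
            for lab, name in LABELS.items()}

vol = [[[0, 1], [2, 3]]]   # a tiny 1x2x2 labeled "scan"
masks = split_masks(vol)
```

Each mask can then feed the per-structure 3D modeling step described below.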
  • the AI model 400 may be implemented outside the first device 100 (e.g., as a cloud-based service), but is not limited thereto and may also be implemented as a component inside the first device 100.
  • FIG. 2 is a block diagram for explaining the configuration of a first device 100 that generates 3D models of organs and blood vessels according to the type of surgery according to the present disclosure.
  • the first device 100 may include a memory 110, a communication module 120, a display 130, an input module 140, and a processor 150.
  • the processor 150 may include a central processing unit (CPU), a graphics processing unit (GPU), or a combination thereof.
  • the memory 110 can store data supporting various functions of the first device 100 and a program for the operation of the processor 150, and can store input/output data (e.g., music files, images, moving pictures, etc.), a plurality of application programs (application programs or applications) running on the device, data for operation of the first device 100, and commands can be stored. At least some of these applications may be downloaded from an external server via wireless communication.
  • the memory 110 may include at least one type of storage medium among flash memory, hard disk, solid state disk (SSD), silicon disk drive (SDD), multimedia card micro type, card-type memory (e.g., SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, and optical disk.
  • the memory 110 may include a database that is separate from the device but connected to it by wire or wirelessly. That is, the database shown in FIG. 1 may be implemented as a component of the memory 110.
  • the communication module 120 may include one or more components that enable communication with an external device, for example, at least one of a broadcast reception module, a wired communication module, a wireless communication module, a short-range communication module, and a location information module.
  • wired communication modules may include various modules such as a Local Area Network (LAN) module, a Wide Area Network (WAN) module, or a Value Added Network (VAN) module, as well as USB (Universal Serial Bus), HDMI (High Definition Multimedia Interface), DVI (Digital Visual Interface), RS-232 (Recommended Standard 232), power line communication, or POTS (Plain Old Telephone Service) modules.
  • wireless communication modules may include modules supporting various wireless communication methods such as GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), UMTS (Universal Mobile Telecommunications System), TDMA (Time Division Multiple Access), LTE (Long Term Evolution), 4G, 5G, and 6G.
  • the display 130 displays (outputs) information processed by the first device 100 (e.g., 3D modeling data/models or a simulation screen based on the 3D modeling data/models). For example, the display may display execution screen information of an application program (e.g., an application) running on the first device 100, or User Interface (UI) and Graphic User Interface (GUI) information according to this execution screen information. The types of UI output on the display 130 will be described later.
  • the input module 140 is for receiving information from the user.
  • the processor 150 can control the operation of the first device 100 to correspond to the input information.
  • the input module 140 may include hardware-type physical keys (e.g., buttons, dome switches, jog wheels, or jog switches located on at least one of the front, back, and sides of the device) and software-type touch keys.
  • the touch key may consist of a virtual key, soft key, or visual key displayed on the touch-screen-type display 130 through software processing, or of a touch key placed on a part other than the touch screen. The virtual key or visual key can be displayed on the touch screen in various forms, for example, as graphics, text, icons, video, or a combination thereof.
  • the processor 150 may be implemented with a memory that stores data for an algorithm for controlling the operation of the components in the first device 100, or a program that reproduces the algorithm, and with at least one processor (not shown) that performs the above-described operations using the data stored in the memory. At this time, the memory and the processor may each be implemented as separate chips, or the memory and the processor may be implemented as a single chip.
  • the processor 150 may control any one or a combination of the above-described components in order to implement, on the first device 100, the various embodiments according to the present disclosure described below with reference to FIGS. 3 to 7.
  • first, the processor may obtain one or more scanned images output by scanning the patient's body (S310).
  • for example, the first device 100 may receive a computed tomography (CT) image obtained by scanning the patient's body from a hospital server. Specifically, the first device 100 may transmit, to the hospital server (or a device controlled by the hospital), a signal requesting a CT image of a specific patient that contains an image of the specific surgical object (e.g., the target object) on which surgery will be performed.
  • a second device (i.e., an external device controlled by the hospital) may scan the patient's body to obtain one or more CT images.
  • the hospital server (or cloud server) connected to the second device may transmit one or more CT images of a specific patient including an image of the specific surgical subject to the device according to a request signal from the first device 100.
  • as another example, the first device 100 may receive a CT image including an image of the specific surgical target from a CT imaging device connected to the first device 100 by wire or wirelessly.
  • the first device 100 may receive input of the type of specific surgery to be performed on the patient through the first UI (S320).
  • the first UI may be shown as shown in FIG. 5.
  • the first device 100 may receive, through the UI element 500 on the first UI, a selection of the type of specific surgery (that is, the specific organ/blood vessel/fatty tissue on which the surgery will be performed).
  • the first device 100 may obtain information related to organs and blood vessels related to the type of specific surgery entered (S330). And, based on the acquired information, the first device 100 can segment and identify images for each organ and blood vessel on one or more scanned images through an AI model (S340). S330 and S340 will be described in more detail with reference to FIG. 4.
  • the first device 100 may generate 3D modeling data for each of the identified organs and blood vessels using images of each identified organ and blood vessel (S350).
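As a minimal sketch of S350 (the data layout and function names are assumptions; an actual pipeline would typically use a surface-extraction algorithm such as marching cubes), 3D modeling data can be approximated by extracting the boundary voxels of a binary organ mask:

```python
# Illustrative stand-in for generating 3D modeling data from a mask:
# keep only voxels that have at least one empty 6-neighbour, i.e. the
# outer shell of the segmented organ.
def boundary_voxels(mask):
    """mask: set of occupied (x, y, z) voxels; returns the shell voxels."""
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return {v for v in mask
            if any((v[0] + dx, v[1] + dy, v[2] + dz) not in mask
                   for dx, dy, dz in nbrs)}

cube = {(x, y, z) for x in range(3) for y in range(3) for z in range(3)}
shell = boundary_voxels(cube)   # every voxel of the 3x3x3 cube except the centre
```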
  • the first device 100 may provide the user with a third UI for inspecting the location, size, and shape of images for each of the organs and blood vessels identified from one or more scanned images.
  • the first device 100 can modify the images for each organ and blood vessel according to the inspection results input by the user through the third UI.
  • the third UI can specify inspection items based on an organ/blood vessel database defined for each surgery type.
  • for example, organs related to stomach cancer may include the liver, abdomen, spleen, etc.; the detailed blood vessels that make up the arteries related to stomach cancer may include the AORTA, CHA, SA, PHA, etc.; and the detailed blood vessels that form the veins related to stomach cancer may include the PV, SV, GCT, SMV, etc.
  • the third UI may include at least one of a first tool for moving the position of the image for each of the identified organs and blood vessels, a second tool for deleting an image (e.g., an eraser tool), or a third tool for creating an additional image (e.g., a brush tool).
  • the user may erase liver modeling data incorrectly predicted by the first device 100 using an eraser tool.
  • the user can add omitted liver modeling data using the brush tool.
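The eraser/brush behavior described above can be sketched as simple edits to a binary mask; this is an illustrative assumption about the tools' effect on the underlying data, not the disclosed implementation:

```python
def apply_tool(mask, region, tool):
    """Clear ("eraser") or fill ("brush") the given voxels of a binary mask."""
    value = 0 if tool == "eraser" else 1
    for voxel in region:
        mask[voxel] = value
    return mask

mask = {(0, 0, 0): 1, (1, 0, 0): 1}        # mis-predicted liver voxels
apply_tool(mask, [(1, 0, 0)], "eraser")    # erase the wrongly predicted voxel
apply_tool(mask, [(2, 0, 0)], "brush")     # add an omitted voxel
```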
  • the user's inspection criteria may be based on whether the inspection items defined for each type of surgery are properly confirmed and whether the structure is anatomically acceptable; that is, the user may refer to human anatomy to check whether the inspection items have the correct size and location.
  • the user can inspect the modeling by considering the various branch points of arterial blood vessels.
  • the first device 100 may generate a 3D model for performing a specific surgery by matching the generated 3D modeling data (S360).
  • the first device 100 may adjust the location of the generated 3D modeling data based on the patient's movement or the location of blood vessels commonly appearing on one or more CT images.
  • the second device may acquire a plurality of CT images by scanning the locations of organs related to stomach cancer two or more times, and transmit the acquired plurality of CT images to the first device 100.
  • the first device 100 may match a plurality of 3D modeling data (that is, 3D modeling data for a plurality of organs related to stomach cancer) generated based on blood vessels commonly identified in a plurality of CT images.
  • specifically, the first device 100 may calculate a difference value between the positions of the blood vessels commonly identified in the plurality of CT images, and may adjust the locations of the other organs/blood vessels (e.g., the detailed blood vessels constituting the artery shown in (b) of FIG. 7) by applying the difference value.
  • the first device 100 may match the modeling data for each organ/blood vessel whose positions have been adjusted to create a 3D model for the progress of gastric cancer surgery (e.g., like the 3D model shown in (c) of FIG. 7).
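A minimal sketch of this alignment step, assuming the difference value is a simple translation estimated from the centroid of a blood vessel visible in both scans (all coordinates and names below are invented for illustration):

```python
def centroid(points):
    """Mean (x, y, z) of a list of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align(common_a, common_b, other_b):
    """Shift structures from scan B into scan A's frame using the centroid
    difference of a blood vessel commonly visible in both scans."""
    ca, cb = centroid(common_a), centroid(common_b)
    d = tuple(ca[i] - cb[i] for i in range(3))
    return [tuple(p[i] + d[i] for i in range(3)) for p in other_b]

aorta_scan1 = [(0, 0, 0), (0, 0, 2)]
aorta_scan2 = [(1, 0, 0), (1, 0, 2)]   # same vessel, offset by (1, 0, 0)
liver_scan2 = [(5, 5, 5)]              # organ to carry into scan 1's frame
```

Applying `align` moves the scan-2 liver by (-1, 0, 0), so all modeling data shares one coordinate frame before matching.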
  • the first device 100 may generate a simulation for the progress of a specific surgery (i.e., virtual mock surgery) using the generated 3D model.
  • FIG. 4 is a flowchart illustrating a method for segmenting/identifying organs and blood vessels on a scanned image according to the present disclosure. That is, Figure 4 is a flowchart for specifically explaining S330 and S340.
  • the first device 100 may build or load a database storing information on organs and blood vessels corresponding to a plurality of surgery types (S410).
  • the first device 100 may build a database by collecting information on organs/blood vessels/other tissues corresponding to a specific surgery. Additionally, the first device 100 may load (or receive) a database built based on information on organs/blood vessels/other tissues corresponding to a specific surgery (eg, stomach cancer surgery) from an external device. Additionally, the database may be provided in the memory 110.
  • for example, the database may include organs related to the stomach cancer surgery (e.g., liver, abdomen, spleen, gallbladder, and pancreas), the detailed blood vessels forming the arteries of the relevant organs (e.g., a total of 16 blood vessels, including the AORTA, CT, CHA, SA, PHA, and GDA), the detailed blood vessels forming the veins of the relevant organs (e.g., a total of 12 blood vessels, including the PV, SV, GCT, SMV, LGV, and LGEV), and other tissue (e.g., skin, rib, abdominal wall, etc.).
  • the first device 100 may obtain information related to organs and blood vessels related to a specific type of surgery input from the database (S420).
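A minimal sketch of such a database lookup keyed by surgery type (the Python structure and function names are assumptions; the entries echo the abridged stomach-cancer example in this disclosure):

```python
# Hypothetical organ/vessel database for S410/S420; real entries would
# list the full vessel sets described in the text.
SURGERY_DB = {
    "stomach_cancer": {
        "organs":   ["liver", "abdomen", "spleen", "gallbladder", "pancreas"],
        "arteries": ["AORTA", "CT", "CHA", "SA", "PHA", "GDA"],
        "veins":    ["PV", "SV", "GCT", "SMV", "LGV", "LGEV"],
    },
}

def structures_for(surgery_type):
    """All organs and detailed vessels to segment for a given surgery type."""
    info = SURGERY_DB[surgery_type]
    return info["organs"] + info["arteries"] + info["veins"]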
  • next, the first device 100 may select, on the second UI, an algorithm module capable of identifying images of organs and blood vessels corresponding to the input type of specific surgery (S430).
  • the AI model may include a pre-trained algorithm module for each type of surgery, each module capable of identifying images of the organs and blood vessels corresponding to that surgery type.
  • here, an algorithm module refers to an algorithm that the AI model has learned in order to perform a specific operation.
  • that is, the AI model is trained to identify images of the organs and blood vessels corresponding to multiple types of surgery, and may include a previously learned algorithm module for each surgery type.
  • the user can select, on the second UI, the algorithm module corresponding to a specific surgery type from among the algorithm modules pre-trained for each surgery type. Accordingly, the AI model can identify the organs, blood vessels, and other tissues related to that surgery type using the selected algorithm module.
  • the second UI may be implemented as shown in FIG. 6.
  • the second UI may include UI elements 610 and 620 that can select a pre-learned algorithm module for each type of surgery. The user can select a specific algorithm module through UI elements 610 and 620.
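The per-surgery module selection of S430 can be sketched as a registry of pre-trained modules keyed by surgery type, mirroring the choice made through UI elements 610 and 620. The class and registry names here are assumptions for illustration, not the disclosure's API.

```python
# Illustrative sketch of S430: algorithm modules pre-trained per surgery
# type, selected by the surgery type chosen on the second UI.
class AlgorithmModule:
    def __init__(self, surgery_type, targets):
        self.surgery_type = surgery_type  # surgery the module was trained for
        self.targets = targets            # organ/vessel classes it can identify

MODULE_REGISTRY = {
    "stomach_cancer": AlgorithmModule(
        "stomach_cancer", ["liver", "stomach", "spleen", "AORTA", "PV"]),
    "colorectal_cancer": AlgorithmModule(
        "colorectal_cancer", ["colon", "SMA", "SMV"]),
}

def select_module(surgery_type):
    """Return the pre-trained module for the surgery type chosen on the UI."""
    return MODULE_REGISTRY[surgery_type]

module = select_module("stomach_cancer")
```

The registry keeps the selection logic independent of any one model, so adding a new surgery type only means registering another pre-trained module.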
  • the first device 100 may segment and identify images of each organ and blood vessel in the one or more CT images through the AI model using the selected algorithm module (S450).
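Step S450 can be sketched as applying the selected module to the CT volume to produce one labeled mask per organ/vessel class. A real implementation would run the trained AI model; the placeholder function and names below are assumptions that keep only the data flow.

```python
# Toy stand-in for S450: one boolean mask per organ/vessel class, aligned
# with the CT volume. A trained algorithm module would infer these masks.
import numpy as np

def segment_ct_volume(ct_volume, targets):
    """Return one boolean mask per target class, same shape as ct_volume."""
    masks = {}
    for name in targets:
        # Placeholder "prediction": emit an empty mask per class.
        masks[name] = np.zeros(ct_volume.shape, dtype=bool)
    return masks

volume = np.zeros((4, 64, 64), dtype=np.float32)  # e.g., 4 CT slices of 64x64
masks = segment_ct_volume(volume, ["liver", "stomach", "AORTA"])
```

Keeping one mask per class, rather than a single multi-label volume, makes the later per-organ 3D modeling step a straightforward per-mask surface reconstruction.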
  • the disclosed embodiments may be implemented in the form of a recording medium that stores instructions executable by a computer. Instructions may be stored in the form of program code, and when executed by a processor, may create program modules to perform operations of the disclosed embodiments.
  • the recording medium may be implemented as a computer-readable recording medium.
  • Computer-readable recording media include all types of recording media storing instructions that can be decoded by a computer. Examples include Read Only Memory (ROM), Random Access Memory (RAM), magnetic tape, magnetic disks, flash memory, and optical data storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radiology & Medical Imaging (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Robotics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Gynecology & Obstetrics (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure relates to an apparatus and method for generating a 3D model of an organ and a blood vessel according to the type of surgery. The apparatus comprises a communication module, a memory, and at least one processor, wherein the processor may: obtain one or more scan images produced by scanning a patient's body; when the specific type of surgery to be performed on the patient is input through a first user interface (UI), obtain information on the organs and blood vessels associated with the input surgery type; segment and identify, based on the obtained information and through an artificial intelligence (AI) model, an image of each organ and blood vessel from the scan images; generate, using the identified image of each organ and blood vessel, 3D modeling data for each organ and blood vessel; and match the generated 3D modeling data to produce a 3D model for performing the specific surgery.
PCT/KR2023/007105 2022-05-30 2023-05-24 Apparatus and method for generating a 3D model of organs and blood vessels according to the type of surgery WO2023234626A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2022-0066185 2022-05-30
KR1020220066185A KR20230167194A (ko) 2022-05-30 2022-05-30 Method, apparatus, and program for generating a 3D model of organs and blood vessels according to the type of surgery

Publications (1)

Publication Number Publication Date
WO2023234626A1 true WO2023234626A1 (fr) 2023-12-07

Family

ID=89025293

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/007105 WO2023234626A1 (fr) Apparatus and method for generating a 3D model of organs and blood vessels according to the type of surgery

Country Status (2)

Country Link
KR (1) KR20230167194A (fr)
WO (1) WO2023234626A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100312094A1 (en) * 2009-06-08 2010-12-09 Michael Guttman Mri-guided surgical systems with preset scan planes
KR20120111871A (ko) * 2011-03-29 2012-10-11 Samsung Electronics Co., Ltd. Method and apparatus for generating an image of a body organ using a three-dimensional model
US20140071125A1 (en) * 2012-09-11 2014-03-13 The Johns Hopkins University Patient-Specific Segmentation, Analysis, and Modeling from 3-Dimensional Ultrasound Image Data
US20150127316A1 (en) * 2011-03-30 2015-05-07 Mordechai Avisar Method and system for simulating surgical procedures
KR20160121740A (ko) * 2015-04-10 2016-10-20 Electronics and Telecommunications Research Institute Method and apparatus for providing surgery-related anatomical information

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010178157A (ja) 2009-01-30 2010-08-12 Nec Corp Base station apparatus and processing method thereof, and mobile communication system and processing method thereof
JP6783757B2 (ja) 2014-11-12 2020-11-11 Materialise Nv Computer network system for providing patient-specific medical care to patients


Also Published As

Publication number Publication date
KR20230167194A (ko) 2023-12-08

Similar Documents

Publication Publication Date Title
Pepe et al. A marker-less registration approach for mixed reality–aided maxillofacial surgery: a pilot evaluation
WO2016125978A1 Method and apparatus for displaying a medical image
WO2021137454A1 Artificial-intelligence-based method and system for analyzing a user's medical information
KR102146672B1 Method and program for providing feedback on surgical outcomes
WO2019132244A1 Method and program for generating surgical simulation information
JP7453708B2 System and method for displaying augmented anatomical features
KR102628324B1 Apparatus and method for analyzing surgical outcomes through an artificial-intelligence-based user interface
Hofman et al. First‐in‐human real‐time AI‐assisted instrument deocclusion during augmented reality robotic surgery
WO2023234626A1 Apparatus and method for generating a 3D model of organs and blood vessels according to the type of surgery
WO2023234492A1 Method, device, and program for implementing a patient-specific 3D surgical simulation
KR102213412B1 Method, apparatus, and program for generating a pneumoperitoneum model
WO2021206517A1 Intraoperative vascular navigation method and system
WO2021149918A1 Bone age estimation method and apparatus
WO2020159276A1 Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical images
CN114927229A Surgical simulation method and apparatus, electronic device, and storage medium
JP2023520741A Real-time tracking method of medical devices from echocardiographic images for holographic remote proctoring
JP6862286B2 Information processing apparatus, information processing method, information processing system, and program
US20220254464A1 Communication system and method
WO2023018138A1 Device and method for generating a virtual pneumoperitoneum model of a patient
WO2023136616A1 Apparatus and method for providing a virtual-reality-based surgical environment for each surgical situation
WO2022173232A2 Method and system for predicting the risk of lesion occurrence
KR20190133424A Method and program for providing feedback on surgical outcomes
WO2024111846A1 Method and device for detecting intraoperative bleeding through a spatiotemporal feature fusion model
WO2022265345A1 System for outputting a fetal image model using a 3D printer, and device and method for generating a file for 3D printer output
WO2023229415A1 Method for providing an augmented reality image and device for providing an augmented reality (AR) image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23816287

Country of ref document: EP

Kind code of ref document: A1