WO2019164274A1 - Method and device for generating learning data

Method and device for generating learning data

Info

Publication number
WO2019164274A1
Authority
WO
WIPO (PCT)
Prior art keywords
surgical
information
image
surgery
polygon
Prior art date
Application number
PCT/KR2019/002092
Other languages
English (en)
Korean (ko)
Inventor
이종혁
형우진
양훈모
김호승
허성환
최민국
이재준
Original Assignee
(주)휴톰
Priority date
Filing date
Publication date
Priority claimed from KR1020180122454A (external priority, KR102014351B1)
Application filed by (주)휴톰
Publication of WO2019164274A1

Classifications

    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/30 Surgical robots
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • G06N 20/00 Machine learning
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 3/00 Geometric image transformation in the plane of the image

Definitions

  • the present invention relates to a method and apparatus for generating learning data.
  • Open surgery refers to surgery in which the medical staff directly see and touch the part to be treated.
  • Minimally invasive surgery is also known as keyhole surgery; laparoscopic surgery and robotic surgery are typical examples.
  • In laparoscopic surgery, small holes are made in the necessary parts of the body without opening it, and a laparoscope fitted with a special camera and surgical tools are inserted into the body and observed through a video monitor.
  • Microsurgery is performed using a laser or special instruments.
  • Robotic surgery refers to minimally invasive surgery performed using a surgical robot.
  • Radiation surgery refers to surgical treatment performed with radiation or laser light from outside the body.
  • Deep learning is defined as a set of machine learning algorithms that attempt to achieve a high level of abstraction (summarizing key content or functions from large amounts of data or complex data) through a combination of several nonlinear transformations. Broadly, deep learning can be seen as a field of machine learning that teaches computers to think the way people do.
  • Surgery is performed using a variety of methods, in addition to the case where the medical staff perform the operation directly.
  • For example, a robotic surgery system using a surgical robot, a virtual surgery system that provides simulation of virtual surgery in a virtual environment identical to the actual surgery, and a remote surgery system that can guide the surgery remotely during the operation may be used. Accordingly, there is a need for a method that allows the surgical procedure to proceed using these various methods during actual surgery.
  • The problem to be solved by the present invention is to provide a surgical information construction method and apparatus.
  • The problem to be solved by the present invention is to provide an artificial data generation method and apparatus.
  • An object of the present invention is to provide a method and apparatus for generating artificial data from actual surgical images.
  • The problem to be solved by the present invention is to provide a method and apparatus for generating, through learning, artificial data close to an actual surgical image.
  • An object of the present invention is to provide a method and apparatus for generating learning data based on a surgical image.
  • An object of the present invention is to provide a method and apparatus for recognizing the various pieces of surgical information included in a surgical image and expressing the relationships between the recognized pieces of surgical information.
  • The problem to be solved by the present invention is to provide a method and apparatus for generating training data based on each piece of surgical information recognized from a surgical image, and for building a learning model using the generated training data.
  • An object of the present invention is to provide a method and apparatus for providing blood vessel information using a blood vessel model.
  • The problem to be solved by the present invention is to provide a method and apparatus for constructing training data using the polygon information in a 3D blood vessel model and learning to classify blood vessels.
  • The problem to be solved by the present invention is to provide a method and apparatus for identifying the vessel type and vessel hierarchy of a new patient's vessel information through the vessel classification learning model.
  • A relay server for constructing surgical information while relaying communication between clients receives, from a first client, actual surgery information generated based on an actual surgery image; transmits the received actual surgery information to a second client; receives, from the second client, virtual surgery information generated by performing virtual surgery based on the actual surgery information; and transmits the virtual surgery information to the first client.
  • A method of generating artificial data performed by a computer includes obtaining a background image and an object image, and generating artificial data by combining the background image and the object image.
  • Here, the background image includes a specific region inside the body captured by an endoscopic camera, and the object image includes a surgical tool or blood; the artificial data is generated so as to approximate an actual surgical image, based on the arrangement relationship between the surgical tool or blood in the object image and the specific region inside the body in the background image.
  • A method of generating training data based on a surgical image, performed by a computer, may include obtaining a surgical image including a plurality of image frames; recognizing surgical recognition information from each of the plurality of image frames; and generating, for each of the plurality of image frames, relational representation information representing the relationships between the surgical elements included in the surgical recognition information.
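  • Purely as an illustration (the element and relation vocabulary below is invented, not the patent's), the relational representation information for one image frame could be organized as the recognized surgical elements plus the relations between them:

```python
# Hypothetical relational representation for a single image frame: the surgical
# elements recognized in the frame and the pairwise relations between them.
frame_relations = {
    "elements": ["grasper", "liver", "bleeding"],  # recognized surgical elements
    "relations": [
        ("grasper", "touches", "liver"),           # element-to-element relations
        ("bleeding", "near", "liver"),
    ],
}
```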
  • A method of providing blood vessel information may include obtaining the vessel polygons constituting a blood vessel in a blood vessel model; constructing learning data based on the information of each vessel polygon and the connection information between vessel polygons; and classifying the blood vessels based on the learning data.
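  • A minimal sketch of how such learning data might be built from a vessel mesh follows; it assumes a triangle mesh given as vertex and face arrays, and the feature choice (centroid plus mean offset to adjacent polygons) is an illustrative assumption, not the patent's method:

```python
import numpy as np

def polygon_training_samples(vertices, faces):
    """vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices."""
    centroids = vertices[faces].mean(axis=1)  # (F, 3) one centroid per polygon
    # Connection information: two polygons sharing an edge are adjacent.
    edge_to_faces = {}
    for f, (a, b, c) in enumerate(faces):
        for e in (tuple(sorted((a, b))), tuple(sorted((b, c))), tuple(sorted((a, c)))):
            edge_to_faces.setdefault(e, []).append(f)
    neighbors = [[] for _ in range(len(faces))]
    for fs in edge_to_faces.values():
        if len(fs) == 2:
            neighbors[fs[0]].append(fs[1])
            neighbors[fs[1]].append(fs[0])
    # One training sample per polygon: its own position plus the mean offset
    # to its adjacent polygons, ready for a vessel-type classifier.
    samples = []
    for f, nbrs in enumerate(neighbors):
        offset = centroids[nbrs].mean(axis=0) - centroids[f] if nbrs else np.zeros(3)
        samples.append(np.concatenate([centroids[f], offset]))
    return np.asarray(samples)  # (F, 6)
```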
  • Surgical information can be collected from various clients in real time during an actual operation, so that the surgical medical staff can proceed with the surgical procedure effectively.
  • Since the operation can be performed using a variety of surgical information and surgical systems, rather than relying on the surgical medical staff alone, a more accurate operation is possible.
  • The surgical information can be exchanged between clients through the relay server.
  • Since each client only needs to communicate through the relay server, communication remains simple even as the number of clients increases, and no additional load occurs; thus, increasing the number of clients does not affect performance.
  • Artificial data close to an actual surgical image can be generated through learning.
  • Learning data can be constructed using the artificial data.
  • A sufficient learning effect can be obtained by providing artificially generated surgical images together with actual surgical images to approaches such as artificial intelligence, deep learning, and machine learning, which require large amounts of learning data.
  • Artificially generated surgical images containing various surgical information that cannot be obtained during actual surgery can be secured.
  • According to the present invention, more meaningful information can be obtained from an image frame by defining the various surgical elements that can be recognized from it, and by grasping not only the information on each surgical element itself but also the relationship information between the surgical elements.
  • Rather than generating training data for building a single learning model, the present invention provides base data for building various learning models.
  • The amount of learning data can be increased to obtain improved learning results.
  • According to the present invention, geometric and hierarchical information about blood vessels can be accurately grasped by building a learning model for classifying blood vessels.
  • When a simulation is performed using this model, more accurate surgical simulation is possible because the vessel information about the arteries and veins located in and around the surgical target can be accurately provided.
  • The present invention is effective in classifying and stratifying blood vessels by constructing training data based on the connection information between each polygon and its adjacent polygons.
  • By using polygon position information derived from a reference vessel point and a reference organ, a more accurate vessel classification learning model can be built, through which a higher recognition rate can be obtained.
  • FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
  • FIG. 2 is a view schematically showing the configuration of a surgical information building system according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a method of building surgery information using a relay server according to an embodiment of the present invention.
  • FIG. 4 is a diagram schematically showing the configuration of an apparatus 300 for performing surgery information construction according to an embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating a method of generating artificial data according to an embodiment of the present invention.
  • FIG. 6 is a view for explaining a method of generating artificial data based on learning according to an embodiment of the present invention.
  • FIGS. 7 to 10 are diagrams for explaining embodiments of a method for generating artificial data based on learning according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
  • FIG. 12 is a diagram schematically showing the configuration of an apparatus 700 for performing a method of generating artificial data according to an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
  • FIG. 14 is a flowchart schematically illustrating a method of generating learning data based on a surgical image according to an embodiment of the present invention.
  • FIG. 15 is a diagram illustrating a process of generating relational representation information by recognizing surgical recognition information from an image frame according to an embodiment of the present invention.
  • FIG. 16 is a diagram illustrating relational representation information generated for a plurality of image frames in a surgical image according to an embodiment of the present invention.
  • FIG. 17 is a diagram illustrating an example of a process of performing learning based on relational representation information generated for a plurality of image frames in a surgical image according to an embodiment of the present invention.
  • FIG. 18 is a diagram schematically illustrating a configuration of an apparatus 400 for performing a method for generating learning data based on a surgical image according to an embodiment of the present invention.
  • FIG. 19 is a flowchart schematically illustrating a method of providing blood vessel information using a blood vessel model according to an embodiment of the present invention.
  • FIGS. 20 and 21 are diagrams for explaining a process of deriving connection information between a vascular polygon and an adjacent polygon according to an embodiment of the present invention.
  • FIG. 22 is a diagram illustrating a process of calculating the midpoint of an OBB (oriented bounding box).
  • FIG. 23 is a diagram illustrating a learning process for classifying blood vessel types according to one embodiment of the present invention.
  • FIGS. 24 and 25 are diagrams showing an example of blood vessel classification derived as a learning result according to an embodiment of the present invention.
  • FIG. 26 is a diagram schematically illustrating a configuration of an apparatus 200 for performing a method of providing blood vessel information using a blood vessel model according to an embodiment of the present invention.
  • A “part” or “module” refers to software or a hardware component such as an FPGA or ASIC, and a “part” or “module” performs certain roles. However, a “part” or “module” is not limited to software or hardware.
  • A “unit” or “module” may be configured to reside in an addressable storage medium or may be configured to run on one or more processors.
  • Thus, as an example, a “part” or “module” may include components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functions provided within the components and “parts” or “modules” may be combined into a smaller number of components and “parts” or “modules”, or further separated into additional components and “parts” or “modules”.
  • a computer includes all the various devices capable of performing arithmetic processing to provide a result to a user.
  • For example, a computer can be a desktop PC or a notebook, as well as a smartphone, a tablet PC, a cellular phone, a PCS (Personal Communication Service) phone, a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm PC, a Personal Digital Assistant (PDA), and the like.
  • When a head mounted display (HMD) device includes a computing function, the HMD device may be a computer.
  • the computer may correspond to a server that receives a request from a client and performs information processing.
  • FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
  • the robotic surgical system includes a medical imaging apparatus 10, a server 100, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34.
  • the medical imaging apparatus 10 may be omitted in the robot surgery system according to the disclosed embodiment.
  • The surgical robot 34 includes an imaging device 36 and a surgical instrument 38.
  • Robotic surgery is performed by the user controlling the surgical robot 34 using the control unit 30. In one embodiment, the robotic surgery may be performed automatically by the controller 30 without the user's control.
  • the server 100 is a computing device including at least one processor and a communication unit.
  • the controller 30 includes a computing device including at least one processor and a communication unit.
  • the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
  • the imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera device and is used to photograph an object, that is, a surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled with a surgical arm of the surgical robot 34.
  • The image captured by the imaging device 36 is displayed on the display 32.
  • The surgical robot 34 includes one or more surgical tools 38 that can perform cutting, clipping, fixing, grabbing, and similar operations on the surgical site.
  • The surgical tool 38 is used in combination with the surgical arm of the surgical robot 34.
  • the controller 30 receives information necessary for surgery from the server 100 or generates information necessary for surgery and provides the information to the user. For example, the controller 30 displays the information necessary for surgery, generated or received, on the display 32.
  • the user performs the robot surgery by controlling the movement of the surgical robot 34 by manipulating the control unit 30 while looking at the display 32.
  • the server 100 generates information necessary for robotic surgery using medical image data of an object previously photographed from the medical image photographing apparatus 10, and provides the generated information to the controller 30.
  • the controller 30 displays the information received from the server 100 on the display 32 to provide the user, or controls the surgical robot 34 by using the information received from the server 100.
  • The imaging means that can be used as the medical imaging apparatus 10 is not limited; for example, various medical image acquisition means such as CT, X-ray, PET, and MRI may be used.
  • the surgical image obtained by the imaging device 36 is transmitted to the controller 30.
  • the relay server may mean the server 100 of FIG. 1 or may be separately installed and operate in conjunction with the server 100 of FIG. 1.
  • The medical staff may perform the surgery by various methods, such as using a surgical robot or a simulator or guiding the operation remotely, as well as performing the surgery directly in the operating room. Accordingly, there is a need for a method of connecting the various devices used during surgery and transmitting and receiving information between them, in order to provide more effective and optimized surgery to the patient. Therefore, the present invention provides a method of connecting each device using a relay server and, through it, collecting and utilizing information.
  • FIG. 2 is a view schematically showing the configuration of a surgical information building system according to an embodiment of the present invention.
  • the surgical information building system may include a relay server 200 and a plurality of clients.
  • In FIG. 2, the first client 210, the second client 220, and the third client 230 are described as one example, but the present invention is not limited thereto.
  • The system may be configured to include fewer or more clients than shown in FIG. 2.
  • the relay server 200 is a device that relays communication between a plurality of clients, and may perform a function such as, for example, a chat server.
  • the relay server 200 may relay a surgery by connecting a robot surgery system using a surgery robot, a virtual surgery system using a simulator, a remote system that guides the surgery remotely during surgery, and the like.
  • the relay server 200 may build learning data in response to receiving various surgical information from a plurality of clients.
  • the relay server 200 may perform learning based on the learning data and calculate an optimized surgical process, or may provide the learning data to an external server.
  • the external server may calculate the optimized surgical process by performing the learning based on the learning data provided from the relay server 200.
  • the first client 210 may be an apparatus for acquiring an actual surgery image and generating actual surgery information.
  • the first client 210 may be a device provided in the operating room (operating site).
  • it may correspond to the surgical robot 34 or the control unit 30 shown in FIG. 1, or may be provided in the operating room as a separate device and connected to the surgical robot 34 or the control unit 30.
  • the first client 210 may access the relay server 200 and transmit the actual surgery information generated at the time of operation to the relay server 200. In addition, the first client 210 may receive various information generated by another client from the relay server 200.
  • the actual surgery image may mean data obtained as the actual medical staff performs the surgery.
  • it may be an image of an actual surgical scene actually performed by the surgical robot 34.
  • the actual surgical image is data recorded on the surgical site and the operation during the actual surgical procedure.
  • the actual surgery information is information obtained from an actual surgery image, and may include, for example, surgical site information of a patient, surgical tool information, and information about a camera.
  • the surgical site refers to a body part of the patient where the actual surgery is performed, and may be, for example, an organ or a blood vessel.
  • the surgical tool may be a tool used during surgery, for example, a tool required for performing a robotic operation or a consumable used during surgery.
  • the camera is a device for photographing an actual surgical scene.
  • the camera may be installed in an operating room to photograph a surgical scene, or may be provided in an endoscope that enters a patient's body when performing a robotic surgery to photograph an internal body.
  • The first client 210 acquires an actual surgery image from a camera included in the surgical robot or from an external device, recognizes the surgical site, the surgical tools, and the camera from the acquired actual surgery image, and generates actual surgery information based on them. That is, the first client 210 may generate, from the actual surgery image, the type and position information of the surgical site, the type, position, direction, and movement information of the surgical tools, the position and movement direction of the camera, and the like. Thereafter, the first client 210 may transmit the generated actual surgery information to the relay server 200 and provide it to the other clients connected to the relay server 200.
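  • Purely as an illustration (the field names and types below are assumptions, not the patent's data format), the actual surgery information assembled by the first client could look like this:

```python
from dataclasses import dataclass

@dataclass
class ActualSurgeryInfo:
    # Surgical site: type and position recognized from the actual surgery image.
    site_type: str
    site_position: tuple
    # Surgical tool: type, position, direction, and movement.
    tool_type: str
    tool_position: tuple
    tool_direction: tuple
    tool_motion: str
    # Camera: position and movement direction.
    camera_position: tuple
    camera_motion: tuple
```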
  • the second client 220 may be an apparatus for generating virtual surgery information by performing virtual surgery.
  • The second client 220 may be a device provided in the operating room (operation site) or at a remote location; for example, it may be a simulator capable of rehearsing or simulating surgery on a virtual body model modeled identically to the surgical subject.
  • Here, virtual surgery refers to rehearsal or simulation performed on a virtual body model implemented to match the physical state of the surgical subject, based on the subject's medical image data (e.g., CT, MRI, or PET images).
  • the virtual surgery information is information obtained based on the image data on which the rehearsal or simulation is performed on the virtual body model in the virtual space, and may include simulation data.
  • the simulation data may include surgical site information, surgical tool information, and information about a camera of a surgical subject. That is, the virtual surgery information may include all the information included in the data recorded for the surgical operation performed on the virtual body model.
  • the virtual body model may be 3D modeling data generated based on medical image data previously photographed inside the body of the subject.
  • the model may be modeled in accordance with the body of the surgical subject, and may be corrected to the same state as the actual surgical state.
  • the second client 220 may access the relay server 200 and transmit the virtual surgery information generated by performing the virtual surgery to the relay server 200.
  • the second client 220 may receive various information generated by another client from the relay server 200.
  • the second client 220 receives the actual surgery information generated by the first client 210 from the relay server 200 and performs virtual surgery based on the received actual surgery information to perform virtual surgery information. Can be generated.
  • Since the actual surgery information (e.g., position, direction, and movement information of the surgical tool) obtained during the actual surgery at the first client 210 can be transmitted in real time through the relay server 200, the second client 220 can provide the surgical medical staff with more accurate virtual surgery information that reflects the actual surgery information in real time. Therefore, since the surgical medical staff can use the information of both the first client 210 and the second client 220 through the relay server 200, the actual surgery on the surgical subject can be performed effectively.
  • the third client 230 may be a device for generating surgery guide information corresponding to the actual surgery image.
  • The third client 230 may be a device provided in the operating room (operation site) or at a remote location; for example, when a specific event occurs during the operation, it may generate surgical guide information in response.
  • Other medical staff may access the third client 230 at the request of the medical staff performing the surgery in the operating room (operation site). In this case, by communicating with the third client 230 through the relay server 200, the other medical staff can provide information about the surgery in real time to the medical staff performing it.
  • The third client 230 may access the relay server 200 to transmit the surgical guide information it generates to the relay server 200, and may receive various information generated by other clients from the relay server 200.
  • The third client 230 may monitor the surgical situation according to the information (i.e., actual surgery information and virtual surgery information) provided by the other clients 210 and 220 through the relay server 200.
  • the third client 230 may recognize whether a specific event occurs during the surgery based on the monitoring, and generate surgery guide information corresponding thereto.
  • the third client 230 may automatically recognize whether a specific event occurs and generate surgery guide information determined according to the specific event.
  • the medical staff may monitor the actual surgical situation through the third client 230 to recognize that a specific event occurs, and input surgery guide information corresponding thereto.
  • In addition to the above-described first, second, and third clients 210, 220, and 230, the relay server 200 can connect additional clients as needed for the operation, and can control the transmission and reception of information between these clients.
  • Since the relay server 200 relays the communication between the clients, intermediate intervention can be performed through the relay server 200 during the actual operation.
  • The relay server 200 may monitor the actual surgical process in real time.
  • Since the relay server 200 can obtain surgical information from all the clients, it can derive information about the entire surgical procedure. Accordingly, the relay server 200 may record the entire surgical procedure in order using this information, forming a history of the corresponding surgery.
  • FIG. 3 is a flowchart illustrating a method of building surgery information using a relay server according to an embodiment of the present invention. The method of FIG. 3 may be performed in the surgical information building system shown in FIG. 2.
  • First, the relay server 200 may organize the at least one client required for an operation into a group.
  • For example, the relay server 200 may group the clients required for an actual operation around a specific patient, or may receive a group creation request signal directly from a client and create the group in response.
  • When a client connects, the relay server 200 may automatically determine whether the client belongs to the corresponding group and allow it to access the group.
  • the relay server 200 may transmit and receive information between clients in the group as the surgery proceeds. A detailed operation process thereof will be described below.
  • the first client 210 may acquire an actual surgery image and generate real surgery information corresponding to the obtained actual surgery image (S100).
  • In detail, the first client 210 obtains an actual surgery image from a camera included in the surgical robot or from an external device, recognizes the surgical site, the surgical tools, and the camera from the acquired image, and generates actual surgery information based on them.
  • The actual surgery information may include surgical site information (e.g., type and position information of the surgical site), surgical tool information (e.g., type, position, direction, and movement information of the surgical tool), camera information (e.g., camera position and movement information), and the like.
  • the first client 210 may transmit the actual surgery information to the relay server 200 (S105), and the relay server 200 may store the actual surgery information received from the first client 210 in the database ( S110).
  • The relay server 200 may transmit the actual surgery information to at least one client (e.g., the second client 220 and/or the third client 230) in the same group (S115).
  • the second client 220 may perform virtual surgery to generate virtual surgery information (S120).
  • In detail, the second client 220 receives the actual surgery information generated by the first client 210 through the relay server 200, and generates virtual surgery information by performing virtual surgery based on the received actual surgery information.
  • the second client 220 may transmit the virtual surgery information to the relay server 200 (S125), and the relay server 200 may store the virtual surgery information received from the second client 220 in a database ( S130).
  • The relay server 200 may transmit the virtual surgery information to at least one client (e.g., the first client 210 and/or the third client 230) in the same group (S135).
  • the third client 230 may monitor the surgery situation based on the surgery information on the surgery subject received through the relay server 200, and may generate the surgery guide information based on the monitoring result (S140).
  • In detail, the third client 230 may recognize whether a specific event occurs during the surgery based on the actual surgery information of the first client 210 and the virtual surgery information of the second client 220 received from the relay server 200, and generate surgical guide information corresponding to it.
  • the third client 230 may acquire the actual surgical image through the relay server 200 and provide the same to the medical staff to directly generate the surgical guide information corresponding to the actual surgical image. In this case, the third client 230 may receive the surgical guide information from the medical staff, and transmit it to the relay server 200.
  • the third client 230 may transmit the surgical guide information to the relay server 200 (S145), the relay server 200 may store the surgical guide information received from the third client 230 in the database ( S150).
  • The relay server 200 may transmit the surgical guide information to at least one client (e.g., the first client 210 and/or the second client 220) in the same group (S155).
  • As described above, as the operation is performed on the surgical subject, the relay server 200 acts as a relay for transmitting and receiving information between the clients belonging to the group, and may store the transmitted and received information in the database.
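  • The store-and-forward pattern of steps S100 to S155 can be sketched as follows; this is a simplified assumption about the mechanics (group membership, a shared database, forwarding to all other group members), not the patent's actual protocol:

```python
from collections import defaultdict

class RelayServer:
    def __init__(self):
        self.groups = defaultdict(list)  # group id -> clients in the group
        self.database = []               # stored (group, sender, info) records

    def join(self, group_id, client):
        self.groups[group_id].append(client)

    def relay(self, group_id, sender, info):
        self.database.append((group_id, sender.name, info))  # S110/S130/S150: store
        for client in self.groups[group_id]:                 # S115/S135/S155: forward
            if client is not sender:
                client.receive(info)

class Client:
    def __init__(self, name):
        self.name, self.inbox = name, []

    def receive(self, info):
        self.inbox.append(info)

# The first client posts actual surgery information; the others receive it.
server = RelayServer()
c1, c2, c3 = Client("first"), Client("second"), Client("third")
for c in (c1, c2, c3):
    server.join("subjectA", c)
server.relay("subjectA", c1, {"tool": "grasper", "motion": "advance"})
assert c2.inbox and c3.inbox and not c1.inbox
```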
  • the relay server 200 may perform learning to build learning data and provide an optimized surgical process by using various surgical information on the operation targets stored in the database.
  • the relay server 200 may generate cue sheet data about a surgical procedure of a surgery target subject based on various surgery information received from a client (S160).
  • In detail, the relay server 200 may generate the cue sheet data for the surgical procedure of the surgical subject based on at least one of the actual surgery information of the first client 210, the virtual surgery information of the second client 220, and the surgery guide information of the third client 230.
  • the cue sheet data may refer to data recorded in order by dividing a specific surgical procedure into detailed surgical operations.
  • the detailed surgical operation may be a minimum operation unit constituting the surgical process, and may be divided by various criteria.
  • The detailed surgical operations may be divided based on the type of surgery (e.g., laparoscopic surgery, robotic surgery), the anatomical body part on which the surgery is performed, the surgical tools used, the number of surgical tools, the direction or position of the surgical tools on the screen, the movement of the surgical tools (e.g., advance/retract), and the like.
  • Since the relay server 200 can grasp the information on these division criteria through the actual surgery information, virtual surgery information, and surgery guide information stored in the database, it can accurately build cue sheet data corresponding to the actual surgery of the patient.
  • the relay server 200 may form a history of the entire surgical procedure by simply generating the cue sheet data.
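  • For illustration, cue sheet data could be an ordered list of detailed surgical operations annotated with the division criteria above; the field names here are hypothetical, not the patent's format:

```python
cue_sheet = [
    {"step": 1, "surgery_type": "robotic", "body_part": "stomach",
     "tool": "grasper", "tool_count": 1, "tool_motion": "advance"},
    {"step": 2, "surgery_type": "robotic", "body_part": "stomach",
     "tool": "scissors", "tool_count": 1, "tool_motion": "cut"},
    # ... one record per minimum operation unit, in chronological order
]
```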
  • the relay server 200 may perform step S165.
  • the relay server 200 may calculate the optimized cue sheet data corresponding to the surgical process of the surgical subject by performing the learning based on the cue sheet data (S165).
  • the relay server 200 may configure the actual surgery information, virtual surgery information, surgery guide information, and the like stored in the database as the training data, or may configure the cue sheet data itself as the training data. Thereafter, the relay server 200 may perform learning based on the learning data and calculate an optimized surgical process for the surgical target.
  • The learning method may be a machine learning method such as supervised learning, unsupervised learning, or reinforcement learning, for example, a deep learning method.
  • The relay server 200 may store the optimized cue sheet data in the database and provide it to at least one client in the group. Accordingly, the at least one client may use the received optimized cue sheet data for the surgery of the surgical subject.
  • In the above description, the relay server 200 calculates the optimized cue sheet data by performing learning based on the training data; however, depending on the embodiment, the actual surgery information, virtual surgery information, surgery guide information, and the like stored in the database (i.e., the learning data) may be provided to an external server, and the learning may be performed at the external server based on that learning data.
  • An embodiment of the present invention may implement a virtual body model of the surgical subject based on the cue sheet data generated in step S160 and/or step S165, or, conversely, may reconstruct an image of the surgery performed on the surgical subject.
  • FIG. 4 is a diagram schematically showing the configuration of an apparatus 300 for performing surgery information construction according to an embodiment of the present invention.
  • The processor 310 may include one or more cores (not shown) and a graphics processor (not shown), and/or a connection path (for example, a bus) through which it transmits and receives signals with other components.
  • the processor 310 executes one or more instructions stored in the memory 320 to perform the surgery information construction method described with reference to FIGS. 2 and 3.
  • By executing one or more instructions stored in the memory 320, the processor 310 receives the actual surgery information on the actual surgery image from the first client, transmits the actual surgery information to the second client, receives from the second client the virtual surgery information generated based on the actual surgery information, and stores the actual surgery information and the virtual surgery information.
  • The processor 310 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) for temporarily and/or permanently storing signals (or data) processed by the processor 310.
  • the processor 310 may be implemented in the form of a system on chip (SoC) including at least one of a graphic processor, a RAM, and a ROM.
  • the memory 320 may store programs (one or more instructions) for processing and controlling the processor 310. Programs stored in the memory 320 may be divided into a plurality of modules according to their functions.
  • the surgical information building method according to an embodiment of the present invention described above may be implemented as a program (or an application) and stored in a medium to be executed in combination with a computer which is hardware.
  • FIG. 5 is a flowchart illustrating a method of generating artificial data according to an embodiment of the present invention.
  • Although the method of FIG. 5 is described as being performed by a computer for convenience of description, the subject of each step is not limited to a specific device and may be any device capable of performing computing processing. That is, in the present embodiment, the computer may mean an apparatus capable of performing the artificial data generating method according to an embodiment of the present invention.
  • Referring to FIG. 5, the method may include obtaining a background image and an object image (S100), and generating artificial data by combining the background image and the object image (S110).
  • the computer may acquire a background image and an object image (S100).
  • the background image and the object image may be part or the entire image area included in the actual surgery image.
  • the actual surgery image may be data obtained by the actual medical staff performing the surgery.
  • medical staff may perform surgery directly on a patient, or perform minimally invasive surgery using a surgical robot, a laparoscope, an endoscope, or the like.
  • an actual surgical image obtained by photographing organs, surgical instruments, and the like in the body may be obtained in a surgical procedure, or data recorded on a surgical operation performed in the surgical procedure may be acquired.
  • the background image may include a specific area inside the body taken by the endoscope camera.
  • the image data may be image data obtained by photographing a specific region including organs (eg, liver, heart, uterus, brain, breast, abdomen, etc.), blood vessels, and tissues in the body.
  • the object image may be image data including surgical instruments or blood (including bleeding out of the blood vessels due to blood vessel damage).
  • the computer may acquire an object image or a background image from an actual surgical image including an object and a background (ie, a background including a specific area inside the body).
  • the computer may acquire the object image from the actual surgical image including only the object or the background image from the actual surgical image including only the background.
  • the computer may generate various artificial data using the object image or the background image obtained from the actual surgical image. Specific embodiments thereof will be described with reference to FIGS. 7 to 10.
  • the computer may acquire the actual surgical image in units of frames or in units of sequences, which are sets of consecutive frames.
  • For example, the computer may generate artificial data by acquiring an actual surgical image one frame at a time and extracting an object image including the surgical tool from it.
  • As another example, the computer may acquire an actual surgical image in sequence units consisting of a plurality of frames, extract an object image including blood or bleeding from it, and generate artificial data.
  • In this case, the computer inputs the actual surgical image including the object image in sequence units (i.e., continuous frames) to the generator described later, and accordingly the generator can generate and output artificial data in sequence units (i.e., continuous frames).
  • In other words, artificial data may be generated in units of frames or sequences according to the type of object.
  • For example, when generating artificial data including blood or bleeding, object images may be acquired in sequence units and the artificial data may likewise be generated in sequence units.
  • artificial data may be generated by acquiring an actual surgical image in a frame unit or a sequence unit as a change of an object occurs.
  • For example, the computer may acquire, in continuous frames, an actual surgery image in which a specific object changes, such as an image in which a surgical tool moves or performs a specific surgical operation.
  • the computer may continuously extract an object image including a specific object (eg, a surgical tool) from consecutive frames to generate artificial data reflecting a change in movement of the specific object (eg, a surgical tool).
  • the artificial data may be generated in a continuous frame according to a change of a specific object.
  • the computer may extract a background image from the actual surgical image in response to the movement change of a specific object (eg, a surgical tool).
  • Alternatively, the computer may continuously extract both the object image and the background image according to the change in movement of a specific object (for example, a surgical tool), and generate continuous artificial data by sequentially combining the continuous object images and background images. Therefore, by extracting and reflecting the background image corresponding to the change of the object, more artificial data can be generated.
  • the computer may generate artificial data by combining the background image and the object image (S110).
  • In detail, the computer may generate artificial data by combining the object image and the background image so as to approximate an actual surgical image, based on the arrangement relationship between the objects in the object image (e.g., surgical instruments, blood, bleeding) and the specific regions inside the body (e.g., organs, blood vessels, tissues) in the background image.
  • Here, the arrangement relationship may be information derived from the shape, position, direction (or camera shooting direction), type, and number of the backgrounds and objects in each image.
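  • A deliberately naive sketch of the combination step S110 is shown below: it pastes an object image onto a background image using a segmentation mask, with a placement offset standing in for the arrangement relationship. The patent's learned generator would produce a far more realistic blend; this only illustrates the compositing idea:

```python
import numpy as np

def composite(background, obj, mask, top_left):
    """background: (H, W, 3); obj: (h, w, 3); mask: (h, w) with 1 on the object."""
    out = background.copy()
    r, c = top_left                  # placement from the arrangement relationship
    h, w = mask.shape
    region = out[r:r + h, c:c + w]
    m = mask[..., None].astype(float)
    out[r:r + h, c:c + w] = (m * obj + (1 - m) * region).astype(out.dtype)
    return out
```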
  • the computer may generate artificial data through learning by using a generator and a discriminator. This will be described in detail with reference to FIG. 6.
  • FIG. 6 is a view for explaining a method of generating artificial data based on learning according to an embodiment of the present invention.
  • In generating artificial data by combining the object and the background based on the arrangement relationship between the object in the object image and the background in the background image, the computer may perform learning so as to generate artificial data closer to the actual surgical image.
  • the computer may perform the learning by using the generator 200 and the discriminator 210.
  • the generator 200 and the discriminator 210 may operate by competing with each other using a generative adversarial network (GAN) based learning scheme.
  • The generator 200 trains a generation model using actual surgery images and, based on the trained generation model, generates artificial data 230 close to the actual input data (actual surgery image) 220.
  • the generator 200 may generate artificial data by combining a background image and an object image obtained from each actual surgical image using a generation model.
  • The discriminator 210 trains a discrimination model to discriminate whether the artificial data 230 generated by the generator 200 is real data or artificial data. At this time, the discriminator 210 may train the discrimination model to determine the authenticity of the artificial data 230 based on the actual surgical image 240 corresponding to the artificial data. The generation model may then be retrained based on the trained discrimination model.
  • The discriminator 210 may determine, using the discrimination model, whether the artificial data 230 generated by the generator 200 is real data or artificial data. If the discriminator 210 determines that the artificial data is not real, that is, if the generator 200 fails to deceive the discriminator 210, the generation model may be retrained in a direction that reduces this failure. Through such retraining, the generator 200 can generate improved artificial data. Conversely, if the discriminator 210 determines that the artificial data is real data, that is, if the discriminator 210 is deceived by the generator 200, the discrimination model may be retrained in a direction that reduces its error rate. By repeating this process between the generator 200 and the discriminator 210, artificial data close to reality can be produced.
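  • The adversarial loop described above can be sketched compactly; PyTorch is an assumed framework here, and the tiny networks are placeholders for the actual generation and discrimination models:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())  # generation model
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                  nn.Flatten(), nn.LazyLinear(1))                # discrimination model
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(background, obj, real_image):
    fake = G(torch.cat([background, obj], dim=1))  # combine background + object
    # Discriminator step: real surgical images labeled 1, artificial data labeled 0.
    d_loss = (bce(D(real_image), torch.ones(real_image.size(0), 1)) +
              bce(D(fake.detach()), torch.zeros(fake.size(0), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: retrain so the discriminator is deceived (fake scored as real).
    g_loss = bce(D(fake), torch.ones(fake.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```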
  • The computer may construct a training data set for training a learning model that recognizes surgical images.
  • Surgical images can only be obtained through actual surgical procedures, and often contain only limited scenes.
  • Surgery is performed on a specific surgical site, and since it is performed using surgical tools determined by the surgical site or purpose, it is difficult to secure surgical images containing various surgical tools or surgical sites. Therefore, when learning is performed using such surgical images, it is difficult to construct sufficient learning data using only the actual surgical images obtained during actual surgery.
  • By generating artificial data that differs little from actual surgical images, the present invention can provide it as learning data for a learning model that performs learning using surgical images.
  • FIGS. 7 to 10 are diagrams for explaining embodiments of a method for generating artificial data based on learning according to an embodiment of the present invention.
  • FIG. 7 illustrates a first embodiment of generating artificial data using first and second actual surgical images including an object and a background.
  • the computer may acquire a first real surgery image 300 including an object and a background, and a second real surgery image 310 including an object and a background.
  • the computer may extract the object image or the background image included in each of the first real surgery image 300 and the second real surgery image 310, and combine them to generate artificial data.
  • the computer may extract the background image in the first real surgery image 300 and the object image in the second real surgery image 310.
  • The computer may generate artificial data 330 close to an actual surgical image by combining the extracted background image and object image using the generation model (i.e., the generator) 320.
  • In this case, the computer may combine the background image and the object image based on the arrangement relationship between the background in the background image (i.e., organs, blood vessels, tissues, etc. present in a specific region of the body) and the objects in the object image (i.e., surgical instruments, blood, and bleeding).
  • the computer may provide an object segmentation 311 in the second real surgery image 310 to the generation model 320 together with the second real surgery image 310 as a correct answer. Since the generation model 320 is trained through the object segmentation 311 which is the correct answer, it is possible to generate artificial data closer to the actual surgical image.
  • the computer may provide the artificial data 330 generated by the generation model 320 to the discrimination model (ie, the discriminator) 350.
  • The discrimination model 350 may determine the authenticity of the artificial data 330 based on the actual surgical image 340 corresponding to the artificial data 330.
  • In this case, the computer may additionally provide the object segmentation 331 in the artificial data 330 and the object segmentation 341 in the actual surgical image 340 to train the discrimination model 350.
  • The computer may retrain the generation model 320 or the discrimination model 350 according to the determination result of the discrimination model 350. For example, when the artificial data 330 is determined to be artificial data rather than real data, the computer may retrain the generation model 320. Conversely, when the artificial data 330 is determined to be real data, the computer may retrain the discrimination model 350.
  • Conversely, the computer may extract the object image from the first actual surgery image 300 and the background image from the second actual surgery image 310 to generate artificial data. Since the detailed process is similar to that described with reference to FIG. 7, its description is omitted.
  • According to the first embodiment, since artificial data is generated using actual surgical images containing different objects, surgical images of rare objects (e.g., surgical instruments, blood, or bleeding) that seldom appear during surgery can be accumulated. In addition, artificial data can replace expensive manually labeled segmentation data, thereby saving costs.
  • FIG. 8 is a second embodiment of generating artificial data using a first real surgery image including only an object and a second real surgery image including both an object and a background.
  • the computer may acquire a first real surgery image 400 including only an object and a second real surgery image 410 including both an object and a background.
  • the computer may extract the object image from the first real surgery image 400 and the background image from the second real surgery image 410.
  • The computer may generate artificial data 430 close to an actual surgical image by combining the extracted background image and object image using the generation model (i.e., the generator) 420.
  • In this case, the computer may combine the background image and the object image based on the arrangement relationship between the background in the background image (i.e., organs, blood vessels, tissues, etc. present in a specific region of the body) and the objects in the object image (i.e., surgical instruments, blood, and bleeding).
  • the computer may provide the artificial data 430 generated by the generation model 420 to the discrimination model (ie, the discriminator) 450.
  • The discrimination model 450 may determine the authenticity of the artificial data 430 based on the actual surgical image 440 corresponding to the artificial data 430.
  • In this case, the computer may additionally provide the object segmentation 431 in the artificial data 430 and the object segmentation 441 in the actual surgical image 440 to train the discrimination model 450.
  • The computer may retrain the generation model 420 or the discrimination model 450 according to the determination result of the discrimination model 450. For example, when the artificial data 430 is determined to be artificial data rather than real data, the computer may retrain the generation model 420. Conversely, when the artificial data 430 is determined to be real data, the computer may retrain the discrimination model 450.
  • According to this embodiment, flexibility in generating artificial data can be increased, and surgical images of various objects can be accumulated.
  • For example, images of various types of surgical tools, and surgical images in which the positions, directions, and arrangements of surgical tools differ, can be obtained.
  • FIG. 9 illustrates a third embodiment of generating artificial data using a first real surgery image including only a background and a second real surgery image including both an object and a background.
  • the computer may acquire a first real surgery image 500 including only a background and a second real surgery image 510 including both an object and a background.
  • the computer may extract the background image from the first real surgery image 500 and the object image from the second real surgery image 510.
  • the computer may generate artificial data 530 close to the actual surgical image by applying the generation model (ie, the generator) 520 to the extracted background image and object image.
  • the computer may combine the background image and the object image based on the arrangement relationship between the background in the background image (ie, organs, blood vessels, tissues, etc. present in a specific area of the body) and the objects in the object image (ie, surgical instruments, blood, bleeding, etc.).
  • the computer may provide the object segmentation 511 of the second actual surgery image 510, together with the second actual surgery image 510 itself, to the generation model 520 as the correct answer. Since the generation model 520 is trained with the correct-answer object segmentation 511, it may generate artificial data closer to the actual surgical image.
  • the computer may provide the artificial data 530 generated by the generation model 520 to the discrimination model (ie, discriminator) 550.
  • the discrimination model 550 may determine the authenticity of the artificial data 530 based on the actual surgical image 540 corresponding to the artificial data 530.
  • the computer may additionally provide the object segmentation 531 in the artificial data 530 and the object segmentation 541 in the actual surgical image 540 to train the discrimination model 550.
  • the computer may relearn the generation model 520 or the discrimination model 550 according to the determination result of the discrimination model 550. For example, when the artificial data 530 is determined to be artificial rather than actual data, the computer may relearn the generation model 520. Conversely, when the artificial data 530 is determined to be real data, the computer may relearn the discrimination model 550.
  • FIG. 10 illustrates a fourth embodiment of generating artificial data using a first real surgery image including only a background and a second real surgery image including only an object.
  • the computer may acquire a first real surgery image 600 including only a background and a second real surgery image 610 including only an object.
  • the computer may extract the background image from the first real surgery image 600 and the object image from the second real surgery image 610.
  • the computer may generate artificial data 630 close to the actual surgical image by applying the generation model (ie, the generator) 620 to the extracted background image and object image.
  • the computer may combine the background image and the object image based on the arrangement relationship between the background in the background image (ie, organs, blood vessels, tissues, etc. present in a specific area of the body) and the objects in the object image (ie, surgical instruments, blood, bleeding, etc.).
  • the computer may provide the artificial data 630 generated by the generation model 620 to the discrimination model (ie, discriminator) 650.
  • the discrimination model 650 may determine the authenticity of the artificial data 630 based on the actual surgical image 640 corresponding to the artificial data 630.
  • the computer may additionally provide the object segmentation 631 in the artificial data 630 and the object segmentation 641 in the actual surgical image 640 to train the discrimination model 650.
  • the computer may relearn the generation model 620 or the discrimination model 650 according to the determination result of the discrimination model 650. For example, when the artificial data 630 is determined to be artificial rather than real data, the computer may relearn the generation model 620. Conversely, when the artificial data 630 is determined to be real data, the computer may relearn the discrimination model 650.
  • a surgical image including various objects and various internal body spaces may be acquired.
  • various types of surgical images may be obtained by freely arranging necessary surgical tools, organs, blood vessels, tissues, and the like.
  • the artificial data is generated by acquiring an actual surgical image (background image) including only a background.
  • the actual surgical image including only the background does not include an object such as a surgical tool and includes only a background image.
  • the computer may acquire image frames photographed at a specific position inside the body by the endoscope camera, and remove the object image such as a surgical tool from the image frames to generate a background image including only the background area.
  • the computer may use an image frame obtained by the endoscope entering the body during the operation using the endoscope. In this case, an image frame having only a background image not including an object such as a surgical tool may be obtained.
  • the computer may extract an image frame including only a background from an image frame obtained by a camera entering the body during minimally invasive surgery such as laparoscopic or robotic surgery.
  • In minimally invasive surgery, when the operation on a first area is completed and the operation proceeds by moving to a second area, the camera and the surgical tool are also moved from the first area to the second area.
  • Due to the nature of such surgery, the surgical tool does not move while the camera is moving. Accordingly, when moving from the first area to the second area, the camera moves first and the surgical tool follows, so the camera, having already moved to the second area, may acquire image frames that contain no surgical tool.
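  • A minimal sketch of selecting such background-only frames, assuming `detect_tools` is a hypothetical detector returning the surgical-tool regions found in a frame:

```python
def extract_background_frames(frames, detect_tools):
    """Keep frames captured after the camera moved but before the tools."""
    return [frame for frame in frames if not detect_tools(frame)]
```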
  • the computer may construct the training data using artificial data generated from the above embodiments.
  • artificial data may be used as training data for training a learning model for image recognition. Therefore, according to an embodiment of the present invention, by providing the generated artificial surgical images together with actual surgical images to deep learning, machine learning, and other approaches that require large amounts of learning data, sufficient learning effects can be obtained.
  • FIG. 11 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
  • the robotic surgical system includes a medical imaging apparatus 10, a server 100, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34.
  • the medical imaging apparatus 10 may be omitted in the robot surgery system according to the disclosed embodiment.
  • surgical robot 34 includes imaging device 36 and surgical instrument 38.
  • the robot surgery is performed by the user controlling the surgical robot 34 using the control unit 30. In one embodiment, the robot surgery may be automatically performed by the controller 30 without the user's control.
  • the server 100 is a computing device including at least one processor and a communication unit.
  • the controller 30 includes a computing device including at least one processor and a communication unit.
  • the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
  • the imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera device and is used to photograph an object, that is, a surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled with a surgical arm of the surgical robot 34.
  • the image photographed by the imaging device 36 is displayed on the display 32.
  • surgical robot 34 includes one or more surgical tools 38 that can perform cutting, clipping, fixing, grabbing operations, and the like, of the surgical site.
  • Surgical tool 38 is used in conjunction with the surgical arm of the surgical robot 34.
  • the controller 30 receives information necessary for surgery from the server 100 or generates information necessary for surgery and provides the information to the user. For example, the controller 30 displays the information necessary for surgery, generated or received, on the display 32.
  • the user performs the robot surgery by controlling the movement of the surgical robot 34 by manipulating the control unit 30 while looking at the display 32.
  • the server 100 generates information necessary for robotic surgery using medical image data of an object previously photographed from the medical image photographing apparatus 10, and provides the generated information to the controller 30.
  • the controller 30 displays the information received from the server 100 on the display 32 to provide the user, or controls the surgical robot 34 by using the information received from the server 100.
  • The imaging means that can be used as the medical imaging apparatus 10 is not limited; for example, various medical image acquisition means such as CT, X-Ray, PET, and MRI may be used.
  • data including various surgical information may be acquired in a surgical image photographed in a surgical process or a control process of a surgical robot.
  • artificial data may be generated by acquiring the object image or the background image as described above, based on the surgical information (ie, the surgical image) obtained in the robot surgery.
  • However, utilizing the surgical image obtained in the robot surgery process is just one example, and the present invention is not limited thereto.
  • A surgical image obtained during minimally invasive surgery using a laparoscope, an endoscope, or the like, or when a medical staff directly operates on a patient, may also be used to generate artificial data.
  • FIG. 12 is a diagram schematically showing the configuration of an apparatus 700 for performing a method of generating artificial data according to an embodiment of the present invention.
  • the processor 710 may include one or more cores (not shown) and a graphics processor (not shown), and may include a connection passage (eg, a bus) for transmitting and receiving signals to and from other components.
  • the processor 710 executes one or more instructions stored in the memory 720 to perform the artificial data generation method described with reference to FIGS. 5 to 10.
  • the processor 710 may acquire a background image and an object image by executing one or more instructions stored in the memory 720, and generate artificial data by combining the background image and the object image.
  • the processor 710 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) for temporarily and/or permanently storing signals (or data) processed in the processor 710.
  • the processor 710 may be implemented in the form of a system on chip (SoC) including at least one of a graphic processor, a RAM, and a ROM.
  • the memory 720 may store programs (one or more instructions) for processing and controlling the processor 710. Programs stored in the memory 720 may be divided into a plurality of modules according to their functions.
  • the artificial data generation method according to an embodiment of the present invention described above may be implemented as a program (or an application) and stored in a medium to be executed in combination with a computer which is hardware.
  • a computer includes all the various devices capable of performing arithmetic processing to provide a result to a user.
  • the computer may be not only a desktop PC or a notebook, but also a smartphone, a tablet PC, a cellular phone, a PCS (Personal Communication Service) phone, a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm Personal Computer (PC), a Personal Digital Assistant (PDA), and the like.
  • When a head mounted display (HMD) device includes a computing function, the HMD device may be a computer.
  • the computer may correspond to a server that receives a request from a client and performs information processing.
  • FIG. 13 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
  • the robotic surgical system includes a medical imaging apparatus 10, a server 100, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34.
  • the medical imaging apparatus 10 may be omitted in the robot surgery system according to the disclosed embodiment.
  • surgical robot 34 includes imaging device 36 and surgical instrument 38.
  • the robot surgery is performed by the user controlling the surgical robot 34 using the control unit 30. In one embodiment, the robot surgery may be automatically performed by the controller 30 without the user's control.
  • the server 100 is a computing device including at least one processor and a communication unit.
  • the controller 30 includes a computing device including at least one processor and a communication unit.
  • the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
  • the imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera device and is used to photograph an object, that is, a surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled with a surgical arm of the surgical robot 34.
  • the image photographed by the imaging device 36 is displayed on the display 32.
  • surgical robot 34 includes one or more surgical tools 38 that can perform cutting, clipping, fixing, grabbing operations, and the like, of the surgical site.
  • Surgical tool 38 is used in conjunction with the surgical arm of the surgical robot 34.
  • the controller 30 receives information necessary for surgery from the server 100 or generates information necessary for surgery and provides the information to the user. For example, the controller 30 displays the information necessary for surgery, generated or received, on the display 32.
  • the user performs the robot surgery by controlling the movement of the surgical robot 34 by manipulating the control unit 30 while looking at the display 32.
  • the server 100 generates information necessary for robotic surgery using medical image data of an object previously photographed from the medical image photographing apparatus 10, and provides the generated information to the controller 30.
  • the controller 30 displays the information received from the server 100 on the display 32 to provide the user, or controls the surgical robot 34 by using the information received from the server 100.
  • The imaging means that can be used as the medical imaging apparatus 10 is not limited; for example, various medical image acquisition means such as CT, X-Ray, PET, and MRI may be used.
  • Hereinafter, a "computer" performs the method of generating learning data based on a surgical image according to an embodiment disclosed herein.
  • Computer may mean the server 100 or the controller 30 of FIG. 13, but is not limited thereto.
  • the term "computer” may be used to mean a device capable of performing computing processing.
  • the computer may be a computing device provided separately from the device shown in FIG. 13.
  • the embodiments disclosed below are not applicable only in connection with the robotic surgery system illustrated in FIG. 13, but may be applied to all kinds of embodiments in which a surgical image can be acquired and utilized in a surgical procedure.
  • For example, they can also be applied in connection with minimally invasive surgery such as laparoscopic surgery or endoscopic surgery.
  • FIG. 14 is a flowchart schematically illustrating a method of generating learning data based on a surgical image according to an embodiment of the present invention.
  • Referring to FIG. 14, the method may include acquiring a surgical image including a plurality of image frames (S100), recognizing surgical recognition information from each of the plurality of image frames (S200), and generating, for each of the plurality of image frames, relational representation information representing a relationship between surgical elements included in the surgical recognition information based on the surgical recognition information (S300).
  • the computer may acquire a surgical image including a plurality of image frames (S100).
  • The medical staff may perform actual surgery on the patient directly, or may perform minimally invasive surgery using a laparoscope or an endoscope, including the surgical robot described with reference to FIG. 13.
  • the computer may acquire a surgical image photographing a scene including a surgical operation performed in the surgical procedure, a surgical tool related thereto, a surgical site, and the like.
  • the computer may acquire a surgical image photographing a scene including a surgical site and a surgical tool that is currently undergoing surgery from a camera entering the patient's body.
  • the surgical image may include one or more image frames.
  • Each image frame may represent a scene in which a surgical operation is performed, including a surgical site, a surgical tool, and the like of a surgical subject.
  • the surgical image may be composed of image frames in which the surgical operation is recorded scene by scene over time during the surgical procedure.
  • the surgical image may be composed of image frames that record each surgical scene according to the spatial movement such as the surgical site or the position of the camera during the surgery.
  • the surgical image may be composed of all image frames including the entire surgical procedure, or may be in the form of at least one video clip divided according to a specific classification criterion.
  • the computer may acquire a surgical image in the form of a video clip, or may obtain a surgical image including all or part of a surgical procedure and divide the surgical image into at least one video clip.
  • the computer may acquire a surgical image including all or part of a surgical procedure and divide the surgical image into at least one video clip according to a specific classification criterion.
  • the computer may divide the surgical image according to the time course of the surgery, or may divide the surgical image according to the position or state change of the surgical site based on the surgical site during the surgery.
  • the computer may segment the surgical image based on the position of the camera or the moving range of the camera during the operation, or may segment the surgical image based on a change (eg, replacement) of the surgical tool during the operation.
  • the computer may segment the surgical image based on image frames formed of a series of surgical operations in relation to the surgical operation.
  • each operation may correspond to predetermined surgical stages classified according to a specific classification criterion. In this case, the computer may segment the surgical image based on the surgical stage, as in the sketch below.
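  • One possible sketch of such segmentation, assuming `stage_of` is a hypothetical classifier mapping a frame to a surgical-stage label:

```python
def split_into_clips(frames, stage_of):
    """Group consecutive frames that share the same surgical stage."""
    clips, current, prev = [], [], None
    for frame in frames:
        stage = stage_of(frame)
        if prev is not None and stage != prev:
            clips.append((prev, current))
            current = []
        current.append(frame)
        prev = stage
    if current:
        clips.append((prev, current))
    return clips
```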
  • the computer may recognize surgical recognition information from each of the plurality of image frames in the acquired surgical image (S200).
  • each image frame in the surgical image includes surgery related information.
  • For example, an image frame includes various surgery-related information such as surgical instruments, surgical operations, surgical sites (ie, body parts such as organs, blood vessels, and tissues), and bleeding. Therefore, in the present invention, each element that can be recognized as surgery-related information from an image frame is defined as a surgical element.
  • the surgical element may be a concept corresponding to an object, but in the present invention it is a broader concept encompassing not only an object but also information such as an operation, a function, and a state related to the object.
  • the computer may extract surgical recognition information determined as a surgical element from each of the plurality of image frames.
  • the surgical recognition information refers to the surgical element information recognized from an image frame, and may include at least one surgical element among, for example, surgical instruments, surgical operations, body parts, bleeding, surgical stages, surgery time (eg, remaining surgery time, elapsed surgery time, etc.), and camera information (eg, camera position, angle, direction, movement, etc.).
  • the computer may derive the position information of the surgical element in the surgical recognition information extracted from each of the plurality of image frames.
  • the position information of the surgical element may be region information where the surgical element is located on the image frame.
  • the position information of the surgical element may be calculated based on the coordinate information on the 2D space or the coordinate information on the 3D space in the image frame.
  • the computer may generate relational representation information indicating a relationship between surgical elements included in each surgical recognition information based on the surgical recognition information recognized for each of the plurality of image frames (S300).
  • the computer may determine whether a correlation exists between a plurality of surgical elements in a first image frame among the plurality of image frames (eg, first to nth image frames), based on the plurality of surgical elements extracted from the first image frame and the position information on the plurality of surgical elements. Next, the computer may generate relationship expression information for the first image frame based on the correlation between the plurality of surgical elements. The computer generates relationship expression information in the same manner for the second to nth image frames in the surgical image.
  • the computer may determine whether or not there is a correlation between the plurality of surgical elements based on the relationship information on the predefined surgical elements.
  • the relationship information on the predefined surgical element may be information set based on at least one of the type of surgical element, location information of the surgical element, state information of the surgical element, and operation information of the surgical element.
  • surgical tools, surgical operations, body parts, bleeding status, surgical steps, surgery time, camera information, etc. can be defined as a surgical element.
  • Each defined surgical element may further include additional information such as location information, state information, and operation information according to the type of surgical element.
  • For example, the type of surgical tool may be determined according to the surgical site, and the operation of each surgical tool may be determined according to operating state information such as open/close, the presence of energy, and contact. Bleeding may have color information based on the bleeding point.
  • the computer may preset, as relational information, each surgical element and additional information determined according to the type of each surgical element. Accordingly, the computer may recognize each surgical element from the first image frame and grasp relationship information about each recognized surgical element.
  • For example, when the computer extracts a surgical tool and a surgical site from the first image frame, the computer may identify relationship information between the specific surgical tool and the specific surgical site based on the predefined relationship information, and may determine that a correlation exists between the two surgical elements.
  • Similarly, when the computer extracts a surgical tool from the first image frame, the computer may determine that a correlation exists between the surgical tool and its surgical operation by grasping the relationship information on the specific surgical tool and its operation based on the predefined relationship information.
  • The computer may also determine whether a correlation exists between the plurality of surgical elements based on their position information. For example, if the computer extracts a body organ and a surgical instrument from the first image frame and recognizes that the extracted body organ and surgical instrument are present at the same location, it may determine that a positional correlation exists between the two surgical elements.
  • the computer may determine that the positional correlation exists, for example, when the surgical tool is exerting a specific motion on the corresponding organ or when the surgical tool is in contact with the organ.
  • the computer may generate relationship expression information for the first image frame based on whether there is a correlation between a plurality of surgical elements recognized from the first image frame.
  • the computer may generate relational expression information by mapping each surgical element and information about each surgical element.
  • the relational expression information may be generated in the form of a matrix.
  • Each surgical element may be arranged corresponding to a row and a column, and information derived based on a correlation between each surgical element may be represented as an element of a matrix.
  • the computer may generate relational expression information for the second to nth image frames in the surgical image in the same manner as the first image frame as described above.
  • FIG. 15 is a diagram illustrating a process of generating relationship expression information by recognizing surgical recognition information from an image frame according to an embodiment of the present invention.
  • FIG. 15 illustrates a process of recognizing surgical recognition information from the first image frame 200 in the surgical image and generating it as the relational expression information 300.
  • the computer may recognize each surgical element by applying an image recognition technology using deep learning from the first image frame 200.
  • the computer may apply various individual recognition techniques according to the characteristics of each surgical element. For example, the computer may extract feature information (eg, a feature map) from the first image frame 200 and recognize body organs (eg, liver, stomach, blood vessels, etc.) and bleeding using texture feature information that expresses color or texture.
  • the computer may recognize a surgical instrument, a surgical operation, or the like using the feature information about the shape.
  • the computer may recognize the operation by using the location feature information between the surgical elements.
  • the computer may recognize camera information (eg, motion information such as camera angle and movement), a surgery step, a surgery operation, a surgery time, and the like by using feature information between the plurality of image frames.
  • the computer may derive position information on each surgical element recognized from the first image frame 200.
  • the computer may divide the first image frame 200 into at least one region, and calculate position information of each surgical element based on a region where each surgical element exists among the divided regions.
  • the computer may divide the space in the first image frame 200 into at least one region based on two-dimensional or three-dimensional coordinate information, and may use, for each divided region, a specific coordinate value (eg, the center point of each region) or an index value as the location information of that region.
  • For example, the computer may divide the first image frame 200 into nine regions and assign index values of 0 to 8 to the divided regions to use as position information.
  • the computer may determine that the surgical element 'gauze' recognized from the first image frame 200 is located in the index 0 and 1 regions among the divided regions, and may then calculate the position information of the surgical element 'gauze' based on indices 0 and 1.
  • Position information of each surgical element may be calculated in the same manner with respect to other surgical elements (for example, two surgical instruments and surgical regions) recognized from the first image frame 200.
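  • A minimal sketch of this position encoding (a 3x3 grid with index values 0 to 8; the bounding box and frame size below are illustrative):

```python
def region_indices(bbox, frame_w, frame_h):
    """Return the 3x3 grid indices (0..8) covered by (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = bbox
    cols = range(int(3 * x0 / frame_w), int(3 * (x1 - 1) / frame_w) + 1)
    rows = range(int(3 * y0 / frame_h), int(3 * (y1 - 1) / frame_h) + 1)
    return sorted({3 * r + c for r in rows for c in cols})

# eg, a 'gauze' box spanning the upper-left of a 900x900 frame
print(region_indices((0, 0, 600, 300), 900, 900))  # -> [0, 1]
```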
  • the computer may generate the relationship expression information 300 based on each surgical element recognized from the first image frame 200 and position information of each surgical element.
  • the computer may configure the relational expression information 300 in a matrix form. Each surgical element may be arranged in each row and each column, and relationship information between corresponding surgical elements may be represented in each component value of the matrix.
  • the relationship expression information 300 of the first image frame 200 may be represented by a matrix.
  • each row and each column may be arranged from surgical element a to surgical element s.
  • For example, surgical element a may be defined as the surgical stage, surgical elements b to h as surgical instruments, surgical elements i to l as body parts, surgical elements m to q as surgical operations, surgical element r as bleeding, and surgical element s as camera information.
  • the i th column of the i th row of the matrix may represent position information of a corresponding surgical element (eg, a surgical element, b surgical element, etc.).
  • the j th column of the i th row of the matrix may represent relationship information between the surgical elements arranged in the i th row and the surgical elements arranged in the j th column.
  • For example, the component at the row of surgical element b and the column of surgical element m may represent the relationship between the specific surgical tool defined as surgical element b (eg, 'Harmonics') and the specific surgical operation defined as surgical element m, that is, information about the surgical operation (eg, open/grasp) performed by that surgical tool.
  • In addition, other specific values may be assigned to the components of the matrix to represent the relationship information. For example, for a surgical element such as camera information or a surgical stage, when an event related to the surgical element occurs or is implied, the matrix component for that surgical element may be given information related to the event instead of the position information or relationship information of the surgical element.
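  • A minimal sketch of building such a matrix, assuming fixed surgical elements a to s; `recognized` maps an element to its region indices and `related` maps element pairs to relation information, as described above:

```python
import numpy as np

ELEMENTS = [chr(c) for c in range(ord('a'), ord('s') + 1)]  # a..s

def relation_matrix(recognized, related):
    n = len(ELEMENTS)
    m = np.zeros((n, n), dtype=object)
    for element, position in recognized.items():   # diagonal: position info
        i = ELEMENTS.index(element)
        m[i, i] = position
    for (a, b), relation in related.items():       # off-diagonal: relations
        m[ELEMENTS.index(a), ELEMENTS.index(b)] = relation
    return m

# eg, surgical tool b ('Harmonics') performing operation m ('open/grasp')
M = relation_matrix({'b': [4], 'm': [4]}, {('b', 'm'): 'open/grasp'})
```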
  • the computer may repeatedly perform the process as described with reference to FIG. 15 for each image frame in the surgical image. That is, the computer may recognize a surgical element for each image frame in the surgical image and generate relational expression information based on the recognized surgical element.
  • FIG. 16 is a diagram illustrating relationship expression information generated for a plurality of image frames in a surgical image according to an embodiment of the present invention.
  • the computer may generate relationship expression information for each of a plurality of image frames in a surgical image.
  • Since each piece of relationship expression information expresses the relationship information between a plurality of surgical elements at a specific time point, the set of such information may have tensor information.
  • the computer may generate the relation expression information generated from the plurality of image frames in the form of a tensor.
  • the computer may express a relationship between the plurality of image frames by grouping the plurality of image frames based on a temporal size of the tensor.
  • When it is necessary to generate relational expression information based on a plurality of image frames, such as camera information or a surgical stage, the computer may group a specific number of image frames and generate the relational expression information of each group in the form of a tensor; in this case, the information related to the camera or the surgical stage may be associated with those image frames and represented by the tensor, as in the sketch below.
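  • A minimal sketch of grouping per-frame relation matrices into tensors, assuming numeric relation codes so the matrices stack:

```python
import numpy as np

def to_tensor(relation_matrices, group_size):
    """Stack (T, n, n) matrices and split into fixed-size frame groups."""
    stacked = np.stack(relation_matrices)
    usable = (len(stacked) // group_size) * group_size
    return stacked[:usable].reshape(-1, group_size, *stacked.shape[1:])
```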
  • the relation expression information may be generated for each image frame in the surgical image, and the learning data may be constructed based on the generated relation expression information.
  • learning may be performed to derive a specific learning result based on the learning data constructed as described above.
  • FIG. 17 is a diagram illustrating an example of a process of performing learning based on relationship expression information generated for a plurality of image frames in a surgical image according to an embodiment of the present invention.
  • the computer may generate relational expression information for each image frame in the surgical image, and construct learning data based on the generated relational expression information (S400).
  • In constructing the learning data by generating the relationship expression information for each image frame in the surgical image, the labeling may be performed by medical staff, or may be performed automatically by the computer.
  • semi-supervised learning may be performed.
  • For example, the computer may extract labeling information from each image frame by applying various individual recognition techniques (eg, body organ recognition, surgical tool recognition, bleeding recognition, surgical motion recognition, camera movement recognition, etc.) according to the characteristics of each surgical element as described above, and may generate relationship expression information based on the extracted information.
  • the computer may derive a specific learning result by performing learning based on the learning data (S500).
  • Here, the specific learning result means a result obtainable as an output value of a specific learning model constructed by performing learning using the learning data generated according to an embodiment of the present invention as an input value.
  • For example, the learning result may be the derivation of relationship expression information according to an embodiment of the present invention. That is, the computer may perform learning on a newly input image frame based on the previously constructed learning data (relational expression information), and may acquire the relationship expression information on the newly input image frame as a result of the learning.
  • As another example, the computer may perform learning using the learning data generated according to an embodiment of the present invention as an input value, and thereby acquire a variety of learning results such as recognizing a surgical procedure through a surgical image or recognizing the meaning of a surgical operation.
  • the computer may use image frames as an input value for deriving a specific learning result together with the learning data generated according to the embodiment of the present invention.
  • the computer may learn a plurality of image frames in the surgical image by using a convolutional neural network (CNN), and may acquire feature information of each image frame (S410).
  • the computer may generate the relationship expression information for the plurality of image frames in the surgical image to obtain the training data (S400).
  • the computer may finally perform the learning based on the feature information and the relationship expression information of each image frame, and may obtain a specific output value as a result of the learning (S500).
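  • A minimal sketch of such a combined model, assuming PyTorch; the layer sizes, element count, and class count are illustrative, mirroring steps S410, S400, and S500:

```python
import torch
import torch.nn as nn

class CombinedModel(nn.Module):
    def __init__(self, n_elements=19, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(                  # per-frame features (S410)
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16 + n_elements * n_elements, n_classes)

    def forward(self, frame, relation):
        feat = self.cnn(frame)                     # image feature vector
        rel = relation.flatten(1)                  # relation info (S400)
        return self.head(torch.cat([feat, rel], dim=1))  # result (S500)

model = CombinedModel()
out = model(torch.randn(1, 3, 224, 224), torch.randn(1, 19, 19))
```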
  • According to the present invention, it is possible to obtain more meaningful information from an image frame by defining various surgical elements that can be recognized from the image frame, and by grasping not only the information on each surgical element itself but also the relationship information between the surgical elements.
  • In addition, rather than generating learning data for building a single learning model, the present invention provides base data for building a variety of learning models. Furthermore, by providing such base learning data, the learning data may be augmented to obtain improved learning results.
  • FIG. 18 is a diagram schematically illustrating a configuration of an apparatus 400 for performing a method for generating learning data based on a surgical image according to an embodiment of the present invention.
  • the processor 410 may include one or more cores (not shown) and a graphics processor (not shown), and may include a connection passage (eg, a bus) for transmitting and receiving signals to and from other components.
  • the processor 410 executes one or more instructions stored in the memory 420 to perform a method of generating learning data based on the surgical image described with reference to FIGS. 14 to 17.
  • the processor 410 may acquire a surgical image including a plurality of image frames by executing one or more instructions stored in the memory 420, recognizing surgical recognition information from each of the plurality of image frames, and For each of the plurality of image frames, generating relational representation information representing a relationship between surgical elements included in the surgical recognition information based on the surgical recognition information.
  • the processor 410 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) for temporarily and/or permanently storing signals (or data) processed in the processor 410.
  • the processor 410 may be implemented in the form of a system on chip (SoC) including at least one of a graphic processor, a RAM, and a ROM.
  • the memory 420 may store programs (one or more instructions) for processing and controlling the processor 410. Programs stored in the memory 420 may be divided into a plurality of modules according to their functions.
  • the method for generating learning data based on the surgical image according to the exemplary embodiment of the present invention described above may be implemented as a program (or an application) and stored in a medium to be executed in combination with a computer which is hardware.
  • a computer includes all the various devices capable of performing arithmetic processing to provide a result to a user.
  • the computer may be not only a desktop PC or a notebook, but also a smartphone, a tablet PC, a cellular phone, a PCS (Personal Communication Service) phone, a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm Personal Computer (PC), a Personal Digital Assistant (PDA), and the like.
  • When a head mounted display (HMD) device includes a computing function, the HMD device may be a computer.
  • the computer may correspond to a server that receives a request from a client and performs information processing.
  • FIG. 19 is a flowchart schematically illustrating a method of providing blood vessel information using a blood vessel model according to an embodiment of the present invention.
  • the subject of each step is not limited to a specific device but may be used to encompass a device capable of performing computing processing. That is, in the present embodiment, the computer may mean an apparatus capable of performing the method of providing blood vessel information using the blood vessel model according to the embodiment of the present invention.
  • Referring to FIG. 19, a method of providing blood vessel information using a blood vessel model may include obtaining a blood vessel polygon constituting a blood vessel in a blood vessel model (S100), constructing training data based on the blood vessel polygon information and the connection information of the blood vessel polygon (S200), and classifying blood vessels by performing learning based on the training data (S300).
  • the computer may acquire a blood vessel polygon constituting a blood vessel in the blood vessel model (S100).
  • the computer may generate a 3D blood vessel model based on medical image data of the inside of the body of the object (eg, the patient).
  • the medical image data is a medical image photographed by a medical image photographing apparatus and includes all medical images that can be implemented as a three-dimensional model of the body of the object.
  • the medical image data may include a computed tomography (CT) image, a magnetic resonance imaging (MRI), a positron emission tomography (PET) image, and the like.
  • the computer may extract the blood vessel of the patient from the medical image data and 3D model the extracted blood vessel.
  • the computer may sequentially extract arteries and veins from the medical image data, and 3D model the vessel models including the arteries and the vessel models including the veins, respectively, and then match them.
  • the blood vessel model may be a polygon model of 3D modeling by constructing at least one polygon of blood vessels extracted from the medical image data.
  • the computer can obtain at least one polygon constituting the vessel from the 3D vessel model.
  • A polygon is the smallest unit used to express the three-dimensional shape of an object in 3D computer graphics, and a set of polygons may represent a 3D object (ie, a blood vessel).
  • the computer may obtain a 3D blood vessel model pre-built for the subject and obtain at least one polygon constituting the blood vessel therefrom.
  • the computer may construct and store the 3D blood vessel model in advance, or may acquire and use a 3D blood vessel model generated by another device.
  • the computer may obtain at least one vascular polygon constituting the blood vessel from the 3D modeled vascular model using the polygon.
  • the computer may configure learning data for each of the acquired at least one vascular polygon based on the vascular polygon information and the connection information of the vascular polygon (S200).
  • the computer may construct training data based on its own information about each vascular polygon in the 3D vascular model and connection information with adjacent polygons connected thereto.
  • the vascular polygon information may include a position vector of the vascular polygon and blood vessel information of the vascular polygon.
  • the connection information of the vascular polygon may include adjacent polygons adjacent to the vascular polygon, and position vector difference information between the adjacent polygon and the vascular polygon.
  • the computer can obtain the vascular polygon itself information from the vascular polygon.
  • the computer may derive the position vector and vessel information of the vessel polygon.
  • the computer may calculate a position vector of the vascular polygon based on a predetermined reference vessel point in the 3D vessel model. For example, the computer may set a representative branching point or an easily distinguishable vessel of a vessel, such as a celiac trunk or a celiac axis, in the 3D vessel model as a reference vessel point. The computer may set the set reference vessel point as a reference point (eg, an origin point) in the coordinate space, and calculate a position vector of the vessel polygon relative to the reference vessel point. At this time, the computer may calculate a position vector with respect to the midpoint in the vascular polygon. The midpoint of the vascular polygon can be calculated using the inner core, outer core, center of gravity, and the like.
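  • A minimal sketch of this position vector computation, using the centroid of a triangular polygon as its midpoint; the reference vessel point (eg, the celiac trunk) is taken as the origin:

```python
import numpy as np

def polygon_position(vertices, reference_point):
    """vertices: (3, 3) array of a triangle's 3D corner points."""
    midpoint = np.asarray(vertices, dtype=float).mean(axis=0)  # centroid
    return midpoint - np.asarray(reference_point, dtype=float)

p = polygon_position([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0, 0, 0])
```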
  • the computer may derive the vessel information of the vessel polygon based on the predetermined artery and vein classification information. For example, the computer may use the vessel information classified in detail for the predefined arteries and veins as shown in Table 1 below to determine what the vessel types of the vessel polygon are. For example, the computer can obtain vascular information such as artery 'Aorta' or vein 'PV' for vascular polygons.
  • In another embodiment, the computer may determine the vessel type of the blood vessel polygon by using color code information in which color information is assigned to the predefined arteries and veins, as shown in Table 2 below.
  • the blood vessel classification result derived as a learning result about the blood vessel polygon may be confirmed by color information.
  • the computer can obtain the connection information from the vascular polygon to the adjacent polygon connected thereto.
  • the computer may acquire adjacent polygons adjacent to the vascular polygon and derive the position vector difference information with the adjacent polygons as connection information.
  • each vascular polygon may be a triangular polygon, and each triangular polygon may be considered a node.
  • For example, since node 1 is adjacent to node 2, in the case of the vascular polygon of node 1, the vascular polygon of node 2 may be acquired as an adjacent polygon. In the case of the vascular polygon of node 2, the vascular polygons of nodes 1 and 3 may be acquired as adjacent polygons.
  • the computer may derive the position vector difference from the vascular polygon as connection information by using the acquired position vector of the adjacent polygon.
  • the position vector of the adjacent polygon may be calculated with respect to the midpoint of the adjacent polygon based on the reference blood vessel point as described above.
  • For example, when deriving position vector differences for a polygon centered on Y, the computer may acquire the polygons adjacent thereto and calculate the position vector of each adjacent polygon (the position vectors of X, Z, and Q).
  • the computer may then calculate the difference vector between the position vector of each adjacent polygon (the X, Z, and Q position vectors) and the position vector of the polygon centered on Y, as in Equation 1 below (ie, for each adjacent polygon, the difference vector is the position vector of the adjacent polygon minus the position vector of the polygon centered on Y; for example, dX = PX − PY).
  • the computer may derive connection information with major organs and use it as learning data.
  • the computer may acquire at least one reference organ, and derive the distance information between the obtained reference organ and the vascular polygon to construct the learning data.
  • the computer can acquire fixed-position organs, such as liver, heart, spleen, lung, kidney, etc., as reference organs.
  • the reference organ may be composed of 3D modeled polygons.
  • the computer may then place reference organs based on a predetermined reference vessel point in the 3D vessel model as a reference (eg, origin).
  • the computer may calculate a position vector for the reference organ based on the reference vessel point.
  • the computer may derive the distance information between the reference organ and the vascular polygon based on the calculated position vector of the reference organ and the position vector of the vascular polygon.
  • For example, if the position vector of the reference organ is (x1, y1, z1) and the position vector of the vascular polygon is (x2, y2, z2), the computer may calculate the distance information d as in Equation 2 below:
  d = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)  [Equation 2]
  • the position vector (x2, y2, z2) of the vascular polygon may be the midpoint of the polygon.
  • the position vector (x1, y1, z1) of a reference organ may be the center of the OBB when a cuboid Oriented Bounding Box (OBB) is placed around the reference organ.
  • FIG. 22 is a diagram illustrating a process of calculating the midpoint of the OBB. Referring to FIG. 22, the computer may cover the organ with a hexahedral OBB.
  • the computer may then use two opposite corner positions of the hexahedron, V1 (x1, y1, z1) and V2 (x2, y2, z2), to calculate the midpoint M (x3, y3, z3) of the OBB as in Equation 3 below:
  M = ((x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2)  [Equation 3]
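  • A minimal sketch of Equations 2 and 3; the OBB corners and polygon midpoint below are illustrative inputs:

```python
import numpy as np

def obb_midpoint(v1, v2):
    """Equation 3: midpoint of opposite OBB corners V1 and V2."""
    return (np.asarray(v1, dtype=float) + np.asarray(v2, dtype=float)) / 2.0

def organ_polygon_distance(organ_center, polygon_midpoint):
    """Equation 2: Euclidean distance d between the two points."""
    diff = np.asarray(organ_center, dtype=float) \
         - np.asarray(polygon_midpoint, dtype=float)
    return float(np.sqrt((diff ** 2).sum()))

center = obb_midpoint([0, 0, 0], [2, 2, 2])      # -> [1. 1. 1.]
d = organ_polygon_distance(center, [4, 5, 1])    # -> 5.0
```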
  • As described above, the computer may derive at least one of the position vector of the vascular polygon, the vessel information (ie, vessel type) of the vascular polygon, the position vector of the adjacent polygon, the difference vector information between the adjacent polygon and the vascular polygon, and the distance information between the reference organ and the vascular polygon, and may construct learning data based thereon.
  • the computer may classify blood vessels based on the learning data (S300).
  • the computer may generate a vessel classification learning model for classifying vessels by performing learning based on the training data.
  • For example, the computer may learn the training data through machine learning. In this case, supervised learning such as a decision tree, K-nearest neighbors, a neural network, or a support vector machine (SVM) may be used. That is, the computer may generate a blood vessel classification learning model by performing supervised learning for classifying blood vessels based on the training data generated from the blood vessel polygons, as in the sketch below.
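  • A minimal supervised-learning sketch, assuming scikit-learn; the feature file names are hypothetical, and each row of X would concatenate the polygon features described above, with the labeled vessel type in y:

```python
import numpy as np
from sklearn.svm import SVC

X = np.load("polygon_features.npy")   # hypothetical per-polygon features
y = np.load("vessel_labels.npy")      # hypothetical vessel-type labels

clf = SVC(kernel="rbf")               # one of the supervised options above
clf.fit(X, y)
predicted = clf.predict(X[:1])        # classify a new vascular polygon
```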
  • the computer may perform learning to classify blood vessel types based on the learning data as described above.
  • the computer may derive the structure of the blood vessel as shown in FIG. 24 to be described later as a learning result, and may specify the type of blood vessel to which the blood vessel polygon belongs on the structure of the blood vessel.
  • FIG. 23 is a diagram illustrating a learning process for classifying blood vessel types according to one embodiment of the present invention.
  • FIG. 23 illustrates types of the hepatic artery classified according to shape (eg, ten types).
  • the computer may use, as learning data, information classified by type of hepatic artery as shown in FIG. 23. In this case, the computer can perform the learning based on the shape of these vessels.
  • the computer may derive a hierarchical structure of blood vessels as shown in FIG. 25 to be described later as a learning result, and specify a hierarchy of blood vessels to which a vascular polygon belongs on the blood vessel hierarchy.
  • the computer may generate a blood vessel classification learning model by separately performing a learning process for classifying blood vessel types and a learning process for classifying blood vessel hierarchies.
  • Alternatively, the computer may generate a blood vessel classification learning model by simultaneously performing the learning process for classifying blood vessel types and the learning process for classifying blood vessel hierarchies.
  • the computer may perform the learning using the additional learning data.
  • the computer may acquire a 2D projection image of projecting a blood vessel corresponding to a blood vessel polygon in the 3D blood vessel model in a 2D space, and use the additional learning data.
  • the computer may extract 3D vessels corresponding to the vascular polygons in the 3D vessel model, and project the extracted 3D vessels in a two-dimensional space to a specific view to obtain a two-dimensional projection image.
  • the computer may obtain at least one two-dimensional projection image by projecting the extracted 3D blood vessel onto a two-dimensional space for at least one viewpoint corresponding to a predetermined range based on a specific viewpoint.
  • the computer may perform learning (eg, CNN) using the acquired two-dimensional projection image set as learning data. For example, the computer can learn what the vessel types are by using a two-dimensional projection image set.
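  • A minimal sketch of generating such projections: an orthographic projection of a vessel's 3D vertices for several viewpoints around a base angle; the angle offsets below are illustrative:

```python
import numpy as np

def project_views(vertices, base_angle, offsets=(-10, 0, 10)):
    """Return one 2D point set per viewing angle (in degrees)."""
    views = []
    for off in offsets:
        t = np.radians(base_angle + off)
        rot = np.array([[np.cos(t), 0.0, np.sin(t)],   # rotate about y-axis
                        [0.0, 1.0, 0.0],
                        [-np.sin(t), 0.0, np.cos(t)]])
        rotated = np.asarray(vertices, dtype=float) @ rot.T
        views.append(rotated[:, :2])                   # drop depth -> 2D
    return views
```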
  • the computer may obtain a new vascular polygon model and classify the blood vessels in the new vascular polygon model through the vascular classification learning model.
  • the computer may extract at least one vascular polygon from the new vascular polygon model.
  • the computer may obtain, for the extracted vascular polygon, at least one of the position vector of the vascular polygon, the adjacent polygons adjacent to the vascular polygon, the position vectors of the adjacent polygons, and the distance information from the reference organ.
  • the computer may input the obtained information about the vascular polygon into the vessel classification learning model, and may derive at least one of the blood vessel type and the vascular hierarchy information corresponding to the vascular polygon as an output.
  • FIGS. 24 and 25 are diagrams showing examples of blood vessel classification derived as a learning result according to an embodiment of the present invention.
  • FIG. 24(a) shows the structure of the major venous vessels, and FIG. 24(b) shows the structure of the major arterial vessels.
  • FIG. 25 shows the hierarchical structure of the aorta and its detailed arterial vessels.
  • the artery or vein type to which the vascular polygon belongs may be specified as a learning result through the vascular classification learning model.
  • For example, the type of blood vessel (ie, the type of artery or vein) to which the corresponding blood vessel polygon belongs may be specified in the blood vessel structure shown in FIG. 24(a) or FIG. 24(b).
  • In addition, the hierarchy of blood vessels to which the corresponding blood vessel polygon belongs may be specified in the hierarchical structure of blood vessels shown in FIG. 25.
  • According to the present invention, it is possible to accurately grasp the geometric and hierarchical information of blood vessels by building a learning model for classifying blood vessels.
  • When a simulation is performed using this model, more accurate surgical simulation is possible because vessel information about the arteries and veins located in and around the surgical target can be accurately provided.
  • In addition, the present invention can effectively classify and stratify blood vessels by constructing training data based on the connection information between a polygon and its adjacent polygons.
  • Furthermore, by using the position information of the polygons derived based on the reference vessel point and the reference organ, a more accurate vessel classification learning model can be built, through which a higher recognition rate can be obtained.
  • FIG. 26 is a diagram schematically illustrating a configuration of an apparatus 200 for performing a method of providing blood vessel information using a blood vessel model according to an embodiment of the present invention.
  • the processor 210 may include one or more cores (not shown) and a graphics processor (not shown), and may include a connection passage (eg, a bus) for transmitting and receiving signals to and from other components.
  • the processor 210 executes one or more instructions stored in the memory 220 to perform a method of providing blood vessel information using the blood vessel model described with reference to FIGS. 19 to 25.
  • the processor 210 acquires a blood vessel polygon constituting a blood vessel in a blood vessel model by executing one or more instructions stored in the memory 220, based on the blood vessel polygon information and connection information of the blood vessel polygon. And organizing the learning data, and classifying blood vessels by performing learning based on the learning data.
  • the processor 210 may further include a random access memory (RAM, not shown) and a read-only memory (ROM, not shown) that temporarily and/or permanently store the signals (or data) processed by the processor 210.
  • the processor 210 may be implemented in the form of a system on chip (SoC) including at least one of a graphic processor, a RAM, and a ROM.
  • the memory 220 may store programs (one or more instructions) for the processing and control performed by the processor 210. The programs stored in the memory 220 may be divided into a plurality of modules according to their functions.
  • the blood vessel information providing method using the blood vessel model according to the above-described embodiment of the present invention may be implemented as a program (or application) that is combined with a computer, which is hardware, and stored in a medium so as to be executed.
  • in order for the computer to read the program and execute the methods implemented as the program, the program may include code that is read by the computer's processor (CPU) through the device interface of the computer.
  • the code may be coded in a computer language such as C, C++, JAVA, or machine language.
  • such code may include functional code that defines the functions necessary to execute the methods, and may include control code related to the execution procedures necessary for the computer's processor to execute those functions according to a predetermined procedure.
  • the code may further include memory-reference code indicating at which location (address) of the computer's internal or external memory the additional information or media required for the computer's processor to execute the functions should be referenced.
  • when the computer's processor needs to communicate with a remote computer or server in order to execute the functions, the code may further include communication-related code specifying how the processor should communicate with the remote computer or server using the computer's communication module, and what information or media should be transmitted and received during the communication.
  • the storage medium is not a medium that stores data for a short moment, such as a register, a cache, or a volatile memory, but a medium that stores data semi-permanently and can be read by a device.
  • examples of the storage medium include, but are not limited to, a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device. That is, the program may be stored in various recording media on servers accessible to the computer or in various recording media on the user's computer. The media may also be distributed over network-coupled computer systems so that the computer-readable code is stored in a distributed fashion.
  • the program may reside in a random access memory (RAM), a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art.

Abstract

Disclosed is a relay server for constructing surgical information. The relay server relays communication between clients: it receives actual surgical information, generated on the basis of an actual surgical image, from a first client; transmits the received actual surgical information to a second client; receives, from the second client, virtual surgical information generated through virtual surgery based on the actual surgical information; and transmits the virtual surgical information to the first client.
PCT/KR2019/002092 2018-02-20 2019-02-20 Method and device for generating learning data WO2019164274A1 (fr)

Applications Claiming Priority (14)

Application Number Priority Date Filing Date Title
KR10-2018-0019867 2018-02-20
KR20180019867 2018-02-20
KR20180019866 2018-02-20
KR20180019868 2018-02-20
KR10-2018-0019868 2018-02-20
KR10-2018-0019866 2018-02-20
KR1020180122454A KR102014351B1 (ko) 2018-02-20 2018-10-15 Method and apparatus for constructing surgical information
KR10-2018-0122454 2018-10-15
KR10-2018-0122949 2018-10-16
KR1020180122949A KR102013806B1 (ko) 2018-02-20 2018-10-16 Method and apparatus for generating artificial data
KR1020180143367A KR102013857B1 (ko) 2018-02-20 2018-11-20 Method and apparatus for generating learning data based on a surgical image
KR10-2018-0143367 2018-11-20
KR10-2018-0149293 2018-11-28
KR1020180149293A KR102013848B1 (ko) 2018-02-20 2018-11-28 Method and apparatus for providing blood vessel information using a blood vessel model

Publications (1)

Publication Number Publication Date
WO2019164274A1 true WO2019164274A1 (fr) 2019-08-29

Family

ID=67686856

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/002092 WO2019164274A1 (fr) Method and device for generating learning data

Country Status (1)

Country Link
WO (1) WO2019164274A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101049507B1 * 2009-02-27 2011-07-15 Korea Advanced Institute of Science and Technology (KAIST) Image-guided surgery system and control method thereof
KR20120046439A * 2010-11-02 2012-05-10 Seoul National University Hospital (branch office) Surgical simulation method using 3D modeling and automatic surgical apparatus
KR20150000450A * 2011-08-26 2015-01-02 EBM Corporation Blood vessel and blood flow simulation system, method therefor, and computer software program
KR101175065B1 * 2011-11-04 2012-10-12 Apollo M Co., Ltd. Method for searching for a bleeding site using a surgical image processing apparatus
KR101302595B1 * 2012-07-03 2013-08-30 Korea Institute of Science and Technology System and method for estimating a surgical procedure stage
JP2016039874A * 2014-08-13 2016-03-24 Fujifilm Corporation Endoscopic image diagnosis support apparatus, system, method, and program

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021206517A1 * 2020-04-10 2021-10-14 (주)휴톰 Intraoperative vascular navigation method and system
KR20210126243A * 2020-04-10 2021-10-20 (주)휴톰 Intraoperative blood vessel navigation method and system
KR102457585B1 * 2020-04-10 2022-10-21 (주)휴톰 Intraoperative blood vessel navigation method and system


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19758259

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19758259

Country of ref document: EP

Kind code of ref document: A1