WO2019132169A1 - Method, apparatus, and program for surgical image playback control - Google Patents
- Publication number
- WO2019132169A1 (PCT/KR2018/010334)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- surgical
- image
- importance
- surgical image
- steps
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/70—Manipulators specially adapted for use in surgery
- A61B34/76—Manipulators having means for providing feel, e.g. force or tactile feedback
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G06T3/14—
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/20—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
Definitions
- the present invention relates to a method, apparatus, and program for controlling playback of surgical images.
- Deep learning is defined as a set of machine learning algorithms that attempt a high level of abstraction (summarizing key content or functions from large or complex data) through a combination of several nonlinear transformation techniques. Broadly, deep learning can be viewed as a field of machine learning that teaches computers to think the way people do.
- the present invention provides a method, an apparatus, and a program for controlling a surgical image reproduction.
- a method of controlling playback of a surgical image comprises: obtaining, by a computer, a surgical image; acquiring information obtained by dividing the surgical image into one or more regions; acquiring information on the surgical stage corresponding to each of the regions; determining the importance of each surgical stage; and controlling playback of the surgical image based on the determined importance.
- the step of acquiring the information on the surgical stages may include determining the type of surgery, and the step of determining the importance may include determining the importance of each surgical stage based on the determined type of surgery.
- the step of controlling playback of the surgical image may include determining, based on the importance of each surgical stage, whether to play the surgical image corresponding to that stage and at what summary level.
- the step of controlling playback of the surgical image may further include: determining a division level for each surgical stage based on its importance; dividing each surgical stage according to the determined division level; and, based on the division level of each divided surgical stage, determining whether to play the surgical image corresponding to that divided stage and at what summary level.
- the step of acquiring the divided information may include acquiring information obtained by hierarchically dividing the surgical image into one or more classification units, and the step of acquiring information on the surgical stages may include searching the hierarchically divided information downward from the upper layer and acquiring information on the surgical stage corresponding to each piece of the hierarchically divided information.
- the step of determining the importance may include determining the importance of the surgical stage corresponding to each piece of the hierarchically divided information.
- the step of controlling playback of the surgical image may include: determining, based on those importance levels, whether to hierarchically divide each surgical stage further; and determining, based on the importance of each hierarchically divided surgical stage, whether to play the surgical image corresponding to it and at what summary level.
- the method may further include recognizing at least one event included in the surgical image and reproducing the surgical image corresponding to the event.
- the step of playing the surgical image corresponding to the event may include determining the importance of the recognized event and controlling playback of the surgical image corresponding to the event based on the importance of the event.
- an apparatus comprising a memory storing one or more instructions and a processor executing the one or more instructions stored in the memory, wherein the processor performs the steps of: obtaining a surgical image; acquiring information obtained by dividing the surgical image into one or more regions; acquiring information on the surgical stage corresponding to each of the divided regions; determining the importance of each surgical stage; and controlling playback of the surgical image based on the determined importance.
- a computer program stored in a computer-readable recording medium, the computer program performing the surgical image playback control method in combination with a computer as hardware.
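- the claimed control flow (divide the image into regions, map each region to a surgical stage, score the importance of each stage, then decide playback per stage) can be sketched as follows; the stage labels, thresholds, and function names are illustrative assumptions, not part of the claims:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    stage: str          # surgical stage label assigned to this region
    importance: float   # 0.0 (trivial) .. 1.0 (critical); hypothetical scale

def playback_plan(segments, skip_below=0.2, full_above=0.8):
    """Per-stage playback decision: 'skip', 'summarize', or 'full'."""
    plan = []
    for seg in segments:
        if seg.importance < skip_below:
            plan.append((seg.stage, "skip"))        # low importance: do not play
        elif seg.importance >= full_above:
            plan.append((seg.stage, "full"))        # high importance: play in full
        else:
            plan.append((seg.stage, "summarize"))   # otherwise: play a summary
    return plan
```

A playback client would then skip, summarize, or fully render each stage according to this plan.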
- FIG. 1 is a simplified schematic diagram of a system capable of performing robotic surgery in accordance with the disclosed embodiments.
- FIG. 2 is a flowchart illustrating a method of controlling reproduction of a surgical image according to an embodiment.
- FIG. 3 is a diagram showing an example of a method of hierarchically dividing and recognizing a surgical operation.
- FIG. 4 is a view for explaining a method of determining a division level of each surgical stage and reproducing a surgical image according to an embodiment.
- FIG. 5 is a view for explaining a method of controlling reproduction of a surgical image including an event according to an embodiment.
- FIG. 6 is a configuration diagram of an apparatus according to an embodiment.
- the term "part" or "module" refers to a software or hardware component, such as an FPGA or ASIC, and a "part" or "module" performs certain roles. However, "part" or "module" is not limited to software or hardware. A "part" or "module" may be configured to reside on an addressable storage medium and configured to execute on one or more processors. Thus, by way of example, a "part" or "module" includes components such as software components, object-oriented software components, class components and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided within components and "parts" or "modules" may be combined into a smaller number of components and "parts" or "modules", or further separated into additional components and "parts" or "modules".
- FIG. 1 is a simplified schematic diagram of a system capable of performing robotic surgery in accordance with the disclosed embodiments.
- the robotic surgery system includes a medical imaging apparatus 10, a server 20, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34.
- the medical imaging equipment 10 may be omitted from the robotic surgery system according to the disclosed embodiment.
- the surgical robot 34 includes a photographing device 36 and a surgical tool 38.
- robotic surgery is performed by the user controlling the surgical robot 34 using the control unit 30.
- robot surgery may be performed automatically by the control unit 30 without user control.
- the server 20 is a computing device including at least one processor and a communication unit.
- the control unit 30 includes a computing device including at least one processor and a communication unit. In one embodiment, the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
- the photographing apparatus 36 includes at least one image sensor. That is, the photographing device 36 includes at least one camera device and is used to photograph a target object, that is, a surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled with a surgical arm of the surgical robot 34.
- the image photographed by the photographing device 36 is displayed on the display 32.
- the surgical robot 34 includes one or more surgical tools 38 that can perform cutting, clipping, anchoring, grabbing, etc., of the surgical site.
- the surgical tool 38 is used in combination with the surgical arm of the surgical robot 34.
- the control unit 30 receives information necessary for surgery from the server 20, or generates information necessary for surgery and provides the information to the user. For example, the control unit 30 displays on the display 32 information necessary for surgery, which is generated or received.
- the user operates the control unit 30 while viewing the display 32 to perform the robot surgery by controlling the movement of the surgical robot 34.
- the server 20 generates information necessary for the robot surgery using the medical image data of the object photographed previously from the medical imaging apparatus 10 and provides the generated information to the control unit 30.
- the control unit 30 provides the information received from the server 20 to the user by displaying the information on the display 32 or controls the surgical robot 34 using the information received from the server 20.
- the means usable as the medical imaging equipment 10 is not limited; various medical image acquisition means such as CT, X-ray, PET, and MRI may be used.
- the surgical image obtained in the photographing device 36 is transmitted to the control section 30.
- control unit 30 may segment the surgical image obtained during the operation in real time.
- control unit 30 transmits a surgical image to the server 20 during or after surgery.
- the server 20 can divide and analyze the surgical image.
- the server 20 learns and stores at least one model for dividing and analyzing the surgical image. In addition, the server 20 learns and stores at least one model for generating an optimized surgical process.
- the server 20 or the client can display the obtained surgical image.
- because a surgical image is generally very long, it is difficult in practice for a researcher or doctor to review it in full.
- FIG. 2 is a flowchart illustrating a method of controlling reproduction of a surgical image according to an embodiment.
- steps that may be performed on the server 20 or client shown in FIG. 1 are shown in a time-series manner.
- for convenience of explanation, the steps shown in FIG. 2 are described as being performed by the computer.
- all or some of the steps shown in FIG. 2 may be performed in the server 20 or the client, respectively.
- in step S110, the computer acquires a surgical image.
- the surgical image may be a surgical image actually performed by the surgical robot 34, or may be a simulation image performed based on the image obtained from the medical imaging apparatus 10.
- the surgical image may be an image according to the optimized surgical method, which is generated based on the image obtained from the medical imaging apparatus 10.
- the surgical image in the present specification may mean an image in which a surgical procedure is actually photographed, or a 3D modeling image generated based on a medical image obtained from the medical image photographing apparatus 10.
- the type of surgical image referred to in the present specification is not limited, and may be understood as meaning including all types of images including at least a part of the surgical procedure.
- in step S120, the computer obtains information obtained by segmenting the surgical image acquired in step S110 into one or more regions.
- the surgical image is automatically segmented by the computer.
- the surgical image may be segmented by various criteria.
- a surgical image may be segmented based on the type of object included in the image.
- segmentation based on the kind of object requires the computer to first recognize each object.
- the objects recognized in the surgical image include parts of the human body, objects introduced from the outside, and objects generated internally.
- the human body parts include body parts captured by medical imaging (e.g., CT) performed before surgery and body parts that were not captured.
- body parts captured by medical imaging include organs, blood vessels, bones, tendons, etc., and such body parts can be recognized based on a 3D modeling image generated from the medical image.
- the position, size, and shape of each body part are recognized in advance by a 3D analysis method based on a medical image.
- the computer defines an algorithm that can track, in real time, the position of each body part corresponding to the surgical image, and based on it can obtain information on the position, size, and shape of each body part included in the surgical image.
- body parts not captured by medical imaging include the omentum; since it does not appear in the medical image, it must be recognized in real time during surgery.
- the computer can determine the location and size of the omentum using image recognition methods, and predict the location of the vessel in the presence of blood vessels within the omentum.
- objects introduced from the outside include, for example, surgical tools, gauze, and clips. Since these have predetermined morphological characteristics, the computer can recognize them in real time through image analysis during surgery.
- internally generated objects include, for example, bleeding from a body part. The computer can recognize these in real time through image analysis during surgery.
- the movements of the body parts, such as organs, and the generation of internal objects are all caused by the movement of externally introduced objects.
- a surgical image can be segmented based on the motion of each object.
- the surgical image may be segmented based on the motion, i.e., action, of the externally introduced object.
- the computer judges the type of each object recognized in the surgical image.
- the computer recognizes the motion of each object, that is, its action, based on predefined unit operations and predefined series of operations.
- the computer recognizes the type of each action, and also recognizes the cause of each action.
- the computer can divide the surgical image based on the recognized actions, and through stepwise division can recognize everything from each detailed operation up to the type of the whole operation.
- the computer determines the type of predefined operation corresponding to the surgical image from the judgment of the action.
- from the type of surgery, information about the entire surgical procedure can be obtained. If there are multiple surgical processes for the same kind of surgery, one surgical process may be selected based on the doctor's choice, or based on the actions recognized up to a certain point in time.
- the computer can recognize and predict the surgical stage based on the acquired surgical procedure. For example, if a particular step in a series of surgical procedures is recognized, the steps following it can be predicted, or the candidates for possible steps can be narrowed down. This can greatly reduce the error rate of surgical image recognition caused by occlusion and the like. Further, when the surgical image deviates from the predictable surgical stage by more than a predetermined error range, it may be recognized that a surgical error situation has occurred.
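- this prediction can be sketched as follows, assuming a hypothetical fixed procedure (the real procedure information would come from the recognized surgical process): once a stage is recognized, only the next few stages are plausible candidates, and an observed stage far outside that range is flagged as a possible error situation:

```python
# Hypothetical ordered procedure; stage names are illustrative only.
PROCEDURE = ["incision", "fat removal", "vessel ligation",
             "organ resection", "anastomosis", "suturing"]

def candidate_stages(current, lookahead=2):
    """Narrow down the plausible next stages after the recognized one."""
    i = PROCEDURE.index(current)
    return PROCEDURE[i + 1 : i + 1 + lookahead]

def is_anomalous(current, observed, tolerance=2):
    """True if the observed stage deviates beyond the predictable range,
    which may indicate a surgical error situation."""
    return abs(PROCEDURE.index(observed) - PROCEDURE.index(current)) > tolerance
```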
- the computer can make a judgment on each action based on the recognition of each action. For example, a computer can recognize the necessity and effectiveness of each action.
- the computer can make a judgment as to whether each action was necessary or unnecessary.
- the computer can determine whether each action was performed efficiently, even if each action was required. This is used to provide an operative report, eliminate unnecessary operations in the surgical procedure, and streamline inefficient operations.
- the surgical image is thus largely divided into components including body parts (organs and omentum), externally introduced objects, internally generated objects, actions, the type of surgery, and the necessity and efficiency of each action. That is, instead of recognizing the surgical image as a whole, the computer divides it into component units that cover all elements of the image as completely as possible while minimizing mutual overlap; by recognizing the image based on these component units, the surgical image can be recognized more specifically and more easily.
- the computer may divide the surgical image hierarchically (or stepwise).
- the computer divides the surgical image hierarchically (or stepwise) into one or more classification units, and recognizes the operations corresponding to each of the divided classification units hierarchically (or stepwise).
- the computer may sequentially recognize the operations of the first classification unit, the second classification unit, the third classification unit, and the fourth classification unit included in the surgical image.
- the first classification unit is a component unit
- the second classification unit is a subsegment unit
- the third classification unit is a segmentation unit
- the fourth classification unit is a subsegmentation unit, but it is not limited thereto.
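- the four-level hierarchy above can be sketched as a nested structure (all labels here are illustrative assumptions, not the patent's codes); recognition proceeds from the top-level operation type down to the smallest unit actions:

```python
HIERARCHY = {                        # 4th classification unit: type of surgery
    "gastric cancer surgery": {
        "gastrectomy": {             # 3rd classification unit
            "vessel cutting": [      # 2nd classification unit
                "grasp", "cut"       # 1st classification unit: unit actions
            ],
            "fat removal": ["grasp", "move", "cut"],
        },
        "suturing": {"wound closure": ["grasp", "move"]},
    }
}

def unit_actions(tree):
    """Flatten the hierarchy to the ordered sequence of unit actions."""
    if isinstance(tree, list):
        return list(tree)
    out = []
    for child in tree.values():
        out.extend(unit_actions(child))
    return out
```

Walking the structure top-down mirrors the stepwise recognition described above; flattening it bottom-up yields the minimal unit actions.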
- FIG. 3 an example of a method of hierarchically dividing and recognizing a surgical operation is shown.
- a method is schematically shown in which the surgical operation is divided into a first classification unit 210, a second classification unit 220, a third classification unit 230, and a fourth classification unit 240, and the divided data are recognized.
- each code shown in FIG. 3 may refer to a pre-established code that can identify actions included in each classification unit.
- the operation of the first classification unit may include capturing, cutting, moving, etc.
- the operation of the second classification unit may include vessel resection, fat removal, organ resection, organ connection, suturing, and the like
- the operation of the fourth classification unit may include gastric cancer surgery.
- gastric cancer surgery can largely include opening the abdomen, gastrectomy, organ connection, and suturing; each of these operations can be embodied more concretely as, for example, vessel resection or the connection of parts of other organs; each of those can in turn be embodied as cutting blood vessels, removing obstacles such as fat, and so on; and these can be further specified as simpler unit actions such as moving, grasping, and cutting.
- the hierarchy can also be used in reverse: the operation can be divided into minimal detail units, and the computer can be trained to recognize the upper-level operations step by step using the divided results.
- the surgical site is different for each patient, each disease differs in shape, and the operation patterns are different depending on the type of operation.
- using the learning model, it is possible to provide a surgical motion recognition model applicable regardless of the patient's physical condition or the type of surgery and, if necessary, a surgical motion recognition model fine-tuned to the patient's condition or the type of surgery.
- the computer can recognize an event that occurs in a surgical image.
- the event includes a surgical error situation, such as bleeding.
- the computer can recognize this through image processing of the surgical image.
- the computer may divide the surgical image into one or more event groups including recognized events.
- the divided event groups may be managed separately, included in a classification unit according to the disclosed embodiment, or may be utilized as an independent classification unit for analysis of a surgical operation.
- the computer can determine the cause of the event based on the recognized event and the surgical operation before and after the event was recognized.
- the computer may generate learning data for analyzing the cause of the event by storing the operations of the predetermined classification unit before and after the occurrence of the event together with information on the event.
- the computer can perform learning using the generated learning data, and learn the correlation between the operation and the events of each classification unit.
- the computer can determine the cause of the event occurrence and provide feedback to the user.
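- the construction of learning data described above (operations of a predetermined classification unit before and after an event, stored together with the event information) can be sketched as follows; the window size and label names are illustrative assumptions:

```python
def event_windows(actions, events, window=3):
    """actions: recognized unit-action labels in temporal order;
    events: (index, label) pairs for recognized events.
    Returns training samples pairing each event with its surrounding actions,
    so a model can later learn action-to-event correlations."""
    samples = []
    for idx, label in events:
        lo, hi = max(0, idx - window), idx + window + 1
        samples.append({"context": actions[lo:hi], "event": label})
    return samples
```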
- the computer may perform learning for optimization of surgical operations based on operation of a given classification unit.
- the computer can learn an optimized sequence and method for performing the operation of each classification unit according to the physical condition of the patient and the type of surgery.
- the computer may perform learning for optimization of the surgical operation based on the operation of the first classification unit.
- the computer may obtain one or more reference surgery information.
- the computer can perform learning based on the order of the surgical operations included in the one or more pieces of reference surgery information and determine an optimized order of operations for each surgery according to the learning results.
- since the operation of the first classification unit is a minimal unit operation commonly applied in any surgery, when learning is performed based on the first classification unit, an optimized order of operations can be obtained regardless of the type of surgery and the patient's physical condition. Likewise, an optimized learning model for a given type of surgery and physical condition can be obtained through fine-tuning of the learned model.
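- purely as an illustration (the patent does not specify the learning method), one naive stand-in is to derive a consensus ordering of first-unit operations from several reference surgeries by averaging each operation's position:

```python
from collections import defaultdict

def consensus_order(reference_sequences):
    """Order operations by their average position across reference surgeries;
    a hypothetical stand-in for the learning described above."""
    positions = defaultdict(list)
    for seq in reference_sequences:
        for pos, op in enumerate(seq):
            positions[op].append(pos)
    return sorted(positions, key=lambda op: sum(positions[op]) / len(positions[op]))
```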
- in step S130, the computer obtains information on the surgical stage corresponding to each of the one or more regions divided in step S120.
- the computer then determines the importance of each surgical stage (step S140) and controls playback of the surgical image based on the determined importance (step S150).
- the computer determines the type of operation and determines the importance of each surgical stage based on the type of operation that was determined. For example, certain surgical steps may be important for certain operations, but may be less important for other operations. Accordingly, the computer can determine the type of operation from the surgical image according to the above-described method, and determine the importance of each surgical step in which the surgical image is divided based on the determined type of operation.
- based on the importance of each surgical stage, the computer determines whether to play the surgical image corresponding to that stage and at what summary level. For example, a surgical stage with relatively low importance may not be played, or may be heavily summarized; likewise, a relatively important surgical stage may not be summarized at all, or may be played with less summarization.
- the computer determines the segmentation level of each surgical step based on the determined importance for each surgical step.
- the computer divides each surgical stage according to the determined division level and, based on the division level of each divided surgical stage, determines whether to play the surgical image corresponding to it and at what summary level.
- FIG. 4 is a view for explaining a method of determining a division level of each surgical stage and reproducing a surgical image according to an embodiment.
- a tree 300 corresponding to an example of hierarchically dividing a surgical image is shown.
- although the tree 300 shown in FIG. 4 is drawn as a binary tree, the data structure for hierarchically dividing a surgical image is not limited to a binary tree and may have a higher-order tree structure.
- in FIG. 4, for convenience of explanation, it is assumed that information on the surgical image divided hierarchically as shown in FIG. 3 is stored in tree form.
- the root node 310 of the tree 300 may correspond to a fourth classification unit, i.e., the type of surgery.
- the child nodes 320 and 330 of the root node 310 correspond to the third classification unit, their child nodes correspond to the second classification unit, and those nodes' children correspond to the first classification unit, but the structure is not limited thereto.
- the computer can determine the type of surgery at the root node 310 and determine the importance of the surgical stages corresponding to each of its child nodes 320 and 330.
- node 320 may correspond to a fat removal operation
- node 330 may correspond to a gastrostomy operation.
- the computer may play a summarized surgical image, or skip playback entirely, without further dividing the surgical stage corresponding to node 320, which corresponds to the relatively less important fat removal operation.
- the computer may further divide the surgical stage corresponding to node 330 and determine the importance of the surgical stage corresponding to each of the segmented nodes 340 and 350.
- further division may stop at node 350 because it corresponds to a relatively less important surgical stage, and the surgical image corresponding to node 350 may be reproduced in summarized form.
- the summary level for node 350 may be lower than the summary level for node 320. That is, the surgical image corresponding to node 350 may be displayed longer than the surgical image corresponding to node 320.
- node 340, corresponding to a relatively important surgical stage, may be further divided, and the importance of the surgical stages corresponding to each of its child nodes 360 and 370 may be determined.
- the computer may display the surgical image corresponding to node 360 longer (i.e., less summarized) than the surgical image corresponding to node 370.
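The FIG. 4 traversal can be sketched as a depth-first walk of the tree: each node's importance decides whether to skip or summarize the stage outright, or to divide it further and repeat the decision on its children. The node names, importance values, and `DIVIDE_THRESHOLD` below are illustrative assumptions; the patent does not fix concrete numbers.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    importance: float            # assumed to be pre-computed per stage
    children: list = field(default_factory=list)

DIVIDE_THRESHOLD = 0.5  # divide further only if the stage is important enough

def plan_playback(node: Node, plan: list) -> list:
    """Depth-first traversal producing (stage, action) pairs."""
    if node.children and node.importance >= DIVIDE_THRESHOLD:
        for child in node.children:      # important enough: refine decision
            plan_playback(child, plan)
        return plan
    # Leaf, or not worth dividing: decide this stage's playback directly.
    if node.importance < 0.2:
        plan.append((node.name, "skip"))
    elif node.importance < DIVIDE_THRESHOLD:
        plan.append((node.name, "summarize"))
    else:
        plan.append((node.name, "play"))
    return plan

# Mirrors FIG. 4: node 320 (fat removal) is summarized without further
# division, while node 330 is divided and its children re-examined.
tree = Node("root", 1.0, [
    Node("320:fat-removal", 0.3),
    Node("330:gastrostomy", 0.9, [
        Node("350", 0.4),
        Node("340", 0.8, [Node("360", 0.7), Node("370", 0.6)]),
    ]),
])
```

Running `plan_playback(tree, [])` yields a flat playback plan in which only the important branch (node 340) is expanded down to its children.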
- FIG. 5 is a view for explaining a method of controlling reproduction of a surgical image including an event according to an embodiment.
- an event may occur in the surgical image regardless of the importance of each surgical step.
- an event may include bleeding or a surgical error situation.
- the node 410 included in the tree 400 corresponds to a fat removal operation of relatively low importance, but an event (e.g., bleeding) may occur in the surgical stage corresponding to node 420, one of the child nodes of node 410.
- the computer may summarize and display the surgical image corresponding to node 410, but may leave unsummarized the surgical image corresponding to node 420, the surgical stage in which the event occurred.
- a surgical image corresponding to node 420 may be reproduced to occupy a substantial portion of the summarized surgical image corresponding to node 410.
- the computer can determine the importance of each recognized event.
- the computer can control the reproduction of the surgical image corresponding to the event based on the determined importance. For example, the computer may determine whether to reproduce the surgical image corresponding to the event, and at what summary level, and reproduce it accordingly.
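The event handling described above amounts to an importance override: a recognized event such as bleeding raises the effective importance of its segment regardless of the enclosing stage's rank. The function name and the max-based combination below are assumptions made for illustration.

```python
from typing import Optional

def effective_importance(stage_importance: float,
                         event_importance: Optional[float]) -> float:
    """An event raises a segment's importance regardless of its stage.

    `event_importance` is None when no event was recognized in the segment.
    """
    if event_importance is None:
        return stage_importance
    return max(stage_importance, event_importance)
```

With such an override, a bleeding event of importance 0.9 inside a low-importance fat removal stage (0.1) would still be played in detail, matching the node 410/420 example.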
- the surgical image playback control method according to the disclosed embodiment can be applied to a situation in which a surgical image is divided into one or more stages and the stages are grouped into one or more groups.
- the computer determines the importance of each of the groups; surgical images corresponding to a group of relatively low importance may be omitted or heavily summarized. Similarly, surgical images corresponding to groups of relatively high importance may not be summarized, or may be summarized only lightly.
- the computer may obtain information about the time available to reproduce the surgical image and determine the summary level for each surgical image such that the entire surgical image can be reproduced within the acquired time. For example, if the surgical image needs to be reproduced in a relatively short time, portions may be omitted or further summarized depending on their importance.
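One plausible way to fit playback into a target duration is to allocate playback time to stages in proportion to importance and omit stages whose share falls below a minimum watchable length. This is a sketch under stated assumptions, not the patent's method; the proportional weighting and `min_seconds` cutoff are illustrative choices.

```python
def allocate_time(durations, importances, budget, min_seconds=1.0):
    """Return per-stage playback seconds summing to at most `budget`.

    durations:   original length of each stage in seconds
    importances: importance of each stage in [0, 1]
    budget:      total playback time available in seconds
    """
    weights = [d * i for d, i in zip(durations, importances)]
    total = sum(weights) or 1.0
    alloc = [budget * w / total for w in weights]
    # Omit stages that would receive less than the minimum watchable length.
    return [a if a >= min_seconds else 0.0 for a in alloc]
```

For example, with stages of 60, 120, and 60 seconds and importances 0.1, 0.9, and 0.5, a 30-second budget concentrates most of the playback time on the middle stage.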
- the computer may obtain, from a user, information about a surgical step the user desires to view. In this case, the computer may leave the surgical steps corresponding to the acquired information unsummarized, or reproduce them with less summarization.
- the computer can search the data structure in real time, determining the importance of each step and deciding whether to play it and at what summary level.
- the computer may determine that a particular surgical step is less important and decide to omit it or summarize it heavily.
- conversely, the computer may further divide a surgical stage to display it in more detail.
- FIG. 6 is a configuration diagram of an apparatus 100 according to an embodiment.
- the processor 102 may include one or more cores (not shown) and a connection path (e.g., a bus) for transmitting and receiving signals to and from a graphics processing unit (not shown).
- the processor 102 in accordance with one embodiment executes one or more instructions stored in the memory 104 to perform the surgical image playback control method described with reference to FIGS. 1 to 8.
- the processor 102 may, by executing one or more instructions stored in the memory, obtain a surgical image, obtain information obtained by dividing the surgical image into one or more regions, obtain information on the surgical stage corresponding to each of the divided regions, determine the importance of each surgical stage, and control reproduction of the surgical image based on the determined importance.
- the processor 102 may further include a random access memory (RAM, not shown) and a read-only memory (ROM, not shown) for temporarily and/or permanently storing signals (or data) processed therein.
- the processor 102 may be implemented as a system-on-chip (SoC) including at least one of a graphics processing unit, a RAM, and a ROM.
- the memory 104 may store programs (one or more instructions) for processing and control of the processor 102. Programs stored in the memory 104 may be divided into a plurality of modules according to functions.
- the surgical image playback control method described above can be implemented as a program (or application) to be executed in combination with a computer, which is hardware, and stored in a medium.
- the above-described program may include code written in a computer language such as C, C++, Java, or machine language, which the processor (CPU) of the computer can read through the device interface of the computer, in order for the computer to read the program and execute the methods implemented as the program.
- the code may include functional code related to functions defining the operations necessary to execute the above methods, and control code related to the execution procedure necessary for the processor of the computer to execute those functions in a predetermined sequence.
- the code may further include memory-reference code indicating at which location (address) in the internal or external memory of the computer the additional information or media needed for the processor of the computer to execute the functions should be referenced.
- in addition, if the processor of the computer needs to communicate with a remote computer or server to execute the functions, the code may further include communication-related code specifying how to communicate with the remote computer or server using the communication module of the computer, and what information or media should be transmitted or received during communication.
- the storage medium is not a medium that stores data for a short time, such as a register, cache, or memory, but a medium that stores data semi-permanently and is readable by a device.
- examples of the storage medium include ROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage, but are not limited thereto.
- the program may be stored in various recording media on various servers to which the computer can access, or on various recording media on the user's computer.
- the medium may be distributed to a network-connected computer system so that computer-readable codes may be stored in a distributed manner.
- the steps of a method or algorithm described in connection with the embodiments of the present invention may be embodied directly in hardware, in software modules executed in hardware, or in a combination of both.
- the software module may reside in a random access memory (RAM), a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium known in the art to which the invention pertains.
Claims (10)
- A surgical image playback control method, comprising: obtaining, by a computer, a surgical image; obtaining information obtained by dividing the surgical image into one or more regions; obtaining information on a surgical stage corresponding to each of the divided one or more regions; determining a degree of importance for each of the surgical stages; and controlling reproduction of the surgical image based on the determined importance.
- The method according to claim 1, wherein obtaining information on the surgical stage comprises determining a type of the surgery, and determining the importance comprises determining the importance of each of the surgical stages based on the type of surgery.
- The method according to claim 1, wherein controlling reproduction of the surgical image comprises determining, based on the importance of each of the surgical stages, whether to reproduce the surgical image corresponding to each of the surgical stages and at what summary level.
- The method according to claim 3, wherein controlling reproduction of the surgical image comprises: determining a division level of each of the surgical stages based on the importance of each of the surgical stages; dividing each of the surgical stages according to the determined division level; and reproducing the surgical image corresponding to each of the divided surgical stages, while determining whether to reproduce the surgical image corresponding to each divided surgical stage, and at what summary level, based on the division level of each divided surgical stage.
- The method according to claim 1, wherein obtaining the divided information comprises obtaining information obtained by hierarchically dividing the surgical image into one or more classification units, and obtaining information on the surgical stage comprises: searching the hierarchically divided information, starting from an upper layer; and obtaining information on the surgical stage corresponding to each piece of the hierarchically divided information.
- The method according to claim 5, wherein determining the importance comprises determining the importance of the surgical stage corresponding to each piece of the hierarchically divided information, and controlling reproduction of the surgical image comprises: determining whether to hierarchically divide each of the surgical stages based on the importance of each of the surgical stages; and determining whether to reproduce the surgical image corresponding to each of the hierarchically divided surgical stages, and at what summary level, based on the importance of each of the hierarchically divided surgical stages.
- The method according to claim 1, further comprising: recognizing at least one event included in the surgical image; and reproducing a surgical image corresponding to the event.
- The method according to claim 7, wherein reproducing the surgical image corresponding to the event comprises: determining the importance of the recognized event; and controlling reproduction of the surgical image corresponding to the event based on the importance of the event.
- A surgical image playback control apparatus, comprising: a memory storing one or more instructions; and a processor executing the one or more instructions stored in the memory, wherein the processor, by executing the one or more instructions, performs: obtaining a surgical image; obtaining information obtained by dividing the surgical image into one or more regions; obtaining information on a surgical stage corresponding to each of the divided one or more regions; determining a degree of importance for each of the surgical stages; and controlling reproduction of the surgical image based on the determined importance.
- A computer program stored in a computer-readable recording medium, the program being combined with a computer, which is hardware, to perform the method of claim 1.
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2017-0182900 | 2017-12-28 | ||
KR20170182900 | 2017-12-28 | ||
KR10-2017-0182899 | 2017-12-28 | ||
KR20170182898 | 2017-12-28 | ||
KR20170182899 | 2017-12-28 | ||
KR10-2017-0182898 | 2017-12-28 | ||
KR10-2018-0026574 | 2018-03-06 | ||
KR1020180026574A KR101880246B1 (en) | 2017-12-28 | 2018-03-06 | Method, apparatus and program for controlling surgical image play |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019132169A1 true WO2019132169A1 (en) | 2019-07-04 |
Family
ID=63058435
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/010334 WO2019132169A1 (en) | 2017-12-28 | 2018-09-05 | Method, apparatus, and program for surgical image playback control |
Country Status (2)
Country | Link |
---|---|
KR (4) | KR101880246B1 (en) |
WO (1) | WO2019132169A1 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102172258B1 (en) * | 2018-12-05 | 2020-10-30 | 쓰리디메디비젼 주식회사 | Surgical video procution system and surgical video procution method |
KR102361219B1 (en) * | 2019-09-09 | 2022-02-11 | (주)미래컴퍼니 | Method and apparatus for obtaining surgical data in units of sub blocks |
KR102315212B1 (en) * | 2019-10-14 | 2021-10-20 | 경상국립대학교산학협력단 | Hook device for removal of broken intramedullary nail |
KR102180921B1 (en) * | 2019-10-18 | 2020-11-19 | 주식회사 엠티이지 | Apparatus and method for inserting annotation on surgery video based on artificial intelligence |
CN110840534B (en) * | 2019-12-19 | 2022-05-17 | 上海钛米机器人科技有限公司 | Puncture speed planning method and device, puncture equipment and computer storage medium |
KR20210130041A (en) * | 2020-04-21 | 2021-10-29 | 사회복지법인 삼성생명공익재단 | System for providing educational information of surgical techniques and skills and surgical guide system based on machine learning using 3 dimensional image |
KR102426925B1 (en) * | 2020-06-23 | 2022-07-29 | (주)휴톰 | Method and program for acquiring motion information of a surgical robot using 3d simulation |
KR102407531B1 (en) * | 2020-08-05 | 2022-06-10 | 주식회사 라온메디 | Apparatus and method for tooth segmentation |
KR102427171B1 (en) * | 2020-09-14 | 2022-07-29 | (주)휴톰 | Method and Apparatus for providing object labeling within Video |
KR102619729B1 (en) * | 2020-11-20 | 2023-12-28 | 서울대학교산학협력단 | Apparatus and method for generating clinical record data |
CN112891685B (en) * | 2021-01-14 | 2022-07-01 | 四川大学华西医院 | Method and system for intelligently detecting position of blood vessel |
KR102640314B1 (en) * | 2021-07-12 | 2024-02-23 | (주)휴톰 | Artificial intelligence surgery system amd method for controlling the same |
CN113616336B (en) * | 2021-09-13 | 2023-04-14 | 上海微创微航机器人有限公司 | Surgical robot simulation system, simulation method, and readable storage medium |
KR102405647B1 (en) * | 2022-03-15 | 2022-06-08 | 헬리오센 주식회사 | Space function system using 3-dimensional point cloud data and mesh data |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120046439A (en) * | 2010-11-02 | 2012-05-10 | 서울대학교병원 (분사무소) | Method of operation simulation and automatic operation device using 3d modelling |
KR101175065B1 (en) * | 2011-11-04 | 2012-10-12 | 주식회사 아폴로엠 | Method for bleeding scanning during operation using image processing apparatus for surgery |
KR20120126679A (en) * | 2011-05-12 | 2012-11-21 | 주식회사 이턴 | Control method of surgical robot system, recording medium thereof, and surgical robot system |
KR101302595B1 (en) * | 2012-07-03 | 2013-08-30 | 한국과학기술연구원 | System and method for predict to surgery progress step |
KR20160096868A (en) * | 2015-02-06 | 2016-08-17 | 경희대학교 산학협력단 | Apparatus for generating guide for surgery design information and method of the same |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009237923A (en) * | 2008-03-27 | 2009-10-15 | Nec Corp | Learning method and system |
JP2010092266A (en) * | 2008-10-08 | 2010-04-22 | Nec Corp | Learning device, learning method and program |
KR102239714B1 (en) * | 2014-07-24 | 2021-04-13 | 삼성전자주식회사 | Neural network training method and apparatus, data processing apparatus |
2018
- 2018-03-06 KR KR1020180026574A patent/KR101880246B1/en active IP Right Grant
- 2018-05-14 KR KR1020180055116A patent/KR20190080703A/en active Application Filing
- 2018-05-14 KR KR1020180055105A patent/KR102298412B1/en active IP Right Grant
- 2018-05-14 KR KR1020180055097A patent/KR20190080702A/en active Application Filing
- 2018-09-05 WO PCT/KR2018/010334 patent/WO2019132169A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120046439A (en) * | 2010-11-02 | 2012-05-10 | 서울대학교병원 (분사무소) | Method of operation simulation and automatic operation device using 3d modelling |
KR20120126679A (en) * | 2011-05-12 | 2012-11-21 | 주식회사 이턴 | Control method of surgical robot system, recording medium thereof, and surgical robot system |
KR101175065B1 (en) * | 2011-11-04 | 2012-10-12 | 주식회사 아폴로엠 | Method for bleeding scanning during operation using image processing apparatus for surgery |
KR101302595B1 (en) * | 2012-07-03 | 2013-08-30 | 한국과학기술연구원 | System and method for predict to surgery progress step |
KR20160096868A (en) * | 2015-02-06 | 2016-08-17 | 경희대학교 산학협력단 | Apparatus for generating guide for surgery design information and method of the same |
Also Published As
Publication number | Publication date |
---|---|
KR101880246B1 (en) | 2018-07-19 |
KR20190080703A (en) | 2019-07-08 |
KR102298412B1 (en) | 2021-09-06 |
KR20190080702A (en) | 2019-07-08 |
KR20190088375A (en) | 2019-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019132169A1 (en) | Method, apparatus, and program for surgical image playback control | |
WO2019132168A1 (en) | System for learning surgical image data | |
KR102014385B1 (en) | Method and apparatus for learning surgical image and recognizing surgical action based on learning | |
WO2019132614A1 (en) | Surgical image segmentation method and apparatus | |
WO2019132165A1 (en) | Method and program for providing feedback on surgical outcome | |
US20220108450A1 (en) | Surgical simulator providing labeled data | |
US20230289474A1 (en) | Method and system for anonymizing raw surgical procedure videos | |
WO2019235828A1 (en) | Two-face disease diagnosis system and method thereof | |
WO2019231104A1 (en) | Method for classifying images by using deep neural network and apparatus using same | |
WO2020067632A1 (en) | Method, apparatus, and program for sampling learning target frame image of video for ai image learning, and method of same image learning | |
WO2019132244A1 (en) | Method for generating surgical simulation information and program | |
WO2021206518A1 (en) | Method and system for analyzing surgical procedure after surgery | |
KR102276862B1 (en) | Method, apparatus and program for controlling surgical image play | |
WO2022108387A1 (en) | Method and device for generating clinical record data | |
WO2019164273A1 (en) | Method and device for predicting surgery time on basis of surgery image | |
WO2020159276A1 (en) | Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical image | |
WO2023287077A1 (en) | Artificial intelligence surgical system and control method therefor | |
WO2021261727A1 (en) | Capsule endoscopy image reading system and method | |
WO2021246648A1 (en) | Method and device for processing blood vessel image on basis of user input | |
WO2021206517A1 (en) | Intraoperative vascular navigation method and system | |
WO2021015490A2 (en) | Method and device for analyzing specific area of image | |
WO2019164278A1 (en) | Method and device for providing surgical information using surgical image | |
CN116787444A (en) | Mechanical arm simulation system and robot simulation system | |
WO2019164279A1 (en) | Method and apparatus for evaluating recognition level of surgical image | |
JP2000081908A (en) | Plant monitor and control system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18896849 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 18896849 Country of ref document: EP Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26/01/2021) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 18896849 Country of ref document: EP Kind code of ref document: A1 |