WO2019132170A1 - Method, apparatus and program for managing learning data - Google Patents
Method, apparatus and program for managing learning data
- Publication number
- WO2019132170A1 WO2019132170A1 PCT/KR2018/010335 KR2018010335W WO2019132170A1 WO 2019132170 A1 WO2019132170 A1 WO 2019132170A1 KR 2018010335 W KR2018010335 W KR 2018010335W WO 2019132170 A1 WO2019132170 A1 WO 2019132170A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- learning data
- learned model
- learning
- new
- existing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/14—Transformations for image registration, e.g. adjusting or mapping for alignment of images
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/20—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
Definitions
- the present invention relates to a learning data management method, apparatus, and program.
- Artificial intelligence (AI) refers to computer systems that implement human-level intelligence. Unlike existing rule-based smart systems, AI systems learn, judge, and become smarter on their own. As artificial intelligence systems are used, their recognition rates improve and they understand user preferences more accurately, so existing rule-based smart systems are gradually being replaced by deep-learning-based artificial intelligence systems.
- Artificial intelligence technology consists of element technologies that utilize deep learning and machine learning.
- Machine learning is an algorithm technology that classifies / learns the characteristics of input data by itself.
- Element technologies simulate functions of the human brain, such as recognition and judgment, using machine learning algorithms such as deep learning; they include understanding, reasoning/prediction, knowledge representation, and motion control.
- Deep neural networks (DNNs) are widely used in machine learning and are considered to approach human-level performance in object, face, and speech recognition.
- In training such models, a method of first training with a basic data set and then fine-tuning with a data set of a specific field is frequently used.
- a problem to be solved by the present invention is to provide a learning data management method, an apparatus, and a program.
- According to one aspect, a learning data management method comprises: acquiring, by a computer, new learning data; performing a test on the acquired new learning data using a learned model; extracting, as a result of the test, first learning data for which the labeled information is obtained with an accuracy equal to or higher than a predetermined first reference value; deleting the extracted first learning data from the new learning data; and re-training the learned model using the new learning data from which the extracted learning data has been deleted.
- the predetermined first reference value may be adjusted such that the amount of the new learning data does not exceed a predetermined range.
- The method may further comprise: acquiring existing learning data used for training the learned model; performing a test on the existing learning data using the re-trained model; extracting, as a result of the test, second learning data for which the labeled information is obtained with an accuracy equal to or higher than a predetermined second reference value; and deleting the extracted second learning data from the existing learning data.
- the predetermined second reference value may be adjusted such that the amounts of the new learning data and the existing learning data do not exceed a predetermined range.
- the testing may include determining when to perform the test, and performing the test if the amount of the new learning data exceeds a predetermined reference value.
- The step of performing the test may include: inputting labeled third learning data included in the new learning data into the learned model; obtaining an output of the learned model; and comparing the obtained output with the information labeled on the third learning data.
- The step of re-training the learned model may include re-training the learned model such that the decision boundary of the re-trained model is the same as that of the learned model.
- The method may further include acquiring existing learning data used for training the learned model, and the step of re-training the learned model may include re-training the learned model such that features extracted from the existing learning data using the re-trained model are located close to features extracted from the existing learning data using the learned model.
- According to another aspect, a learning data management apparatus includes a memory storing one or more instructions and a processor executing the one or more instructions stored in the memory, wherein the processor, by executing the one or more instructions, acquires new learning data; performs a test on the acquired new learning data using a learned model; extracts, as a result of the test, first learning data for which the labeled information is obtained with an accuracy equal to or higher than a predetermined first reference value; deletes the extracted first learning data from the new learning data; and re-trains the learned model using the new learning data from which the extracted learning data has been deleted.
- According to another aspect, a computer program is stored in a computer-readable recording medium and, combined with a computer as hardware, performs the learning data management method according to an embodiment of the present invention.
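The method summarized above can be sketched as a short procedure. The sketch below is purely illustrative: `model_predict`, its accuracy-style interface, and the 0.9 threshold are assumptions for demonstration, not elements of the disclosed embodiments.

```python
import numpy as np

def manage_learning_data(model_predict, new_data, new_labels, first_reference=0.9):
    # Test each new sample with the learned model: the hypothetical
    # model_predict(x, y) returns the accuracy with which the model
    # recovers the labeled information y for input x.
    accuracy = np.array([model_predict(x, y) for x, y in zip(new_data, new_labels)])
    # Extract and delete samples the model already handles with accuracy
    # >= the first reference value; keep the rest for re-training.
    keep = accuracy < first_reference
    return new_data[keep], new_labels[keep]

# Toy usage with a dummy model that is confident on even-valued inputs only.
data = np.array([[0.0], [1.0], [2.0], [3.0]])
labels = np.array([0, 1, 0, 1])
dummy_predict = lambda x, y: 0.95 if x[0] % 2 == 0 else 0.5
kept_x, kept_y = manage_learning_data(dummy_predict, data, labels)
# kept_x now holds only the samples still worth training on (inputs 1.0 and 3.0)
```

The retained pool is then fed back into training, which is the re-training step of the method.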
- FIG. 1 is a simplified schematic diagram of a system capable of performing robotic surgery in accordance with the disclosed embodiments.
- FIG. 2 is a flowchart illustrating a method of managing learning data according to an embodiment.
- FIG. 3 is a view showing a storage space including existing learning data and a previously learned model according to an embodiment.
- FIG. 4 is a diagram showing an example of selecting new learning data.
- FIG. 5 is a diagram showing an example in which the learned model is re-trained.
- FIG. 6 is a flowchart illustrating a method of managing and utilizing existing training data according to an embodiment.
- FIG. 7 is a diagram showing an example of a method of managing existing learning data.
- FIG. 8 is a view showing a storage space in which new and existing learning data are stored.
- FIG. 9 is a block diagram of an apparatus according to an embodiment.
- As used herein, the term "part" or "module" refers to a software component or a hardware component such as an FPGA or ASIC, and a "part" or "module" performs certain roles. However, "part" or "module" is not limited to software or hardware. A "part" or "module" may be configured to reside on an addressable storage medium and configured to execute on one or more processors. Thus, by way of example, a "part" or "module" may include components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided in the components and "parts" or "modules" may be combined into a smaller number of components and "parts" or "modules", or further separated into additional components and "parts" or "modules".
- FIG. 1 is a simplified schematic diagram of a system capable of performing robotic surgery in accordance with the disclosed embodiments.
- the robot surgery system includes a medical imaging apparatus 10, a server 100, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34.
- the medical imaging equipment 10 may be omitted from the robotic surgery system according to the disclosed embodiment.
- the surgical robot 34 includes a photographing device 36 and a surgical tool 38.
- robotic surgery is performed by the user controlling the surgical robot 34 using the control unit 30.
- robot surgery may be performed automatically by the control unit 30 without user control.
- the server 100 is a computing device that includes at least one processor and a communication unit.
- the control unit 30 includes a computing device including at least one processor and a communication unit. In one embodiment, the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
- the photographing apparatus 36 includes at least one image sensor. That is, the photographing device 36 includes at least one camera device and is used to photograph a target object, that is, a surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled with a surgical arm of the surgical robot 34.
- the image photographed at the photographing device 36 is displayed on the display 32.
- the surgical robot 34 includes one or more surgical tools 38 that can perform cutting, clipping, anchoring, grabbing, etc., of the surgical site.
- the surgical tool 38 is used in combination with the surgical arm of the surgical robot 34.
- the control unit 30 receives information necessary for surgery from the server 100 or generates information necessary for surgery and provides the information to the user. For example, the control unit 30 displays on the display 32 information necessary for surgery, which is generated or received.
- the user operates the control unit 30 while viewing the display 32 to perform the robot surgery by controlling the movement of the surgical robot 34.
- the server 100 generates information necessary for robot surgery using the medical image data of the object photographed beforehand from the medical imaging equipment 10 and provides the generated information to the control unit 30.
- the control unit 30 provides the information received from the server 100 to the user by displaying the information on the display 32 or controls the surgical robot 34 using the information received from the server 100.
- the means that can be used in the medical imaging equipment 10 is not limited, and various other medical imaging acquiring means such as CT, X-Ray, PET, MRI and the like may be used.
- the surgical image obtained in the photographing device 36 is transmitted to the control section 30.
- control unit 30 may segment the surgical image obtained during the operation in real time.
- control unit 30 transmits a surgical image to the server 100 during or after surgery.
- the server 100 can divide and analyze the surgical image.
- the server 100 learns and stores at least one model for dividing and analyzing a surgical image. In addition, the server 100 learns and stores at least one model for generating an optimized surgical process.
- the server 100 trains at least one model using learning data; in one embodiment, the learning data includes surgical images and information labeled on the surgical images, but is not limited thereto.
- FIG. 2 is a flowchart illustrating a method of managing learning data according to an embodiment.
- the embodiment shown in FIG. 2 includes one or more steps performed in time series by the server 100 shown in FIG.
- step S110 the server 100 acquires new learning data.
- the new learning data includes labeled data that can be used for learning.
- the subject for labeling the training data is not limited.
- the labeling may be performed by a person or automatically by a computer.
- the storage space 200 may mean a memory included in the server 100 or an external storage space or a cloud storage space that can communicate with the server 100 in a wired or wireless manner.
- information on the existing learning data 210 and the learned model 220 may be stored in the same storage space or may be divided and stored in different storage spaces.
- the information about the learned model 220 may include, but is not limited to, information about at least one parameter for constructing the neural network of the learned model 220.
- the server 100 acquires new learning data 300.
- the new learning data 300 may be stored in the storage space 200 or may be stored in a separate database.
- the new learning data 300 may be stored in the same space as the existing learning data 210 or may be separately stored in another space.
- step S120 the server 100 performs a test on the new learning data acquired in step S110, using the learned model 220.
- the server 100 determines when to perform the test, and may perform a test if the amount of new learning data 300 obtained exceeds a predetermined reference value.
- the server 100 may perform the test when the entropy of the learning data or the new learning data 300 stored in the storage space 200 exceeds a predetermined reference value, but the present invention is not limited thereto.
- the server 100 enters one or more labeled learning data contained in the new learning data 300 into the learned model 220.
- the server 100 obtains the output of the learned model 220 and compares the obtained output with the information labeled on the learning data input to the learned model 220. For example, if the obtained output and the labeled information are determined to be similar by at least a predetermined reference value, the server 100 can determine that the test for the corresponding learning data has passed.
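As a concrete, purely illustrative reading of "similar by a predetermined reference value or more", one could compare the model's output vector with the one-hot labeled information using cosine similarity; both the similarity measure and the 0.8 reference below are assumptions, not the patent's criterion.

```python
import numpy as np

def passes_test(model_output, labeled_info, reference=0.8):
    # Hypothetical similarity check: cosine similarity between the model's
    # output and the labeled information, compared against a reference value.
    a = np.asarray(model_output, dtype=float)
    b = np.asarray(labeled_info, dtype=float)
    similarity = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return similarity >= reference

print(passes_test([0.9, 0.1], [1, 0]))  # a confident, correct output passes
print(passes_test([0.5, 0.5], [1, 0]))  # an uncertain output fails
```

Any other agreement measure (top-1 match, per-class accuracy over a batch) would fit the same slot in the method.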
- step S130 the server 100 extracts learning data for which the labeled information is acquired with an accuracy of at least a predetermined reference value, and deletes the extracted learning data from the new learning data.
- when new learning data 300 is acquired for the learned model, the server 100 re-trains the learned model 220 based on the acquired data, thereby raising the accuracy of the learned model 220 or broadening its applicable range.
- the server 100 selects training data according to a predetermined criterion, stores only the selected learning data, and uses it for training the model.
- Referring to FIG. 4, an example of selecting new learning data is shown.
- the server 100 may perform a test on the learned model 220 using the new learning data 300 as a test data set.
- if the learned model 220 receives specific learning data and can obtain the information labeled on that learning data with an accuracy equal to or higher than a predetermined reference value, the server 100 determines that the model has already learned that learning data, and the corresponding learning data may not be used for further training.
- the server 100 extracts, from the new learning data 300, the learning data 310 for which the labeled information is obtained with an accuracy equal to or higher than the predetermined reference value, and deletes the extracted learning data 310 from the new learning data 300.
- the server 100 stores the remaining learning data 320 after deleting the extracted learning data 310 in the storage space 200 and utilizes the data for learning.
- the predetermined reference value can be adjusted such that the amount of new learning data 320 does not exceed a predetermined range.
- the predetermined reference value may be adjusted so that the amount of new learning data 320 does not exceed 50% of the acquired new learning data 300, but is not limited thereto.
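One simple way to realize this adjustment is to place the reference value at the accuracy quantile that caps the retained fraction. The quantile scheme below is a sketch under stated assumptions; only the 50% cap comes from the example above.

```python
import numpy as np

def adjust_first_reference(accuracies, max_keep_fraction=0.5):
    # Samples with accuracy below the reference are retained for re-training,
    # so placing the reference at the max_keep_fraction quantile caps the
    # retained amount (assuming distinct accuracy values).
    ordered = np.sort(np.asarray(accuracies, dtype=float))
    cut = int(len(ordered) * max_keep_fraction)
    return ordered[cut]

accuracies = [0.10, 0.20, 0.30, 0.40, 0.90, 0.95, 0.97, 0.99]
reference = adjust_first_reference(accuracies)        # -> 0.90
retained = sum(a < reference for a in accuracies)     # exactly half retained
```

With ties in the accuracy values, the retained fraction can fall below the cap, which still satisfies the "does not exceed" constraint.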
- step S140 the server 100 re-learns the learned model 220 using the new learning data 320 from which the extracted learning data 310 has been deleted.
- the server 100 re-learns the learned model 220 using the new learning data 320 and obtains the re-learned model 400 as a result.
- the server 100 re-learns the learned model 220 using the new learning data 320 so that the learned model 220 does not forget the learning results based on the existing learning data 210 can do.
- in one embodiment, the learned model 220 includes a deep neural network (DNN), and re-training of the learned model 220 using the new learning data 320 can be performed by fine tuning. However, fine tuning tends to forget previously learned information in the process.
- therefore, in the disclosed embodiment, a less-forgetful learning method, in which previously learned information is forgotten less, can be applied in the process of expanding the domain of the neural network.
- accordingly, the server 100 can retain previously learned information even when the existing learning data 210 cannot be accessed, thereby preventing the waste of storage space for storing the existing learning data 210 and preventing the waste of learning time that would be spent re-training on the existing learning data 210.
- the server 100 may utilize the existing learning data 210, but may select part of the existing learning data 210 according to a predetermined criterion and utilize only the selected learning data. A method of selecting and utilizing existing learning data 210 will be described later.
- the lower layers of the neural network are regarded as a feature extractor, and the highest layer is regarded as having the characteristics of a linear classifier. That is, the weights of the softmax classifier represent a decision boundary for classifying the features.
- in addition, the features extracted from the data of the existing domain using the new neural network should be located close to the features extracted from the same data using the existing neural network.
- the server 100 performs re-training of the learned model 220 based on the above two characteristics.
- the first characteristic, that the decision boundary should not change, can be enforced by setting the learning rate of the classifier layer to zero.
- the second characteristic cannot be applied directly when it is impossible to access the data of the existing domain.
- in this case, the second characteristic can be approximately satisfied by using the learning data of the new domain instead of the data of the existing domain.
- specifically, the server 100 first reuses the weights of the existing neural network, as in the conventional fine-tuning method. That is, the weights of the existing neural network are used as the initial weights of the new neural network.
- next, the server 100 freezes the weights of the softmax classifier, thereby freezing the decision boundary of the classifier.
- the server 100 may then train the deep network in a direction that minimizes the total loss function.
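The recipe above can be illustrated with a minimal numerical sketch: a toy two-layer network (tanh feature extractor plus softmax classifier) trained on cross-entropy, with the classifier weights frozen. Every architectural detail here is an illustrative assumption, not the patent's actual network or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reuse the old weights as the initial weights of the new network (fine-tuning),
# then freeze the softmax classifier so the decision boundary cannot move.
W_feat = rng.normal(size=(3, 4)) * 0.1   # feature extractor: trainable
W_out = rng.normal(size=(4, 2)) * 0.1    # softmax classifier: frozen

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_step(X, y_onehot, lr=0.1):
    global W_feat
    H = np.tanh(X @ W_feat)              # extracted features
    P = softmax(H @ W_out)               # class probabilities
    dZ = (P - y_onehot) / len(X)         # cross-entropy gradient at the logits
    dH = (dZ @ W_out.T) * (1.0 - H**2)   # backprop through tanh
    W_feat -= lr * (X.T @ dH)            # only the feature extractor is updated
    # W_out is never touched: its learning rate is effectively zero.

W_out_before = W_out.copy()
X = rng.normal(size=(8, 3))
y = np.eye(2)[rng.integers(0, 2, size=8)]
for _ in range(20):
    train_step(X, y)
# After re-training, the classifier weights (decision boundary) are unchanged.
```

In a deep learning framework the same effect is usually obtained by excluding the classifier's parameters from the optimizer rather than by hand-written gradients.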
- a method of achieving the above-mentioned second characteristic by accessing part of the learning data of the existing domain can also be considered. For example, by making some of the learning data of the existing domain selectively accessible, the learning efficiency can be increased.
- FIG. 6 is a flowchart illustrating a method of managing and utilizing existing training data according to an embodiment.
- step S150 the server 100 acquires existing learning data 210 used for learning of the learned model 220.
- step S160 the server 100 performs a test on the existing training data 210 using the re-trained model 400.
- the server 100 inputs one or more labeled training data contained in the existing training data 210 into the re-learned model 400.
- the server 100 obtains the output of the re-trained model 400 and compares the obtained output with the information labeled on the learning data input to the re-trained model 400. For example, if the obtained output and the labeled information are determined to be similar by at least a predetermined reference value, the server 100 can determine that the test for the corresponding learning data has passed.
- the server 100 may extract learning data for which the labeled information is acquired with an accuracy of a predetermined reference value or more, and delete the extracted learning data from the existing learning data 210 as a result of the test.
- Referring to FIG. 7, an example of a method of managing existing training data 210 is shown.
- the server 100 inputs one or more labeled learning data included in the existing learning data 210 into the re-trained model 400, extracts the learning data 212 for which the labeled information is obtained with an accuracy equal to or higher than a predetermined reference value, and deletes the extracted learning data 212 from the existing learning data 210. That is, the learning data 212 extracted from the existing learning data 210 is deleted, and the remaining existing learning data 214 is stored in the storage space 200.
- if the existing learning data is still handled well even after training with the new learning data, that existing learning data is deleted, so that the learning speed can be increased and storage space can be saved.
- the predetermined reference value may be adjusted such that the amount of existing training data does not exceed a predetermined range. Likewise, the predetermined reference value can be adjusted such that the amount of new learning data and existing learning data does not exceed a predetermined range.
- Referring to FIG. 8, a storage space storing new and existing learning data is shown.
- the server 100 stores, in the storage space 200, the existing learning data 214 from which some data has been deleted and the new learning data 320 from which some data has been deleted.
- the remaining existing learning data 214 and the new learning data 320 can be stored together in the storage space 200, collectively serving as the existing learning data 210 for the next round of learning.
- the predetermined reference value may be set such that the combined amount of the existing learning data 214 and the new learning data 320 does not exceed the amount of the original existing learning data 210.
- thus, the server 100 performs re-training whenever new learning data is added, while keeping the total amount of learning data constant.
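The bookkeeping of FIG. 8 can be illustrated with a toy pruning pass over both pools. The accuracy values and reference thresholds below are arbitrary assumptions chosen so the arithmetic is easy to follow.

```python
import numpy as np

def prune(data, accuracies, reference):
    # Keep only samples the model does not yet handle with accuracy >= reference.
    keep = np.asarray(accuracies) < reference
    return data[keep]

existing = np.arange(10)                 # existing learning data 210
new = np.arange(6)                       # new learning data 300
existing_kept = prune(existing, np.linspace(0, 1, 10), 0.5)   # data 214
new_kept = prune(new, np.linspace(0, 1, 6), 0.84)             # data 320
combined = np.concatenate([existing_kept, new_kept])
# The combined store never grows beyond the original existing pool.
assert len(combined) <= len(existing)
```

Raising or lowering the two reference values shifts how much each pool contributes, which is exactly the adjustment the second reference value performs in the method.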
- FIG. 9 is a block diagram of an apparatus according to an embodiment.
- the processor 102 may include one or more cores (not shown), a graphics processing unit (not shown), and/or a connection path (e.g., a bus) for transmitting and receiving signals to and from other components.
- the processor 102 in accordance with one embodiment performs one or more instructions stored in the memory 104 to perform the training data management method described with reference to Figures 1-8.
- the processor 102 may execute one or more instructions stored in the memory to acquire new learning data, perform a test on the acquired new learning data using the learned model, extract, as a result of the test, first learning data for which the labeled information is obtained with an accuracy equal to or higher than a predetermined first reference value, delete the extracted first learning data from the new learning data, and re-train the learned model using the new learning data from which the extracted learning data has been deleted.
- the processor 102 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) for temporarily and/or permanently storing signals (or data) processed in the processor 102.
- the processor 102 may be implemented as a system-on-chip (SoC) including at least one of a graphics processing unit, RAM, and ROM.
- the memory 104 may store programs (one or more instructions) for processing and control of the processor 102. Programs stored in the memory 104 may be divided into a plurality of modules according to functions.
- the learning data management method can be implemented as a program (or an application) to be executed in combination with a computer, which is hardware, and can be stored in a medium.
- the above-described program may include code written in a computer language such as C, C++, JAVA, or machine language that the processor (CPU) of the computer can read through the device interface of the computer, in order for the computer to read the program and execute the methods implemented as the program.
- the code may include functional code related to the functions that define the operations necessary for executing the methods, and may include execution-procedure-related control code necessary for the processor of the computer to execute those functions in a predetermined procedure.
- the code may further include memory-reference-related code indicating at which location (address) of the internal or external memory of the computer the additional information or media needed for the processor of the computer to execute the functions should be referenced.
- in addition, when the processor of the computer needs to communicate with any other remote computer or server to execute the functions, the code may further include communication-related code indicating how to communicate with the remote computer or server using the communication module of the computer, and what information or media should be transmitted or received during communication.
- the storage medium is not a medium that stores data for a short time, such as a register, cache, or memory, but a medium that stores data semi-permanently and is readable by a device.
- examples of the medium to be stored include ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage, and the like, but are not limited thereto.
- the program may be stored in various recording media on various servers to which the computer can access, or on various recording media on the user's computer.
- the medium may be distributed to a network-connected computer system so that computer-readable codes may be stored in a distributed manner.
- the steps of a method or algorithm described in connection with the embodiments of the present invention may be embodied directly in hardware, in software modules executed in hardware, or in a combination of both.
- the software module may reside in random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium known in the art to which the invention pertains.
Abstract
Description
Claims (10)
- A training data management method comprising: acquiring, by a computer, new training data; performing a test on the acquired new training data using a trained model; extracting first training data for which, as a result of the test, the labeled information is obtained with an accuracy at or above a predetermined first reference value; deleting the extracted first training data from the new training data; and retraining the trained model using the new training data from which the extracted training data has been deleted.
- The method of claim 1, wherein the predetermined first reference value is adjusted so that the amount of the new training data does not exceed a preset range.
- The method of claim 1, further comprising: acquiring existing training data previously used to train the trained model; performing a test on the existing training data using the retrained model; extracting second training data for which, as a result of the test, the labeled information is obtained with an accuracy at or above a predetermined second reference value; and deleting the extracted second training data from the existing training data.
- The method of claim 3, wherein the predetermined second reference value is adjusted so that the amounts of the new training data and the existing training data do not exceed a preset range.
- The method of claim 1, wherein performing the test comprises determining when to perform the test, the test being performed when the amount of the new training data exceeds a predetermined reference value.
- The method of claim 1, wherein performing the test comprises: inputting labeled third training data, included in the new training data, into the trained model; obtaining an output of the trained model; and comparing the obtained output with the information labeled on the third training data.
- The method of claim 1, wherein retraining the trained model comprises retraining the trained model so that the decision boundary of the retrained model is identical to that of the trained model.
- The method of claim 7, further comprising acquiring existing training data previously used to train the trained model, wherein retraining the trained model comprises retraining the trained model so that features extracted from the existing training data using the retrained model are located close to features extracted from the existing training data using the trained model.
- A training data management apparatus comprising: a memory storing one or more instructions; and a processor executing the one or more instructions stored in the memory, wherein the processor, by executing the one or more instructions, performs: acquiring new training data; performing a test on the acquired new training data using a trained model; extracting first training data for which, as a result of the test, the labeled information is obtained with an accuracy at or above a predetermined first reference value; deleting the extracted first training data from the new training data; and retraining the trained model using the new training data from which the extracted training data has been deleted.
- A computer program stored on a computer-readable recording medium, coupled with a computer that is hardware, to perform the method of claim 1.
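The flow of claim 1 (test new data with the trained model, extract and delete samples whose labels the model already reproduces at or above the first reference value, then retrain on what remains) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the function names, the toy model, and the threshold value are invented for the example only.

```python
def filter_new_training_data(model, new_data, threshold):
    """Keep only samples the trained model cannot already predict
    with accuracy >= threshold (claim 1: the rest are extracted and
    deleted before retraining)."""
    kept = []
    for features, label in new_data:
        predicted_label, confidence = model(features)
        # Samples the model already labels correctly with high confidence
        # add little new information, so they are removed (the "first
        # training data" of claim 1).
        if predicted_label == label and confidence >= threshold:
            continue
        kept.append((features, label))
    return kept

# Toy stand-in for a trained model: predicts the parity of an integer,
# and is confident only for small inputs (purely illustrative).
def toy_model(x):
    return x % 2, 0.9 if x < 10 else 0.5

new_data = [(2, 0), (3, 1), (15, 1), (20, 1)]
remaining = filter_new_training_data(toy_model, new_data, threshold=0.8)
# remaining holds only the low-confidence and misclassified samples,
# which would then be used to retrain the model.
```

In this sketch, (2, 0) and (3, 1) are dropped because the toy model already predicts them correctly with confidence above the threshold, while (15, 1) (low confidence) and (20, 1) (misclassified) survive for retraining.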
Applications Claiming Priority (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2017-0182898 | 2017-12-28 | ||
KR20170182898 | 2017-12-28 | ||
KR20170182899 | 2017-12-28 | ||
KR10-2017-0182899 | 2017-12-28 | ||
KR10-2017-0182900 | 2017-12-28 | ||
KR20170182900 | 2017-12-28 | ||
KR1020180013580 | 2018-02-02 | ||
KR10-2018-0013580 | 2018-02-02 | ||
KR10-2018-0026575 | 2018-03-06 | ||
KR1020180026575A KR101864412B1 (ko) | 2017-12-28 | 2018-03-06 | Training data management method, apparatus, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019132170A1 (ko) | 2019-07-04 |
Family
ID=62628386
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/010335 WO2019132170A1 (ko) | 2017-12-28 | 2018-09-05 | Training data management method, apparatus, and program |
PCT/KR2018/010333 WO2019132168A1 (ko) | 2017-12-28 | 2018-09-05 | Surgical image data learning system |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/010333 WO2019132168A1 (ko) | 2017-12-28 | 2018-09-05 | Surgical image data learning system |
Country Status (2)
Country | Link |
---|---|
KR (3) | KR101864412B1 (ko) |
WO (2) | WO2019132170A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102515081B1 (ko) * | 2022-03-22 | 2023-03-29 | Hanwha Systems Co., Ltd. | Object recognition software training method and training apparatus |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11769594B2 (en) | 2018-10-11 | 2023-09-26 | Jlk Inc. | Deep learning model learning device and method for cancer region |
KR102243644B1 (ko) * | 2018-12-07 | 2021-04-23 | Seoul National University R&DB Foundation | Apparatus and method for generating a medical image segmentation deep learning model, and medical image segmentation deep learning model generated thereby |
KR102237009B1 (ko) * | 2018-12-11 | 2021-04-08 | SynergyAI Co., Ltd. | Method for determining the progression of polycystic kidney or polycystic liver, and medical electronic device for performing the same |
KR102189761B1 (ko) * | 2018-12-21 | 2020-12-11 | LG CNS Co., Ltd. | Deep learning training method and server |
KR102186632B1 (ko) * | 2019-01-07 | 2020-12-02 | Daegu Gyeongbuk Institute of Science and Technology | Training apparatus for training an analysis model of medical images and training method thereof |
WO2020159276A1 (ko) * | 2019-02-01 | 2020-08-06 | Iverty Co., Ltd. | Surgical analysis apparatus, and surgical image analysis and recognition system, method, and program |
US10791301B1 (en) | 2019-06-13 | 2020-09-29 | Verb Surgical Inc. | Method and system for synchronizing procedure videos for comparative learning |
CN113924594A (zh) * | 2019-08-19 | 2022-01-11 | LG Electronics Inc. | AI-based new learning model generation system for visual inspection on a product production line |
KR20210026623A (ko) * | 2019-08-30 | 2021-03-10 | Samsung Electronics Co., Ltd. | System and method for training an artificial intelligence model |
KR102542037B1 (ko) * | 2019-09-26 | 2023-06-12 | Lunit Inc. | Training method for specializing an artificial intelligence model to a user institution, and apparatus for performing the same |
KR102119056B1 (ko) * | 2019-10-08 | 2020-06-05 | JLK Inc. | Method and apparatus for medical image learning based on generative adversarial networks |
KR102168558B1 (ko) * | 2019-10-24 | 2020-10-21 | Seoul National University R&DB Foundation | Method for selecting training data for active learning, apparatus for selecting training data for active learning, and image analysis method using active learning |
KR102334923B1 (ko) * | 2019-11-06 | 2021-12-03 | AIZEN Global Co., Ltd. | Loan expansion hypothesis testing system using artificial intelligence and method using the same |
KR102328154B1 (ko) * | 2019-11-11 | 2021-11-18 | Tesser Inc. | Method and system for providing a machine-learning-based medical data collection and analysis service |
WO2021107948A1 (en) * | 2019-11-27 | 2021-06-03 | Google Llc | Personalized data model utilizing closed data |
KR102272573B1 (ko) * | 2019-11-28 | 2021-07-05 | Korea Electronics Technology Institute | Load monitoring method based on unsupervised learning of energy usage data |
KR102356263B1 (ko) * | 2020-02-12 | 2022-01-28 | KEPCO KDN Co., Ltd. | Power facility management system and method |
KR102296274B1 (ko) * | 2020-10-26 | 2021-09-01 | Bogonet Inc. | Method for providing a deep learning object recognition service based on user training |
KR102273494B1 (ko) * | 2020-11-05 | 2021-07-06 | Kavilab Co., Ltd. | Automatic design generation system for fracture surgery medical devices |
KR102287762B1 (ko) * | 2020-11-05 | 2021-08-10 | Kavilab Co., Ltd. | Ensemble-learning-based fracture surgery simulator system and driving method thereof |
KR102467047B1 (ko) * | 2020-11-18 | 2022-11-15 | Hutom Co., Ltd. | Annotation evaluation method and apparatus |
KR20220067732A (ko) * | 2020-11-18 | 2022-05-25 | Korea Electronics Technology Institute | Retrainable mobile deep learning hardware device |
KR102336500B1 (ko) * | 2021-03-02 | 2021-12-07 | Agency for Defense Development | Method and apparatus for generating pseudo-label data, and method and apparatus for training a deep neural network using pseudo-label data |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100918361B1 (ko) * | 2008-02-26 | 2009-09-22 | NHN Corporation | Accelerated search modeling system and method |
US20150254555A1 (en) * | 2014-03-04 | 2015-09-10 | SignalSense, Inc. | Classifying data with deep learning neural records incrementally refined through expert input |
JP2017004509A (ja) * | 2015-06-04 | 2017-01-05 | The Boeing Company | Advanced analytics infrastructure for machine learning |
JP6182242B1 (ja) * | 2016-06-13 | 2017-08-16 | Mitsubishi Electric Information Systems Corporation | Machine learning method, computer, and program for a data labeling model |
KR20170106338A (ko) * | 2015-01-22 | 2017-09-20 | Qualcomm Incorporated | Model compression and fine-tuning |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3329202B2 (ja) * | 1996-08-07 | 2002-09-30 | KDDI Corporation | Neural network learning method |
WO2002059828A2 (en) * | 2001-01-23 | 2002-08-01 | Biowulf Technologies, Llc | Computer-aided image analysis |
JP4766560B2 (ja) * | 2006-08-11 | 2011-09-07 | KDDI Corporation | Video topic segmentation apparatus |
JP2009075737A (ja) * | 2007-09-19 | 2009-04-09 | Nec Corp | Semi-supervised learning method, semi-supervised learning apparatus, and semi-supervised learning program |
JP2009237923A (ja) * | 2008-03-27 | 2009-10-15 | Nec Corp | Learning method and system |
JP2010092266A (ja) * | 2008-10-08 | 2010-04-22 | Nec Corp | Learning apparatus, learning method, and program |
KR101049507B1 (ko) * | 2009-02-27 | 2011-07-15 | Korea Advanced Institute of Science and Technology | Image-guided surgery system and control method thereof |
KR101302595B1 (ko) * | 2012-07-03 | 2013-08-30 | Korea Institute of Science and Technology | System and method for estimating the phase of a surgical procedure |
JP5881048B2 (ja) * | 2012-09-18 | 2016-03-09 | Hitachi, Ltd. | Information processing system and information processing method |
WO2015143456A1 (en) * | 2014-03-21 | 2015-09-24 | Biolase, Inc. | Dental laser interface system and method |
KR102239714B1 (ko) * | 2014-07-24 | 2021-04-13 | Samsung Electronics Co., Ltd. | Neural network training method and apparatus, and data processing apparatus |
KR102601848B1 (ko) * | 2015-11-25 | 2023-11-13 | Samsung Electronics Co., Ltd. | Apparatus and method for building a data recognition model, and data recognition apparatus |
-
2018
- 2018-03-06 KR KR1020180026575A patent/KR101864412B1/ko active IP Right Grant
- 2018-03-06 KR KR1020180026572A patent/KR101864380B1/ko active IP Right Grant
- 2018-05-29 KR KR1020180061323A patent/KR20190088376A/ko not_active Application Discontinuation
- 2018-09-05 WO PCT/KR2018/010335 patent/WO2019132170A1/ko active Application Filing
- 2018-09-05 WO PCT/KR2018/010333 patent/WO2019132168A1/ko active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2019132168A1 (ko) | 2019-07-04 |
KR101864380B1 (ko) | 2018-06-04 |
KR20190088376A (ko) | 2019-07-26 |
KR101864412B1 (ko) | 2018-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019132170A1 (ko) | Training data management method, apparatus, and program | |
WO2019132169A1 (ko) | Surgical video playback control method, apparatus, and program | |
KR102014377B1 (ko) | Learning-based surgical action recognition method and apparatus | |
WO2016171341A1 (ko) | Cloud-based pathology analysis system and method | |
WO2017051945A1 (ko) | Method and apparatus for providing medical information services based on a disease model | |
WO2017022882A1 (ko) | Apparatus for pathological diagnosis classification of medical images and pathological diagnosis system using the same | |
WO2019054638A1 (ko) | Image analysis method, apparatus, and computer program | |
WO2021045367A1 (ko) | Method and computer program for determining a counselee's psychological state through their drawing process | |
WO2020196985A1 (ko) | Apparatus and method for video action recognition and action segment detection | |
WO2019235828A1 (ko) | Two-face disease diagnosis system and method | |
WO2010041836A2 (en) | Method of detecting skin-colored area using variable skin color model | |
US20230102479A1 (en) | Anonymization device, monitoring apparatus, method, computer program, and storage medium | |
WO2020122606A1 (ko) | Method and apparatus for measuring organ volume using an artificial neural network | |
WO2019132165A1 (ko) | Method and program for providing feedback on surgical outcomes | |
WO2020067632A1 (ko) | Method, apparatus, and program for sampling training-target frame images from video for artificial intelligence image learning, and image learning method thereof | |
WO2021230534A1 (ko) | Apparatus for predicting orbital and periorbital lesions and prediction method thereof | |
WO2021153858A1 (ko) | Reading assistance apparatus using atypical skin disease image data | |
WO2019164273A1 (ko) | Method and apparatus for predicting surgery duration based on surgical video | |
WO2021206518A1 (ko) | Method and system for post-operative analysis of the surgical process | |
WO2016085236A1 (ko) | Method and system for automatic identification of thyroid cancer | |
WO2022181919A1 (ko) | Apparatus and method for providing a virtual-reality-based surgical environment | |
WO2017010612A1 (ko) | System and method for pathological diagnosis prediction based on medical image analysis | |
WO2022231200A1 (ko) | Training method for training an artificial neural network to identify breast cancer lesion regions, and computing system performing the same | |
WO2022158843A1 (ko) | Tissue specimen image refinement method and computing system performing the same | |
WO2023033270A1 (ko) | Deep-learning-based analysis result prediction method and apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18893715 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18893715 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 260121) |
|