WO2022231392A1 - Method and device for implementing an automatically evolving platform through automatic machine learning - Google Patents
Method and device for implementing an automatically evolving platform through automatic machine learning
- Publication number: WO2022231392A1
- Application number: PCT/KR2022/006212
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- evaluation
- expert
- artificial intelligence
- inference
- result
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
Definitions
- the present invention relates to learning of an artificial intelligence (AI) model, and more particularly, to a method and apparatus for implementing an automatically evolving platform through automatic machine learning.
- an artificial neural network consists of an input layer, a hidden layer, and an output layer.
- Each layer is composed of neurons, and neurons in each layer are connected to the outputs of neurons in the previous layer.
- Each neuron adds a bias to the inner product of the output values of the neurons in the previous layer and the corresponding connection weights, applies the result to an activation function, which is generally non-linear, and passes the output value to the neurons of the next layer.
- Such an artificial neural network may be formed by learning (eg, machine learning, deep learning, etc.).
- the performance of the artificial neural network may vary depending on the selection of learning data for learning, the number of times of learning, and the like. Accordingly, even now, various learning techniques for improving the performance of artificial neural networks are being studied.
- An object of the present invention is to provide a method and apparatus for effectively performing learning on an artificial intelligence model.
- the present invention is to provide a method and apparatus for performing learning on an artificial intelligence model based on expert evaluation.
- the present invention is to provide a method and apparatus for providing an automatically evolving platform through automatic machine learning.
- An object of the present invention is to provide a method and apparatus for providing performance enhancement of an automated artificial intelligence model under various environments such as a cloud environment and a local environment.
- An object of the present invention is to provide a method and apparatus for automatically generating data for artificial intelligence model re-learning by comparing a result inferred by an artificial intelligence model with a result evaluated by an expert for specific data.
- An object of the present invention is to provide a method and apparatus for automatically improving the performance of an artificial intelligence model by evaluating the inference performance of a newly created artificial intelligence model through re-learning.
- The present invention provides a method and apparatus for automatically improving the performance of an artificial intelligence model under various environments, such as a cloud environment and a local environment, by automatically generating data for re-learning that reflects the expert's evaluation results, performing re-learning, and evaluating the inference performance of the newly created artificial intelligence model.
- A method of operating a server providing an artificial intelligence platform according to an embodiment of the present invention may include receiving, from a user device, an inference request message including input data for an artificial intelligence model, obtaining an inference result by performing inference based on the input data using the artificial intelligence model, transmitting an inference result message including the inference result to the user device, transmitting an evaluation request message including the inference result to an expert device, receiving, from the expert device, an evaluation result message including an expert evaluation of the inference result, and performing re-learning on the artificial intelligence model using learning data generated based on the expert evaluation.
- the expert evaluation may include at least one of whether the inference result using the artificial intelligence model is accepted, whether the inference result is corrected, and information corrected for the inference result.
- the method may further include generating learning data by comparing the inference result and the expert evaluation, and synthesizing the inference result and the expert evaluation according to the comparison result.
- When the inference result and the expert evaluation are different from each other, the learning data may include labeled learning data including the input data and the expert evaluation.
- When the artificial intelligence model is in a state of having been re-learned based on another expert evaluation, the inference result message may further include at least one of information indicating that the artificial intelligence model updated based on the expert evaluation is used, information on the experts participating in the evaluation, and information on the number of reflected expert evaluations.
- The method may further include checking whether the inference result and the expert evaluation have been uploaded to a data storage, and generating the training data when the inference result and the expert evaluation have been uploaded to the data storage.
- The method may further include verifying the performance of the retrained artificial intelligence model, and applying the retrained artificial intelligence model to the next inference operation when the performance verification criterion is passed, wherein the performance verification criterion may include that at least one of average precision, sensitivity, specificity, and recall satisfies at least one threshold condition.
- The re-learning of the artificial intelligence model may include checking the number of training data generated based on the expert evaluation, and performing the re-learning if the number of training data is greater than or equal to a threshold.
- A method of operating an apparatus using an artificial intelligence platform according to an embodiment of the present invention may include transmitting, to a server, an inference request message including input data for an artificial intelligence model, receiving from the server an inference result message including an inference result obtained by performing inference based on the input data using the artificial intelligence model, and displaying the inference result, wherein the inference result message may further include at least one of information indicating that an artificial intelligence model updated based on expert evaluation was used, information on the experts participating in the evaluation, and information on the number of reflected expert evaluations.
- An operating method of an apparatus for providing expert evaluation to an artificial intelligence platform according to an embodiment of the present invention may include receiving, from a server, an evaluation request message including an inference result obtained by performing inference using an artificial intelligence model, displaying an interface for inputting an evaluation of the inference result, confirming the evaluation data input through the interface, and transmitting an evaluation result message including an expert evaluation containing the evaluation data to the server, wherein the interface may include a result window, a classification window, and a probability window as output items, and an accept button, a decline button, a save button, and an annotation tool button as input items.
- A server providing an artificial intelligence platform according to an embodiment of the present invention may include a communication unit and a processor, wherein the processor receives, from a user device, an inference request message including input data for the artificial intelligence model, obtains an inference result by performing inference based on the input data using the artificial intelligence model, transmits an evaluation request message including the inference result to an expert device, receives, from the expert device, an evaluation result message including an expert evaluation of the inference result, and controls re-learning of the artificial intelligence model using learning data generated based on the expert evaluation. The processor may include the artificial intelligence platform for controlling acquisition of the inference result and the expert evaluation, a data storage for classifying and storing the inference result and the expert evaluation, and a supervisor for monitoring the acquisition of the inference result and the expert evaluation and controlling the re-learning.
- the expert evaluation may include at least one of whether the inference result using the artificial intelligence model is accepted, whether the inference result is corrected, and information corrected for the inference result.
- the monitor may generate learning data by comparing the inference result and the expert evaluation, and synthesizing the inference result and the expert evaluation according to the comparison result.
- When the inference result and the expert evaluation are different from each other, the learning data may include labeled learning data including the input data and the expert evaluation.
- When the artificial intelligence model is in a state of having been re-learned based on another expert evaluation, the inference result message may further include at least one of information indicating that the artificial intelligence model updated based on the expert evaluation is used, information on the experts participating in the evaluation, and information on the number of reflected expert evaluations.
- The supervisor may check whether the inference result and the expert evaluation have been uploaded to a data storage, and may generate the learning data when the inference result and the expert evaluation have been uploaded to the data storage.
- The supervisor may verify the performance of the retrained AI model, and the AI platform may apply the retrained AI model to the next inference operation when it passes the performance verification criterion.
- the performance verification criterion may include that at least one of average precision, sensitivity, specificity, and recall satisfies at least one threshold condition.
- the monitor may check the number of learning data based on expert evaluation, and if the number of learning data is greater than or equal to a threshold, the re-learning may be performed.
- An apparatus using an artificial intelligence platform according to an embodiment of the present invention may include a communication unit and a processor, wherein the processor transmits, to a server, an inference request message including input data for the artificial intelligence model, receives from the server an inference result message including an inference result obtained by performing inference based on the input data using the artificial intelligence model, and controls display of the inference result, and the inference result message may further include at least one of information indicating that an artificial intelligence model updated based on expert evaluation is used, information on the experts participating in the evaluation, and information on the number of reflected expert evaluations.
- An apparatus for providing expert evaluation to an artificial intelligence platform according to an embodiment of the present invention may include a communication unit and a processor, wherein the processor receives, from a server, an evaluation request message including an inference result obtained by performing inference using an artificial intelligence model, displays an interface for inputting an evaluation of the inference result, checks the evaluation data input through the interface, and controls transmission of an evaluation result message including an expert evaluation containing the evaluation data to the server, and the interface may include a result window, a classification window, and a probability window as output items, and an accept button, a decline button, a save button, and an annotation tool button as input items.
- FIG. 1 shows a network structure for operating an artificial intelligence model according to an embodiment of the present invention.
- FIG. 2 shows the structure of an artificial neural network applicable to a system according to an embodiment of the present invention.
- FIG. 3 shows an inference and learning system of an artificial intelligence model according to an embodiment of the present invention.
- FIG. 4A shows a functional structure of an artificial intelligence platform according to an embodiment of the present invention.
- FIG. 4B illustrates an example of input data and output data of a prediction model block in an artificial intelligence platform according to an embodiment of the present invention.
- FIG. 5A illustrates an example of an interface for expert evaluation according to an embodiment of the present invention.
- FIG. 5B illustrates an example in which an inference result is displayed on an interface for expert evaluation according to an embodiment of the present invention.
- FIG. 6 shows a functional structure of a data storage according to an embodiment of the present invention.
- FIG. 7 shows a functional structure of a monitor according to an embodiment of the present invention.
- FIG. 8 illustrates a procedure for updating an artificial intelligence model according to model verification according to an embodiment of the present invention.
- FIG. 9 shows an example of a procedure for reasoning and re-learning according to an embodiment of the present invention.
- FIG. 10 illustrates an example of an operating method of a server providing an inference result according to an embodiment of the present invention.
- FIG. 11 illustrates an example of an operation method of a server requesting evaluation of an inference result according to an embodiment of the present invention.
- the present invention proposes a technique for performing learning to secure the performance of an artificial intelligence model.
- The present invention relates to an integrated method for providing performance enhancement of an automated artificial intelligence model under various environments such as a cloud environment and a local environment. More specifically, the present invention automatically generates data for re-learning of an artificial intelligence model by comparing the result inferred by the artificial intelligence model with the result evaluated by an expert for specific data, and relates to a technique for automatically improving the performance of the artificial intelligence model by evaluating the inference performance of the artificial intelligence model newly created through the re-learning. That is, the present invention relates to a technology for implementing an automatically evolving platform through automatic machine learning.
- In the existing method, a lot of time and effort is required because the AI inference result and the expert evaluation result must be generated separately, and the resulting data must be checked and then converted.
- In addition, because the platform using the artificial intelligence model and the platform for performance enhancement are separated from each other, there are inefficiencies such as having to apply the performance-enhanced model manually. Therefore, it is necessary to develop an integrated method covering everything from the use of artificial intelligence models to the generation of data for artificial intelligence model re-learning and performance enhancement.
- the present invention automatically generates data for artificial intelligence model learning while collecting expert evaluation data in the process of using an artificial intelligence model, and performs performance enhancement and performance verification on the generated model.
- FIG. 1 shows a network structure for operating an artificial intelligence model according to an embodiment of the present invention.
- a network for operating an artificial intelligence model includes a user device 110a, a user device 110b, and a server 120 connected to a communication network.
- Although FIG. 1 illustrates two user devices 110a and 110b, there may be three or more user devices.
- the user device 110a and the user device 110b are used by a user who wants to perform inference by utilizing an artificial intelligence model.
- the user device 110a and the user device 110b may obtain input data, transmit the input data to the server 120 through the communication network, and receive data including the result of analysis from the server 120 .
- Each of the user devices 110a and 110b may include a communication unit for communication, a storage unit for storing data and programs, a display unit for displaying information, an input unit for user input, and a processor for control.
- the server 120 provides an artificial intelligence model platform according to embodiments of the present invention.
- the server 120 has an artificial intelligence model including an artificial neural network for analysis or inference, and may operate the artificial intelligence model.
- An example of an artificial neural network applicable to the present invention will be described below with reference to FIG. 2 .
- the server 120 may perform learning for the artificial intelligence model using the learning data.
- the server 120 may collect expert evaluations and perform expert evaluation-based learning.
- the server 120 may be a local server existing in a local network or a remote access server (eg, a cloud server) connected through an external network.
- the server 120 may include a communication unit for communication, a storage unit for storing data and programs, and a processor for control.
- the artificial neural network includes an input layer 210 , at least one hidden layer 220 , and an output layer 230 .
- Each of the layers 210 , 220 , and 230 includes a plurality of nodes, and each of the nodes is connected to the output of at least one node belonging to the previous layer.
- Each node adds a bias to the inner product of the output values of the nodes in the previous layer and the corresponding connection weights, applies a non-linear activation function to the result, and delivers the output value to at least one node in the next layer.
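- As a purely illustrative sketch (not part of the claimed subject matter), the per-node computation described above can be written as follows in Python; the layer sizes, the use of a ReLU activation, and the random weights are assumptions made only for this example.

```python
import numpy as np

def dense_layer(prev_outputs: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One layer as described above: inner product of the previous layer's
    outputs with the connection weights, plus a bias, passed through a
    non-linear activation function (ReLU here, chosen only for illustration)."""
    pre_activation = weights @ prev_outputs + bias
    return np.maximum(pre_activation, 0.0)  # non-linear activation

# Tiny illustration: 3 inputs -> 4 hidden nodes -> 2 outputs
rng = np.random.default_rng(0)
x = rng.normal(size=3)                                             # input layer values
h = dense_layer(x, rng.normal(size=(4, 3)), rng.normal(size=4))    # hidden layer
y = dense_layer(h, rng.normal(size=(2, 4)), rng.normal(size=2))    # output layer
```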
- Artificial neural network models used in various embodiments of the present invention may include at least one of a fully convolutional neural network, a convolutional neural network, a recurrent neural network, a restricted Boltzmann machine (RBM), and a deep belief neural network (DBN), but are not limited thereto.
- machine learning methods other than deep learning may be included.
- it may include a hybrid model that combines deep learning and machine learning. For example, a feature of an image is extracted by applying a deep learning-based model, and a machine learning-based model may be applied when classifying or recognizing an image based on the extracted features.
- the machine learning-based model may include, but is not limited to, a support vector machine (SVM), an AdaBoost, and the like.
- According to embodiments of the present invention, an integrated system may be provided that includes an artificial intelligence platform, an isolated data storage, data collection and generation for artificial intelligence model re-learning to which a watcher is applied, and automated performance enhancement and performance evaluation of artificial intelligence models.
- FIG. 3 shows an inference and learning system of an artificial intelligence model according to an embodiment of the present invention.
- FIG. 3 illustrates functional components of a system according to an embodiment of the present invention operable in the network structure illustrated in FIG. 1.
- the components illustrated in FIG. 3 may be implemented by one hardware device (eg, the server 120) or a plurality of hardware devices.
- the system includes an AI platform 310 , a data storage 320 , and a watcher 330 .
- the artificial intelligence platform 310 , the data storage 320 , and the monitor 330 may be implemented in the server 120 of FIG. 1 , or may be implemented in a separate hardware device.
- the artificial intelligence platform 310 is a platform for using an artificial intelligence model.
- The artificial intelligence platform 310 may be composed of various types of environments, such as a graphic user interface (GUI) or a command line interface (CLI), through which data can be allocated to an artificial intelligence model and the inference result of the artificial intelligence model can be checked as an output.
- the artificial intelligence platform 310 may collect the evaluation results of experts and build learning data for upgrading the performance of the artificial intelligence model.
- the data storage 320 is a storage device for storing original data, an inference result of an artificial intelligence model, and an expert evaluation result.
- The data storage 320 may be composed of separate, isolated sub-repositories in order to distinguish different users and projects.
- the data store 320 may also be referred to as an 'isolated data store'.
- The monitor 330 is an automated method, device, or program for allocating and executing the re-learning process for upgrading the performance of an artificial intelligence model.
- the monitor 330 may monitor that the original data, the inference result of the artificial intelligence model, and the expert's evaluation result are uploaded to the isolated data storage.
- the monitor 330 may generate data for learning the AI model by comparing the inference result of the AI model with the evaluation result of the expert at the time the upload is completed.
- the artificial intelligence platform 310 includes a prediction model block 412 and an expert check block 414 .
- the prediction model block 412 generates a prediction model result from input data.
- the predictive model block 412 provides an inference result of the artificial intelligence model on the input data.
- the inference result of the AI model may be drawn in the form of a bounding box or a heat map on the original data, or may be provided in various forms such as probability values and classification categories.
- An example of input data and output data provided to the prediction model block 412 is shown in FIG. 4B .
- FIG. 4B illustrates an example of input data and output data of a prediction model block in an artificial intelligence platform according to an embodiment of the present invention.
- FIG. 4B illustrates input/output data when the system according to an embodiment is applied to the medical field.
- a medical image 480 may be provided as input data.
- For example, the medical image 480 is a computed tomography (CT) image of the lung. In addition, magnetic resonance imaging (MRI) images or X-ray images may be used as input data.
- the prediction model block 412 analyzes the input medical image 480 , detects a lesion region, and outputs coordinate information, classification of a lesion, and a probability value of a lesion in the corresponding region.
- the output data may include a bounding box 492 indicating a lesion location in the medical image, a probability value 494 , and classification information 496 .
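- As a hedged illustration only, the output items above (the bounding box 492, the probability value 494, and the classification information 496) might be collected in a record such as the following Python sketch; the field names and example values are assumptions, not a format defined by the platform.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class PredictionResult:
    """Hypothetical container for the prediction model output described above."""
    bounding_box: Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) of the lesion region
    probability: float                       # confidence value for the detection
    classification: str                      # predicted lesion category

# Example corresponding to FIG. 4B: one lesion detected in a lung CT image
result = PredictionResult(bounding_box=(120, 88, 164, 131),
                          probability=0.87,
                          classification="nodule")
```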
- The expert check block 414 determines the accuracy of the prediction model result based on the expert evaluation. Specifically, the expert check block 414 additionally receives the expert's evaluation of the inference result of the artificial intelligence model, and transmits the original data, the artificial intelligence model inference result, and the expert evaluation to the data storage 320.
- the expert evaluation may be obtained from a device accessible to a user having an expert status (hereinafter referred to as an 'expert device').
- the expert device may be a device on which the artificial intelligence platform 310 operates (hereinafter referred to as a 'platform device'), or may be a separate device (eg, a smart phone, personal PC, etc.). When the expert device is a device separate from the platform device, a signaling procedure for obtaining an expert evaluation between the platform device and the expert device may be performed.
- FIG. 5A illustrates an example of an interface for expert evaluation according to an embodiment of the present invention.
- FIG. 5A illustrates an interface provided for an expert to evaluate the inference result generated by the artificial intelligence platform 310.
- the interface includes an AI result window 510 , a classification window 520 , and a probability window 530 as output items representing inference results.
- the interface includes an accept button 542 , a decline button 544 , a save button 546 , and an annotation tool button 550 as items for input by the expert.
- the inference result of the artificial intelligence model is displayed through the AI result window 510 .
- classification category information and probability values may be further displayed through the classification window 520 and the probability window 530 .
- the accept button 542 is an input item for feedback that the inference result of the artificial intelligence model is accurate. That is, the expert's evaluation may be obtained by clicking the accept button 542 when the inference result of the artificial intelligence model is correct.
- the reject button 544 is an input item for feedback that the inference result of the artificial intelligence model is inaccurate. That is, the expert may provide feedback that the inference result should be corrected by clicking the reject button 544 .
- The save button 546 is an input item that requests saving of the contents edited by the expert.
- The annotation tool button 550 is an input item for executing an annotation tool with which the inference result can be modified.
- When the annotation tool button 550 is clicked, the information displayed in the AI result window 510 and the classification window 520 is converted into an editable state. For example, correction of an image, addition to an image (eg, deletion, correction, or addition of a bounding box), change of classification, etc. may be supported. That is, when the inference result of the AI model is inaccurate, the expert may draw a new bounding box, select an accurate classification category, or provide feedback of rejection by using an annotation tool provided by the platform, or the like.
- FIG. 5B illustrates an example in which an inference result is displayed on an interface for expert evaluation according to an embodiment of the present invention.
- FIG. 5B exemplifies a case in which the inference result as shown in FIG. 4B is displayed.
- the inference result is displayed through the AI result window 510 , the classification window 520 , and the probability window 530 .
- The medical image 490 including the bounding box is displayed on the AI result window 510, the classification information 496 is displayed on the classification window 520, and the probability value 494 is displayed on the probability window 530.
- The expert may select the Accept button when agreeing with the output result as shown in FIG. 5B, and may select the Decline button when disagreeing.
- When the Decline button is selected, the expert can use the annotation tool to reposition the bounding box, and the like.
- When the save button is selected, the coordinate information of the bounding box changed according to the expert's opinion may be stored.
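- A minimal sketch of the evaluation record that might be stored when the expert presses the save button is shown below; the field names, and the assumption that a corrected bounding box and category are stored alongside the accept/decline decision, are illustrative only.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ExpertEvaluation:
    """Hypothetical record of the expert feedback entered through the interface of FIG. 5A."""
    accepted: bool                                              # True: Accept button, False: Decline button
    corrected: bool = False                                     # whether the annotation tool was used
    corrected_box: Optional[Tuple[int, int, int, int]] = None   # repositioned bounding box, if any
    corrected_class: Optional[str] = None                       # corrected classification category, if any

# The expert disagrees with the model, redraws the bounding box and keeps the category
evaluation = ExpertEvaluation(accepted=False,
                              corrected=True,
                              corrected_box=(118, 90, 170, 135),
                              corrected_class="nodule")
```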
- the data storage 320 includes a plurality of lower storage areas 610 - 1 to 610 -N.
- the first storage area 610 - 1 includes input data 612-1 , a prediction model result 614 - 1 , and an expert examination result 616 - 1 .
- The prediction model result 614-1 includes the inference result of the prediction model block 412 of the artificial intelligence platform 310, and the expert examination result 616-1 includes the expert feedback obtained through the interface shown in FIG. 5A.
- the lower storage areas 610 - 1 to 610 -N may be classified according to various criteria such as users, experts, and projects.
- the lower storage areas 610 - 1 to 610 -N may occupy separate storage spaces physically or logically isolated.
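- A minimal sketch of one possible layout for such an isolated data store is given below, assuming one sub-repository per project and per sample holding the three artifacts of FIG. 6; the directory structure and file names are assumptions for illustration.

```python
from pathlib import Path

def sub_storage_path(root: Path, project_id: str, sample_id: str) -> Path:
    """Each project gets its own isolated sub-repository; each sample holds the
    input data, the prediction model result, and the expert examination result."""
    return root / project_id / sample_id

def upload(root: Path, project_id: str, sample_id: str,
           input_data: bytes, prediction: str, expert_review: str) -> None:
    area = sub_storage_path(root, project_id, sample_id)
    area.mkdir(parents=True, exist_ok=True)
    (area / "input_data.bin").write_bytes(input_data)          # original input data
    (area / "prediction_result.json").write_text(prediction)   # prediction model result
    (area / "expert_review.json").write_text(expert_review)    # expert examination result
```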
- FIG. 7 shows a functional structure of a monitor according to an embodiment of the present invention.
- The monitor 330 monitors the data uploaded from the artificial intelligence platform 310 to the data storage 320 and detects the time at which the upload is completed. Specifically, when a data upload starts, the artificial intelligence platform 310 sends an upload start message to the queue 734 of the monitor 330. When a message is received in the queue 734, the monitor 330 periodically checks whether the corresponding data exists in the data storage 320. When it is confirmed that the data exists in the data storage 320, the monitor 330 may determine that the upload is complete.
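- The watcher behaviour described above can be sketched as follows, assuming a simple in-process queue and file-based storage; the message format, file names, and polling interval are assumptions, not the actual implementation.

```python
import queue
import time
from pathlib import Path

upload_queue: "queue.Queue[Path]" = queue.Queue()  # plays the role of queue 734

def notify_upload_started(storage_area: Path) -> None:
    """Called by the AI platform when it starts uploading to the data storage."""
    upload_queue.put(storage_area)

def wait_for_upload(expected_files=("input_data.bin", "prediction_result.json", "expert_review.json"),
                    poll_interval: float = 5.0, timeout: float = 600.0) -> Path:
    """Monitor side: periodically check whether the announced data exists in the
    data storage, and return the storage area once the upload is complete."""
    storage_area = upload_queue.get()              # upload start message received
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if all((storage_area / name).exists() for name in expected_files):
            return storage_area                    # upload is complete
        time.sleep(poll_interval)
    raise TimeoutError(f"upload to {storage_area} did not complete")
```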
- The monitor 330 compares the inference result 731 of the artificial intelligence model with the evaluation result 732 of the expert, and generates learning data 735 for training the artificial intelligence model based on the comparison result. That is, the monitor 330 monitors, using the queue 734, the data being uploaded from the artificial intelligence platform 310 to the data storage 320, and when the upload is completed, generates data for artificial intelligence model learning by comparing the inference result 731 of the artificial intelligence model with the evaluation result 732 of the expert. For example, when the bounding box area generated by the artificial intelligence model and the bounding box area verified by the expert are the same, the monitor 330 extracts the coordinate values of the corresponding area and uses them as data for training the artificial intelligence model. On the other hand, when the two regions are different, the monitor 330 uses the coordinates of the bounding box region newly defined by the expert.
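- A minimal sketch of the comparison rule in the paragraph above: when the expert-verified region matches the model's region, the model's coordinates are reused as the label; otherwise, the expert's newly defined region takes precedence. The helper name and box representation are assumptions.

```python
from typing import Optional, Tuple

Box = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max)

def make_training_label(model_box: Box, expert_box: Optional[Box]) -> Box:
    """Choose the labelled bounding box used for re-learning."""
    if expert_box is None or expert_box == model_box:
        return model_box   # regions agree: reuse the model's coordinates
    return expert_box      # regions differ: the expert's region takes precedence

# Model and expert disagree -> the expert's box becomes the training label
label = make_training_label(model_box=(120, 88, 164, 131),
                            expert_box=(118, 90, 170, 135))
```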
- the monitor 330 provides the training data 735 and the input data 733 to the predictive model retraining block 740 .
- the predictive model retraining block 740 is shown as being external to the monitor 330 .
- the predictive model retraining block 740 may be an independent device or may be included in another device (eg, a platform device). Alternatively, according to another embodiment of the present invention, the predictive model retraining block 740 may be included in the monitor 330 .
- the validation block 850 may perform model validation.
- The verification block 850 may perform model verification based on various verification methods, such as a unit test that verifies whether specific performance indicators, such as average precision, sensitivity, specificity, and recall, measured using the data for verification, are achieved.
- the model that has passed the verification is returned to the artificial intelligence platform 310 as a new model 860 and applied to the predictive model block 412 for further inference operation.
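- A hedged sketch of such a verification check is shown below; which indicators are checked and the threshold values are deployment-specific assumptions, not values prescribed by the present disclosure.

```python
def passes_verification(metrics: dict, thresholds: dict) -> bool:
    """Return True when every configured indicator (e.g. average precision,
    sensitivity, specificity, recall) reaches its threshold."""
    return all(metrics.get(name, 0.0) >= minimum for name, minimum in thresholds.items())

new_model_metrics = {"average_precision": 0.78, "sensitivity": 0.91,
                     "specificity": 0.88, "recall": 0.90}
criteria = {"average_precision": 0.75, "sensitivity": 0.90, "recall": 0.85}

# True -> return the retrained model to the AI platform as the new model 860
verified = passes_verification(new_model_metrics, criteria)
```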
- FIG. 9 shows an example of a procedure for reasoning and re-learning according to an embodiment of the present invention.
- FIG. 9 illustrates signal exchange between the user device 910, the server 920, and the expert device 930.
- The server 920 includes the entities illustrated in FIG. 3.
- In step S901, the user device 910 transmits an inference request message to the server 920.
- the inference request message includes input data to the AI model.
- the user device 910 may generate input data through a user input or reception from an external device, and transmit the input data to the server 920 .
- In step S903, the server 920 transmits an inference result message to the user device 910.
- the inference result message includes data representing the result inferred using the artificial intelligence model from the input data received in step S901. That is, the artificial intelligence platform of the server 920 may perform inference using the artificial intelligence model and provide an output result.
- In step S905, the server 920 transmits an evaluation request message to the expert device 930.
- the evaluation request message is a message requesting evaluation of the inference result derived by the server 920 and includes data representing the inference result.
- In step S907, the expert device 930 transmits an evaluation result message to the server 920.
- the evaluation result message includes data indicating the evaluation result of the expert on the inference result received in step S905.
- The evaluation result may include at least one of whether the inference result is accepted, whether it is corrected, and the corrected information.
- the server 920 performs re-learning.
- Specifically, the server 920 generates expert evaluation-based learning data from the evaluation result received from the expert device 930, the inference result generated by the server 920, and the input data received from the user device 910, and trains the artificial intelligence model using the generated learning data.
- the server 920 may generate learning data by comparing the inference result and the evaluation result, and synthesizing the inference result and the evaluation result according to the comparison result.
- the training data may be labeled learning data.
- In this way, the user may obtain an inference result, and the artificial intelligence model may be updated based on the expert evaluation.
- In FIG. 9, a process in which inference is performed after re-learning is not shown. After the re-learning, inference is performed using the artificial intelligence model updated through the re-learning, and the result of the inference performed using the updated artificial intelligence model can be provided.
- the server 920 may include information notifying that the updated artificial intelligence model is used based on the expert evaluation in the inference result message transmitted to the user device 910 . Accordingly, when displaying the inference result, the user device 910 may also display an indication indicating that it is an inference result using the artificial intelligence model updated based on the expert evaluation. Through this, the reliability of the inference result felt by the user of the user device 910 may be improved.
- In addition, the server 920 may further include, in the inference result message transmitted to the user device 910, information about the experts participating in the evaluation and the number of reflected expert evaluations. Accordingly, the user device 910 may acquire additional information (eg, information on the experts, the number of reflected evaluations, etc.) about the expert evaluation used for updating the artificial intelligence model. The user device 910 may further display the additional information about the expert evaluation according to the user's request. To this end, the user device 910 may display an interface item (eg, a button) for instructing display of the additional information together with the inference result, and may display the additional information in response to selection (eg, a click) of the corresponding interface item.
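- A hypothetical sketch of such an inference result message payload is shown below; none of the field names are defined by the present disclosure, and the values are placeholders.

```python
# Hypothetical payload of an inference result message that also reports how the
# artificial intelligence model was updated based on expert evaluation.
inference_result_message = {
    "inference_result": {
        "bounding_box": [120, 88, 164, 131],
        "classification": "nodule",
        "probability": 0.87,
    },
    "model_updated_from_expert_evaluation": True,   # an updated model was used
    "participating_experts": ["expert_A", "expert_B"],
    "reflected_expert_evaluations": 42,             # number of evaluations reflected
}
```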
- the server 920 may obtain an expert evaluation on the inference result through communication with the expert device 930 , and update the artificial intelligence model based on the expert evaluation.
- the user of the user device 910 may also be an expert.
- the user device 910 may be the expert device 930 .
- The transmission of the inference result message in step S903 and the transmission of the evaluation request message in step S905 may be replaced with a single operation. That is, according to another embodiment of the present invention, the server 920 that generates an inference result by performing inference may transmit, to the user device 910, a message that includes the inference result and at the same time requests evaluation of the inference result. Thereafter, the user device 910 may transmit an evaluation result message including data indicating the evaluation of the inference result to the server 920.
- FIG. 10 illustrates an example of an operating method of a server providing an inference result according to an embodiment of the present invention.
- FIG. 10 illustrates an operating method of the artificial intelligence platform 310. Since the artificial intelligence platform 310 may be implemented in a server (eg, the server 120 or the server 920), the operating subject of this procedure will be described below as the server.
- the server receives an inference request.
- the inference request is received from a user device (eg, the user device 110a or the user device 910 ) and includes input data.
- The server then determines whether the artificial intelligence model is expected to be updated within a threshold period. That is, the server predicts when the current AI model will be updated through re-learning, when the updated new AI model will be usable, and whether the new AI model will be usable within at least the threshold period. For example, the server may predict whether a new artificial intelligence model will be usable within the threshold period based on the number of collected expert evaluations, the number of learning data generated based on the expert evaluations, whether re-learning using the learning data generated based on the expert evaluations has started, and the elapsed time since the start of the re-learning using the learning data.
- For example, when re-learning using the learning data generated based on the expert evaluations has started, and the value (T_total_avg - T_lapse) obtained by subtracting the time elapsed since the current re-learning started (T_lapse) from the average time taken for re-learning so far (T_total_avg) is less than the threshold time, the server may predict that a new AI model will be usable within the threshold period.
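- The timing check above can be sketched as follows; the function name and the use of seconds are assumptions for illustration.

```python
def new_model_expected_soon(retraining_started: bool, avg_retraining_time_s: float,
                            elapsed_since_start_s: float, threshold_s: float) -> bool:
    """Predict whether a newly retrained model will be usable within the
    threshold period: remaining time = T_total_avg - T_lapse."""
    if not retraining_started:
        return False
    remaining = avg_retraining_time_s - elapsed_since_start_s
    return remaining < threshold_s

# Average retraining takes 30 minutes, 25 minutes have elapsed, threshold is 10 minutes
expected = new_model_expected_soon(True, 1800.0, 1500.0, 600.0)  # True
```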
- If the update is not expected within the threshold period, in step S1009 the server performs inference using the current AI model. That is, without waiting for the update of the AI model, the server performs inference using the current AI model. The server then proceeds to step S1013, described below.
- On the other hand, if the update is expected within the threshold period, the server transmits a notification about the update schedule. Updating the AI model can be expected to improve its inference performance. Therefore, in order to give the user an opportunity to receive a more accurate inference result, the server may send a notification message indicating that the artificial intelligence model is scheduled to be updated within the threshold time, instead of immediately performing the inference operation. Accordingly, the user device displays a notification about the update schedule. At this time, the notification does not simply convey the fact that an update is scheduled, but also includes an inquiry about whether to proceed with inference using the current artificial intelligence model without waiting for the update. Accordingly, the user may input to the user device an answer as to whether or not to perform inference with the current AI model without waiting for the update, and the user's response is transmitted to the server.
- In step S1007, the server checks whether it has been requested to perform inference with the current artificial intelligence model. That is, the server checks the user's response to the inquiry, included in the notification, as to whether to proceed with inference using the current artificial intelligence model. Since the user's response is transmitted from the user device to the server, the server can check the user's response through the message received from the user device.
- If inference with the current model is requested, in step S1009 the server performs inference using the current AI model. That is, without waiting for the update of the AI model, the server performs inference using the current AI model. The server then proceeds to step S1013, described below.
- In step S1011, the server performs inference after updating the AI model. That is, if it is requested not to perform inference with the current AI model, or if no response regarding inference with the current AI model is received, the server performs inference using the newly updated AI model after the AI model has been updated. In other words, the server waits until the AI model is updated, and then performs inference using the updated AI model.
- In step S1013, the server transmits the inference result.
- the server transmits to the user device the result of inference performed using the artificial intelligence model before the update or the artificial intelligence model after the update.
- the inference result sent to the user device may be displayed by the user device and provided to the user.
- The server may manage history information of the experts who have provided evaluations. History information is generated for each expert and includes the number of times expert evaluation has been provided, the response rate to expert evaluation requests, the acceptance rate among the expert evaluations provided by the expert, the rejection rate among the expert evaluations provided by the expert, and the correction rate among the expert evaluations provided by the expert. Based on the history information for each expert, statistical information such as a ranking of the experts who have provided the most evaluations, a response rate ranking, an acceptance rate ranking, a rejection rate ranking, and a correction rate ranking may be generated.
- the history information and statistical information generated as described above may be utilized when requesting evaluation of the inference result. That is, when requesting expert evaluation of the inference result, the server may determine which expert to request expert evaluation based on at least one of history information and statistical information. That is, the server may select a target to transmit the evaluation request message based on at least one of history information and statistical information.
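- A minimal sketch of a per-expert history record, and of deriving simple statistics from it, is given below; the field names and derived rates are assumptions that mirror the items listed above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ExpertHistory:
    """Hypothetical per-expert history items."""
    expert_id: str
    requests_received: int      # evaluation requests sent to this expert
    evaluations_provided: int   # evaluations actually provided
    accepted: int               # evaluations that accepted the inference result
    rejected: int               # evaluations that rejected the inference result
    corrected: int              # evaluations that corrected the inference result

    @property
    def response_rate(self) -> float:
        return self.evaluations_provided / max(self.requests_received, 1)

    @property
    def acceptance_rate(self) -> float:
        return self.accepted / max(self.evaluations_provided, 1)

# Statistical information such as rankings can be derived by sorting the histories
histories: List[ExpertHistory] = [ExpertHistory("expert_A", 50, 40, 30, 6, 4),
                                  ExpertHistory("expert_B", 30, 25, 10, 10, 5)]
most_active_first = sorted(histories, key=lambda h: h.evaluations_provided, reverse=True)
```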
- FIG. 11 illustrates an example of an operating method of a server requesting evaluation of an inference result according to an embodiment of the present invention.
- FIG. 11 illustrates an operating method of the monitor 330. Since the monitor 330 may be implemented in a server (eg, the server 120 or the server 920), the operating subject of this procedure will be described below as the server.
- the server obtains an inference result.
- the server may obtain an inference result from input data received from the user device by performing inference using the artificial intelligence model.
- the server checks the evaluation reflection history for the artificial intelligence model used.
- The evaluation reflection history may include whether the current AI model has been retrained based on expert evaluation, how many times it has been retrained, how many expert evaluations have been used for the retraining, and how long ago the most recent retraining and update took place.
- the server checks the evaluation performance history of the relevant experts.
- the relevant expert means experts belonging to the expert pool classified as qualified to verify the inference result of the AI model.
- The evaluation performance history includes the number of times expert evaluation has been provided, the response rate to expert evaluation requests, the percentage of acceptances among the expert evaluations provided by the expert, the percentage of rejections among the expert evaluations provided by the expert, and the percentage of corrections among the expert evaluations provided by the expert.
- the server selects an expert to request evaluation based on the history information.
- the history information includes an evaluation reflection history for the artificial intelligence model used identified in step S1103 and an evaluation performance history of experts confirmed in step S1105.
- the server may further consider statistical information on the history information in addition to the history information.
- a rule or criterion for selecting an expert may be defined according to various embodiments.
- In step S1109, the server sends an evaluation request message to the selected expert's device. That is, the server requests expert evaluation from the expert selected based on the evaluation reflection history of the AI model, the evaluation performance history of the experts, and the statistical information.
- an expert to request expert evaluation may be selected based on history information, evaluation information, and the like.
- a rule for selecting an expert may be defined according to various embodiments, and the rule may be defined differently according to a policy for learning.
- For example, the server may select an expert who has never provided an expert evaluation for re-learning of the corresponding AI model, and if there is no such expert, may select the expert who has provided expert evaluations the fewest times. If there are a plurality of experts with the minimum number of expert evaluations provided, the server may randomly select one expert or select the expert with the highest response rate.
- various rules combining the items of the above-described history information with various priorities may be applied.
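- One possible concrete form of such a rule, following the example above (experts who have never evaluated this model first, then the fewest evaluations, with ties broken by response rate), is sketched below; the dictionary keys are assumptions.

```python
import random
from typing import Dict, List

def select_expert(histories: List[Dict]) -> Dict:
    """Sketch of one possible selection rule. Each history dict is assumed to carry
    'evaluations_provided' (for this model) and 'response_rate'."""
    never_evaluated = [h for h in histories if h["evaluations_provided"] == 0]
    candidates = never_evaluated or histories
    fewest = min(h["evaluations_provided"] for h in candidates)
    tied = [h for h in candidates if h["evaluations_provided"] == fewest]
    if len(tied) > 1:
        best_rate = max(h["response_rate"] for h in tied)
        tied = [h for h in tied if h["response_rate"] == best_rate]
    return random.choice(tied)

experts = [{"id": "expert_A", "evaluations_provided": 3, "response_rate": 0.8},
           {"id": "expert_B", "evaluations_provided": 0, "response_rate": 0.5},
           {"id": "expert_C", "evaluations_provided": 0, "response_rate": 0.9}]
chosen = select_expert(experts)  # expert_C: never evaluated this model, highest response rate
```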
- Exemplary methods of the present invention are expressed as a series of actions for clarity of description, but this is not intended to limit the order in which the steps are performed, and each step may be performed simultaneously or in a different order if necessary.
- In addition, other steps may be included in addition to the illustrated steps, some steps may be excluded and the remaining steps included, or additional steps may be included while some steps are excluded.
- Various embodiments of the present invention may be implemented by hardware, firmware, software, or a combination thereof. In the case of implementation by hardware, the embodiments may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, and the like.
- The scope of the present invention includes software or machine-executable instructions (eg, an operating system, an application, firmware, a program, etc.) that cause operations according to the methods of the various embodiments to be executed on a device or computer, and non-transitory computer-readable media in which such software or instructions are stored and executable on a device or computer.
- the above-mentioned contents can be variously applied in the medical field using artificial intelligence.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention relates to a method and device for performing machine learning based on expert evaluation, and a method of operating a server providing an artificial intelligence platform may comprise the following steps: receiving, from a user device, an inference request message comprising input data for an artificial intelligence model; obtaining an inference result by performing inference on the basis of the input data using the artificial intelligence model; transmitting an inference result message comprising the inference result to the user device; transmitting an evaluation request message comprising the inference result to an expert device; receiving, from the expert device, an evaluation result message comprising an expert evaluation of the inference result; and re-training the artificial intelligence model using training data generated on the basis of the expert evaluation.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2021-0056262 | 2021-04-30 | ||
KR1020210056262A KR102326740B1 (ko) | 2021-04-30 | 2021-04-30 | 자동 기계학습을 통한 자동 진화형 플랫폼 구현 방법 및 장치 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022231392A1 true WO2022231392A1 (fr) | 2022-11-03 |
Family
ID=78702655
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2022/006212 WO2022231392A1 (fr) | 2021-04-30 | 2022-04-29 | Procédé et dispositif pour mettre en œuvre une plateforme à évolution automatique par apprentissage automatique de machine |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102326740B1 (fr) |
WO (1) | WO2022231392A1 (fr) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102326740B1 (ko) * | 2021-04-30 | 2021-11-17 | (주)제이엘케이 | 자동 기계학습을 통한 자동 진화형 플랫폼 구현 방법 및 장치 |
KR20230085001A (ko) * | 2021-12-06 | 2023-06-13 | 한국전자기술연구원 | 지능형 객체 탐지를 위한 최적화 전략 설정 기반 딥러닝 모델 생성 방법 및 시스템 |
WO2024005313A1 (fr) * | 2022-06-30 | 2024-01-04 | 삼성전자 주식회사 | Serveur de mise à jour de modèle d'apprentissage de dispositif électronique et procédé de fonctionnement de celui-ci |
KR102665956B1 (ko) * | 2023-06-22 | 2024-05-14 | 주식회사 페블러스 | 가상 데이터를 처리하기 위한 유저 인터페이스 제공 방법 및 그러한 방법이 구현된 컴퓨팅 장치 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101818074B1 (ko) * | 2017-07-20 | 2018-01-12 | (주)제이엘케이인스펙션 | 인공지능 기반 의료용 자동 진단 보조 방법 및 그 시스템 |
KR101929752B1 (ko) * | 2018-05-30 | 2018-12-17 | (주)제이엘케이인스펙션 | 인공지능 기반 의료기기의 임상적 유효성 평가 방법 및 시스템 |
CN110598549A (zh) * | 2019-08-07 | 2019-12-20 | 王满 | 一种基于心脏功能监控的卷积神经网络信息处理系统及训练方法 |
KR102153920B1 (ko) * | 2018-02-26 | 2020-09-09 | (주)헬스허브 | 정제된 인공지능 강화학습 데이터 생성을 통한 의료영상 판독 시스템 및 그 방법 |
KR20210013830A (ko) * | 2019-07-29 | 2021-02-08 | 주식회사 코어라인소프트 | 의료용 인공 신경망의 분석 결과를 평가하는 의료용 인공 신경망 기반 의료 영상 분석 장치 및 방법 |
KR102326740B1 (ko) * | 2021-04-30 | 2021-11-17 | (주)제이엘케이 | 자동 기계학습을 통한 자동 진화형 플랫폼 구현 방법 및 장치 |
-
2021
- 2021-04-30 KR KR1020210056262A patent/KR102326740B1/ko active IP Right Grant
-
2022
- 2022-04-29 WO PCT/KR2022/006212 patent/WO2022231392A1/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
KR102326740B1 (ko) | 2021-11-17 |
KR102326740B9 (ko) | 2022-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022231392A1 (fr) | Procédé et dispositif pour mettre en œuvre une plateforme à évolution automatique par apprentissage automatique de machine | |
WO2021060899A1 (fr) | Procédé d'apprentissage pour spécialiser un modèle d'intelligence artificielle dans une institution pour un déploiement et appareil pour l'apprentissage d'un modèle d'intelligence artificielle | |
WO2019132168A1 (fr) | Système d'apprentissage de données d'images chirurgicales | |
US20230222341A1 (en) | Targeted crowd sourcing for metadata management across data sets | |
WO2019117563A1 (fr) | Appareil d'analyse prédictive intégrée pour télésanté interactive et procédé de fonctionnement associé | |
WO2021120186A1 (fr) | Système et procédé d'analyse distribuée de défauts de produit, et support de stockage lisible par ordinateur | |
WO2020096098A1 (fr) | Procédé de gestion de travail d'annotation et appareil et système le prenant en charge | |
WO2012134106A2 (fr) | Procédé et appareil pour le stockage et l'affichage d'informations d'image médicale | |
WO2021045367A1 (fr) | Procédé et programme informatique visant à déterminer un état psychologique par un processus de dessin du bénéficiaire de conseils | |
WO2020071697A1 (fr) | Dispositif électronique et son procédé de commande | |
WO2019235828A1 (fr) | Système de diagnostic de maladie à deux faces et méthode associée | |
WO2024039120A1 (fr) | Dispositif de diagnostic portable sans face à face ayant des capteurs | |
WO2022177345A1 (fr) | Procédé et système pour générer un événement dans un objet sur un écran par reconnaissance d'informations d'écran sur la base de l'intelligence artificielle | |
CN113569671B (zh) | 异常行为报警方法、装置 | |
WO2021040192A1 (fr) | Système et procédé d'apprentissage de modèle d'intelligence artificielle | |
WO2022119162A1 (fr) | Méthode de prédiction de maladie basée sur une image médicale | |
WO2021107307A1 (fr) | Procédé basé sur un agent conversationnel pour fournir des informations de pêche, et dispositif associé | |
WO2022265399A1 (fr) | Dispositif, système et procédé de fourniture des solutions de gestion personnalisées par un employé sur la base d'une intelligence artificielle | |
WO2022139170A1 (fr) | Procédé d'analyse de lésion sur la base d'image médicale | |
EP4099225A1 (fr) | Procédé de formation d'un classificateur et système de classification de blocs | |
WO2021246625A1 (fr) | Système de plateforme en nuage à base d'intelligence artificielle permettant de lire une image médicale où un temps d'exécution attendu d'une couche individuelle est affiché | |
WO2021015489A2 (fr) | Procédé et dispositif d'analyse d'une zone d'image singulière à l'aide d'un codeur | |
WO2021049700A1 (fr) | Application et serveur pour la gestion de personnels de service | |
CN113360612A (zh) | 一种基于问诊请求的ai诊断方法、装置、存储介质和设备 | |
WO2024106946A1 (fr) | Dispositif et procédé d'aide à la décision clinique |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22796224 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.02.2024) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22796224 Country of ref document: EP Kind code of ref document: A1 |