WO2021137420A1 - Appareil de développement pour algorithme d'analyse et son procédé de fonctionnement - Google Patents
Appareil de développement pour algorithme d'analyse et son procédé de fonctionnement
- Publication number
- WO2021137420A1 (PCT/KR2020/016107)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- calculation block
- calculation
- connection
- analysis algorithm
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/33—Intelligent editors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/31—Programming languages or programming paradigms
- G06F8/315—Object-oriented languages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/34—Graphical or visual programming
Definitions
- the present invention relates to a method that can effectively support the development (construction) of a high-difficulty machine learning model using large-scale computational resources, without being constrained by a special development language or environment.
- artificial intelligence can be created (trained) or utilized (for inference or prediction) using machine learning.
- the training process forms, by means of an arithmetic device, the artificial intelligence that serves as the judgment criterion for operations such as data recognition and classification, and recognition and classification criteria may be generated as a result of the training.
- the judgment criterion generated through the training process is called a machine learning model.
- the present invention intends to propose a new method that can effectively support the development (construction) of a machine learning model.
- the present invention was created in consideration of the above circumstances, and its purpose is to effectively support the development (construction) of a high-level machine learning model using large-scale computational resources without the restriction of a special development language or environment.
- An apparatus for developing an analysis algorithm for achieving the above object is characterized by including: a block connection unit for connecting, by links, the calculation flow between calculation blocks in which independent calculation processing is performed in relation to the execution of the analysis algorithm; an error checking unit for checking whether an error occurs in the result of a forward calculation computed sequentially from an input calculation block to an output calculation block, based on a predefined operator, according to the link connections; and a connection value update unit for updating the connection values between the calculation blocks so that the output value of the output calculation block corresponding to the forward calculation result becomes a predefined optimal value when an error is confirmed in the forward calculation result.
- when a predefined calculation rule is to be applied to the output value from a specific calculation block, the blocks may be linked so that the output value from that calculation block is passed to an arbitrary calculation block via an aggregation block that applies the calculation rule.
- the block connection unit may connect, to the arbitrary calculation block, a calculation block of a layer that is at least a set number of layers away from the layer of the specific calculation block and at which the weight change between layers begins to fall below a reference value.
- the block connection unit may connect, through a separate aggregation block, a second calculation block that is another arbitrary calculation block to the first calculation block, i.e., the arbitrary calculation block connected to the specific calculation block via the aggregation block; to do so, the weight change between the layer of the first calculation block and layers at least the set number of layers away is checked in the order of neighboring layers, and the calculation block of the layer at which a weight change equal to or greater than the reference value is confirmed may be connected as the second calculation block.
- the connection value update unit may update the connection values between the calculation blocks through a backward operation based on at least one of a curvature and a slope.
- when the occurrence of an error in the forward operation result is confirmed, the connection value update unit may initially update the connection values between the calculation blocks based on the curvature and, when the error reduction obtained using the curvature becomes less than or equal to a reference value, update the connection values between the calculation blocks based on the slope.
- the analysis algorithm development apparatus may further include a recommendation unit that, as a result of repeatedly executing the analysis algorithm a predetermined number of times through each of two or more matrix operators, recommends as the operator for the forward operation a specific matrix operator whose average execution time in two adjacent execution steps is the shortest and whose difference between those average execution times is determined to be less than a threshold.
- the method of operating an analysis algorithm development apparatus for achieving the above object is characterized by including: a block connection step of connecting, by links, the calculation flow between calculation blocks in which independent calculation processing is performed in relation to the execution of the analysis algorithm; an error checking step of checking whether an error occurs in the forward calculation result computed sequentially from an input calculation block to an output calculation block, based on a predefined operator, according to the link connections; and a connection value updating step of updating the connection values between the calculation blocks so that the output value of the output calculation block corresponding to the forward calculation result becomes a predefined optimal value when an error is confirmed in the forward operation result.
- when a predefined calculation rule is to be applied to the output value from a specific calculation block, the blocks may be linked so that the output value from that calculation block is transmitted to an arbitrary calculation block via an aggregation block that applies the calculation rule.
- a calculation block of a layer that is at least a set number of layers away from the layer of the specific calculation block and at which the weight change between layers begins to fall below a reference value is connected to the arbitrary calculation block.
- to connect, through a separate aggregation block, a second calculation block that is another arbitrary calculation block to the first calculation block, i.e., the arbitrary calculation block connected to the specific calculation block via the aggregation block, the weight change between the layer of the first calculation block and layers at least the set number of layers away is checked in the order of neighboring layers, and the calculation block of the layer at which a weight change equal to or greater than the reference value is confirmed may be connected as the second calculation block.
- the connection values between the calculation blocks may be updated through a backward operation based on at least one of a curvature and a slope.
- when the occurrence of an error in the forward operation result is confirmed, the connection values between the calculation blocks are initially updated based on the curvature and, when the error reduction obtained using the curvature becomes less than or equal to a reference value, are updated based on the slope.
- the method may further include recommending, as the operator for the forward operation, a specific matrix operator whose average execution time in two adjacent execution steps is the shortest and whose difference between those average execution times is determined to be less than a threshold, as a result of repeatedly executing the analysis algorithm a predefined number of times through each of two or more matrix operators.
- the calculation flow between each calculation block in which independent calculation processing is performed in relation to the execution of the analysis algorithm is connected by a link, and an optimal operator for the execution of the analysis algorithm is selected.
- FIG. 1 is an exemplary diagram for explaining a machine learning model development environment according to an embodiment of the present invention.
- FIG. 2 is a schematic configuration diagram of an analysis algorithm development apparatus according to an embodiment of the present invention.
- FIG. 3 is an exemplary diagram for explaining a calculation block according to an embodiment of the present invention.
- FIGS. 4 and 5 are exemplary diagrams for explaining a connection method through an aggregation block according to an embodiment of the present invention.
- FIG. 6 is an exemplary diagram for explaining a connection value update between calculation blocks according to an embodiment of the present invention.
- FIG. 7 is a schematic flowchart for explaining an operating method of an analysis algorithm development apparatus according to an embodiment of the present invention.
- first, second, etc. used herein may be used to describe various components, but the components should not be limited by the terms. The above terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.
- the user's analysis flow is controlled and managed based on a GUI (Graphic User Interface), various analysis methods and algorithms such as statistical analysis and numerical analysis, including machine learning models such as deep learning, are controlled and managed, and algorithm development is supported.
- FIG. 1 schematically shows a machine learning model development environment according to an embodiment of the present invention.
- the machine learning model development environment may be divided into the following steps in terms of its system configuration and the processing flow between the components of that configuration.
- step 1), relating to the determination and size selection of resources required for data processing and analysis (such as CPU, GPU, and FPGA), records the type, size, and access information of the available resources in the system configuration file; when the resource allocation block is selected, this information is received and compiled into a list, as sketched below.
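- As a concrete illustration of step 1), the sketch below reads a hypothetical system configuration file and compiles the available resources into a list; the file name and field names (resources, type, size, access) are assumptions made for illustration, not the format actually used by the described system.

```python
# Minimal sketch of step 1): list available resources from a system configuration
# file. The JSON layout and field names are illustrative assumptions.
import json

def list_available_resources(config_path="system_config.json"):
    """Read the recorded resource types, sizes, and access information and
    return them as a list for the resource allocation block."""
    with open(config_path) as f:
        config = json.load(f)
    return [
        {
            "type": resource.get("type"),      # e.g. CPU, GPU, FPGA
            "size": resource.get("size"),      # e.g. number of cores or memory
            "access": resource.get("access"),  # e.g. host address or credentials
        }
        for resource in config.get("resources", [])
    ]
```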
- step 2), relating to the input, processing, and loading of the original data, may be understood as a process executed according to the size of the resources selected in step 1), using big data technologies including Hadoop and Spark.
- in step 3) of analysis algorithm development, the user (developer) can select or directly input the input variables used in the analysis algorithm, and when the analysis algorithm is not an existing standardized algorithm, the user (developer) is supported in directly modifying the core part of the algorithm processing with scripts.
- the internal execution code of the analysis algorithm may be provided based on, for example, a Python library, an R library, and the Python and R APIs provided by a deep learning framework (e.g., TensorFlow, PyTorch, MXNet, CNTK, DL4J, etc.).
- step 4), regarding model evaluation and fine-tuning, may be understood as a process of evaluating whether the developed model provides accurate results or has room for further improvement, and of adjusting the necessary input variables in the analysis algorithm according to the evaluation results.
- the input variable adjustment may be made in a way that the user (developer) selects directly, or by providing information so that the best result can be selected after all cases are substituted and processed according to a predetermined rule.
- in step 5), relating to the visualization of the analysis results, various charts are provided so that the developer can more easily understand the numerical analysis results obtained from model evaluation and fine-tuning or from the final result; for example, this may be understood as a process of visualizing neural network configuration information in algorithms such as deep learning by outputting it as a graph.
- step 6), new data inference using the developed model and transmission to an inference server, may be understood as a process of receiving new data that was not used for analysis model development and performing prediction or, if there is a separate inference server, transmitting the developed model to the inference server and applying it to the actual service and operating environment.
- step 7), adding a user-defined block, may be understood as a process of adding a function that cannot be processed by the predefined blocks provided by the system as a default.
- step 8), relating to the use of a pre-learning model, can be understood as a process of registering a previously developed model as a library and allowing the user (developer) to use it to build a new model or apply it directly to inference.
- the pre-learning model can be usefully utilized when the concept of the model is very complex, and the user (developer) can adjust the coefficients or weights calculated by the model by adding their own data to this complex model.
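- A minimal sketch of this adjustment is shown below, assuming the registered pre-learning model is exposed as a PyTorch module and the user's own data arrives as a standard data loader; the function name, loss, and optimizer choices are illustrative assumptions, not the system's actual interface.

```python
# Hedged sketch: adjust the weights of a previously developed (pre-trained) model
# with the user's own data. PyTorch is assumed purely for illustration.
import torch
import torch.nn as nn

def fine_tune(model: nn.Module, loader, epochs: int = 1, lr: float = 1e-4) -> nn.Module:
    """Update the coefficients (weights) calculated by the registered model
    using the user's additional data."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
    return model
```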
- this process can be used as a substitute for step 3) of the aforementioned analysis algorithm development.
- step 3), relating to the development of a block-link-based analysis algorithm, which is the core process for developing (constructing) a machine learning model in the machine learning model development environment according to an embodiment of the present invention, will be described in more detail below.
- FIG. 2 schematically shows the configuration of an analysis algorithm development apparatus 100 for realizing a block-link-based analysis algorithm development process according to an embodiment of the present invention.
- the analysis algorithm development apparatus 100 may have a configuration including a block connection unit 10 for connecting calculation blocks, an error check unit 20 for checking whether an error occurs, and a connection value update unit 30 for updating the connection values between block connections.
- the analysis algorithm development apparatus 100 may further include a recommendation unit 40 for recommending an operator, in addition to the above-described configuration.
- the whole or at least part of the configuration of the analysis algorithm development apparatus 100, including the block connection unit 10, the error check unit 20, the connection value update unit 30, and the recommendation unit 40, may be implemented in the form of a hardware module, a software module, or a combination of a hardware module and a software module.
- the software module may be understood, for example, as instructions executed by a processor for controlling operations in the analysis algorithm development apparatus 100, and these instructions may be mounted in a memory in the analysis algorithm development apparatus 100.
- the analysis algorithm development apparatus 100 may further include a communication unit 50, which is a communication module responsible for the communication functions supporting wired/wireless communication network connections, in addition to the above configuration.
- the apparatus 100 for developing an analysis algorithm can, through the above-described configuration, effectively support the development (construction) of a high-difficulty machine learning model using large-scale computational resources without being restricted to a special development language or environment.
- each configuration in the analysis algorithm development apparatus 100 for realizing this will be described in more detail.
- the block connection unit 10 performs a function of connecting the calculation blocks with a link.
- the block connection unit 10 connects the calculation flow between each calculation block in which independent calculation processing is performed in relation to the execution of the analysis algorithm by a link.
- the calculation block is an object for inputting or selecting the essential information necessary for the overall analysis process, such as data processing and analysis.
- the link may be understood as an object defining the characteristics of the connection between the calculation blocks.
- individual elements of an analysis algorithm or model, such as methods of controlling the data processing or analysis flow, are implemented as independent calculation blocks, and the calculation flow between the calculation blocks is connected and processed by links.
- a calculation block group (e.g., calculation block group A, calculation block group B) supports link-based connections within a group, between groups, and between a group and one or more external calculation blocks.
- implementing calculation blocks in group units can be understood as dividing and managing the entire complex analysis process and model in group units, which is an easy-to-manage form.
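- The sketch below illustrates the block-link idea in Python; the class names, fields, and the recursive forward evaluation are assumptions made for illustration and do not reproduce the patent's actual implementation.

```python
# Rough sketch of calculation blocks connected by links; names and structure
# are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Callable, List, Sequence

@dataclass
class CalculationBlock:
    name: str
    operation: Callable[[Sequence[float]], float]  # independent calculation of this block
    inputs: List["Link"] = field(default_factory=list)

@dataclass
class Link:
    source: "CalculationBlock"   # defines the connection between two blocks
    target: "CalculationBlock"
    weight: float = 1.0          # connection value adjusted by the backward operation

def connect(source: CalculationBlock, target: CalculationBlock, weight: float = 1.0) -> Link:
    """Connect the calculation flow between two blocks with a link."""
    link = Link(source, target, weight)
    target.inputs.append(link)
    return link

def forward(block: CalculationBlock) -> float:
    """Evaluate blocks sequentially from the input blocks toward this block."""
    upstream = [link.weight * forward(link.source) for link in block.inputs]
    return block.operation(upstream)

# Example: an input block feeding an output block that sums its upstream values.
inp = CalculationBlock("input", lambda _: 2.0)
out = CalculationBlock("output", sum)
connect(inp, out, weight=0.5)
print(forward(out))  # 1.0
```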
- when the block connection unit 10 intends to apply a predefined calculation rule to the output value from a specific calculation block, it can link the blocks so that the output value from that calculation block is delivered to an arbitrary calculation block via an aggregation block that applies the calculation rule.
- that is, the calculation result of a specific calculation block is not transferred directly to an adjacent block but is connected to an arbitrary calculation block through several calculation procedures, and aggregation blocks are provided so that the calculation results of the calculation blocks can be combined according to a specific algorithm or equation.
- in this way, the gradient is not lost and can be transmitted well, and distortion such as underestimating or overestimating a specific calculation result during algorithm processing is prevented.
- a method that can freely connect to arbitrary calculation blocks and apply calculation rules, rather than sequential connection with adjacent calculation blocks, is required when a special calculation rule is to be applied.
- a connection to an arbitrary calculation block is supported by applying a predefined calculation rule to an output value from a specific calculation block using the aggregation block.
- the connection of arbitrary calculation blocks through the aggregation block can be made according to the user's (developer's) definition, but in one embodiment of the present invention, it is assumed that the system automatically supports the connection with arbitrary calculation blocks.
- when the block connection unit 10 connects arbitrary calculation blocks through an aggregation block, it connects to an arbitrary calculation block the calculation block of a layer that is at least a set number of layers (e.g., three layers) away from the layer of the specific calculation block and at which the weight change between layers begins to fall below the reference value.
- further, in order to connect, through a separate aggregation block, another arbitrary calculation block (hereinafter, the second calculation block) to the arbitrary calculation block connected to the specific calculation block via the aggregation block (hereinafter, the first calculation block), the block connection unit 10 checks, in the order of neighboring layers, the weight change between the layer of the first calculation block and layers at least the set number of layers (e.g., three layers) away, and connects the calculation block of the layer at which a weight change equal to or greater than the reference value is confirmed as the second calculation block.
- that is, starting from an interval of three or more layers between intermediate layers, the interval between intermediate layers is increased one layer at a time until the weight change relative to the reference value input by the user (developer) is sufficiently large, and the arbitrary calculation block is constructed in this way, as sketched below.
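- As a rough illustration of this automatic connection rule, the sketch below assumes that the measured inter-layer weight change is available as a simple list indexed by layer; the function names, the three-layer gap, and the reference value are illustrative assumptions.

```python
# Hedged sketch of the remote-connection heuristic described above; the data
# representation (a per-layer list of weight changes) is an assumption.
from typing import List, Optional

def pick_first_remote_layer(start: int, weight_change: List[float],
                            min_gap: int = 3, reference: float = 0.01) -> Optional[int]:
    """First connection: the nearest layer at least `min_gap` layers away from
    the specific block's layer whose weight change falls below the reference."""
    for layer in range(start + min_gap, len(weight_change)):
        if weight_change[layer] < reference:
            return layer
    return None

def pick_second_remote_layer(first: int, weight_change: List[float],
                             min_gap: int = 3, reference: float = 0.01) -> Optional[int]:
    """Second connection: from the first connected layer, widen the interval one
    layer at a time and return the layer at which the weight change reaches or
    exceeds the reference value again."""
    for layer in range(first + min_gap, len(weight_change)):
        if weight_change[layer] >= reference:
            return layer
    return None
```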
- the aggregation block according to an embodiment of the present invention may support simultaneous connection with a plurality of calculation blocks, for example, as shown in FIG. 5; a predefined calculation rule can be applied to produce one or multiple outputs, and these results can in turn be transmitted to one or multiple calculation blocks through combined connections.
- in this way, the aggregation block according to an embodiment of the present invention can be connected to an adjacent calculation block or to a remote calculation block, and this block-to-block connection method can enable accurate learning even in a model having a complex network structure.
- the error check unit 20 performs a function of confirming whether an error occurs in the forward calculation result.
- the error check unit 20 checks whether an error occurs in the forward calculation result sequentially calculated from the input calculation block to the output calculation block based on a predefined operator according to the link connection between the calculation blocks.
- checking whether an error occurs in the forward operation result may be understood as being performed so that, when an error occurs, the connection values between the calculation blocks can be updated through a backward operation in order to improve accuracy.
- each adjacent calculation block is directly linked, and when this process is performed in the forward direction (input → output), it can be understood as defining a simple workflow.
- the reverse (output → input) operation process is the core of algorithm processing (e.g., error-function-based iterative processing in machine learning or numerical analysis models) and can be widely used in various machine learning models or numerical analysis models.
- such a backward operation process will be described in detail in the description of the connection value update unit 30 below.
- the connection value update unit 30 performs a function of updating the connection values between the calculation blocks through a backward operation.
- the connection value update unit 30 updates the connection values between the calculation blocks so that the output value of the output calculation block corresponding to the forward calculation result becomes a predefined optimal value.
- the connection value update unit 30 repeatedly updates the connection values between the calculation blocks through a backward operation based on at least one of the curvature and the slope, so that the output value of the output calculation block can reach the predefined optimal value.
- specifically, when the occurrence of an error in the forward operation result is confirmed, the connection value update unit 30 initially updates the connection values between the calculation blocks based on the curvature and, when the error reduction obtained using the curvature falls to or below the reference value, repeats the operation of updating the connection values based on the slope, thereby accelerating the convergence of the output value of the output calculation block and ensuring that the predefined optimal value is reached.
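- A minimal numeric sketch of this two-phase strategy is given below, assuming NumPy and user-supplied residual, Jacobian, and gradient functions; the damping schedule and thresholds are illustrative assumptions, not the patent's exact formulation.

```python
# Hedged sketch: curvature-based (damped Gauss-Newton style) updates first, then
# a fall-back to slope (gradient) updates when the error reduction becomes small.
import numpy as np

def update_connections(w, residual_fn, jacobian_fn, grad_fn,
                       damping=1e-3, lr=1e-2, min_reduction=1e-6, steps=100):
    """w: 1-D array of connection values; residual_fn(w) -> residual vector,
    jacobian_fn(w) -> Jacobian of the residuals, grad_fn(w) -> error gradient."""
    error = float(np.sum(residual_fn(w) ** 2))
    use_curvature = True
    for _ in range(steps):
        if use_curvature:
            J = jacobian_fn(w)
            r = residual_fn(w)
            H = J.T @ J + damping * np.eye(w.size)   # damped curvature approximation
            w = w + np.linalg.solve(H, -J.T @ r)     # curvature-based step
        else:
            w = w - lr * grad_fn(w)                  # slope (gradient) step
        new_error = float(np.sum(residual_fn(w) ** 2))
        damping = damping * 0.5 if new_error < error else damping * 2.0
        if use_curvature and error - new_error <= min_reduction:
            use_curvature = False                    # switch to slope-based updates
        error = new_error
    return w
```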
- the update method using the curvature in the embodiment of the present invention may be calculated using a Hessian matrix.
- the Hessian matrix, in turn, can be calculated using a Jacobian matrix.
- here, the damping factor (DF) is a positive value (>0) that may be increased or decreased, and thereby adjusted, according to whether the error increases or decreases.
- F(x) denotes a differentiable multivariable function, its gradient vector is used in the corresponding expression, and the remaining term in the expression represents a constant.
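- A standard curvature-based form consistent with the description above (a Hessian approximated from the Jacobian, with a positive damping factor adjusted with the error) can be written as follows; this is an assumed reconstruction for illustration, not the patent's verbatim equations.

```latex
% Assumed Levenberg-Marquardt-style relations (illustrative, not verbatim):
% Hessian approximated from the Jacobian J of the residual vector r(w)
H \;\approx\; J^{\top} J
% damped curvature-based update of the connection values w, with damping factor \mu > 0
\left(J^{\top} J + \mu I\right)\,\Delta w \;=\; -\,J^{\top} r
```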
- the recommendation unit 40 performs a function of recommending an operator for executing the analysis algorithm.
- the recommendation unit 40 repeatedly executes the analysis algorithm a predetermined number of times through each of the previously registered matrix operators and, according to the repeated execution results, recommends a specific matrix operator as the optimal operator for the forward operation.
- specifically, the recommendation unit 40 repeatedly executes the analysis algorithm a predefined number of times through each of the previously registered matrix operators and, as a result of this repeated execution, can recommend as the operator for the forward operation a specific matrix operator whose average execution time in two adjacent execution steps is the shortest and whose difference between those average execution times is found to be less than a threshold.
- examples of such matrix operators include GEMM (General Matrix Multiply) and the Strassen, Pan, Winograd, Coppersmith-Winograd, Stothers, and Williams algorithms.
- to this end, the recommendation unit 40 calculates a probabilistic difference between the average execution time in the previous execution step (t−1) and the average execution time in the next execution step (t) for each of the matrix operators to be compared.
- such a stochastic difference can be calculated as a value of the cumulative probability distribution indicating whether the average execution times differ within a specific probability, based on a random variable F computed, as in [Equation 4] below, from the values of two random variables V1 and V2 with k1 and k2 degrees of freedom.
- the cumulative probability serving as the optimal operator recommendation criterion can be changed by the user according to the situation.
- the default cumulative probability is recommended to be 0.9 and may be changed and input according to the situation.
- the repeated execution may continue until the random variable F shows little change and the conclusion about the difference in average execution times no longer changes, as sketched below.
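- The sketch below illustrates this comparison, assuming per-step execution-time samples are collected for each registered operator and that SciPy's F distribution supplies the cumulative probability; the exact form of the random variable F in [Equation 4] is not reproduced here, so an F statistic formed from the two sample variances is used purely as an illustrative stand-in.

```python
# Hedged sketch of the operator recommendation: compare average execution times
# of adjacent execution steps with an F-based cumulative probability (0.9 default).
import numpy as np
from scipy import stats

def times_have_converged(prev_times, next_times, cumulative_probability=0.9):
    """Return True when the execution times of step t-1 and step t no longer
    differ within the chosen cumulative probability. The F statistic here is a
    ratio of sample variances, an illustrative stand-in for [Equation 4]."""
    v1 = np.var(prev_times, ddof=1)
    v2 = np.var(next_times, ddof=1)
    k1, k2 = len(prev_times) - 1, len(next_times) - 1
    return stats.f.cdf(v1 / v2, k1, k2) < cumulative_probability

def recommend_operator(timings):
    """timings: {operator_name: [times_step_1, times_step_2, ...]} where each
    entry is a list of measured execution times for that step. Recommend the
    operator with the shortest latest average among the converged candidates."""
    candidates = {
        name: np.mean(steps[-1])
        for name, steps in timings.items()
        if len(steps) >= 2 and times_have_converged(steps[-2], steps[-1])
    }
    return min(candidates, key=candidates.get) if candidates else None
```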
- as for the matrix operator applied each time the analysis algorithm is repeatedly executed, the operators may be applied to the calculation process in a fixed order selected in advance by the user (developer) or in a random order.
- intermediate operators that perform specific operations in advance using the basic operators of matrix operation are provided, and a user (developer) can also create and provide new operators.
- for example, in a CNN (Convolutional Neural Network), an appropriate convolution operation method can be selected.
- the user can directly specify the matrix operator and force its use.
- a user may develop an analysis model by randomly applying an appropriate matrix operator for each step.
- in addition, the calculation time according to the size and shape of the matrix may be stored in a file or database in the system so that, when a matrix of the same type is input later, the system first recommends the optimal operator based on the results obtained in the past, as sketched below.
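- A small sketch of this caching idea follows, assuming an in-memory dictionary in place of the file or database mentioned above; the key structure is an illustrative assumption.

```python
# Hedged sketch: remember past calculation times per matrix shape and operator,
# and recommend the best-known operator when the same shape appears again.
from collections import defaultdict

_timing_history = defaultdict(dict)  # {(rows, cols): {operator_name: avg_time}}

def record_timing(shape, operator_name, avg_time):
    """Store the measured calculation time for a matrix of the given size/shape."""
    _timing_history[tuple(shape)][operator_name] = avg_time

def recommend_from_history(shape):
    """Recommend the operator that performed best in the past for this shape,
    or None if no history has been recorded yet."""
    history = _timing_history.get(tuple(shape))
    return min(history, key=history.get) if history else None
```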
- as described above, the calculation flow between calculation blocks in which independent calculation processing is performed in relation to the execution of the analysis algorithm is connected by links and, through the method of recommending the optimal operator for the execution of the analysis algorithm, the development (construction) of high-level machine learning models using large-scale computational resources can be effectively supported.
- the block connection unit 10 connects the calculation flow between each calculation block in which independent calculation processing is performed in relation to the execution of the analysis algorithm by a link (S11-S14).
- the calculation block is an object for inputting or selecting the essential information necessary for the overall analysis process, such as data processing and analysis.
- the link may be understood as an object defining the characteristics of the connection between the calculation blocks.
- individual elements of an analysis algorithm or model, such as methods of controlling the data processing or analysis flow, are implemented as independent calculation blocks, and the calculation flow between the calculation blocks is connected and processed by links.
- a detailed model including calculation block groups (calculation block group A, calculation block group B) can be implemented.
- a calculation block group (e.g., calculation block group A, calculation block group B) supports link-based connections within a group, between groups, and between a group and one or more external calculation blocks.
- implementing calculation blocks in group units can be understood as dividing and managing the entire complex analysis process and model in group units, which is an easy-to-manage form.
- through step S13, the block connection unit 10 can link the blocks so that the output value from the corresponding calculation block is delivered to an arbitrary calculation block via an aggregation block for applying the calculation rule.
- that is, the calculation result of a specific calculation block is not transferred directly to an adjacent block but is connected to an arbitrary calculation block through several calculation procedures, and aggregation blocks are provided so that the calculation results of the calculation blocks can be combined according to a specific algorithm or equation.
- in this way, the gradient is not lost and can be transmitted well, and distortion such as underestimating or overestimating a specific calculation result during algorithm processing is prevented.
- a method that can freely connect to arbitrary calculation blocks and apply calculation rules, rather than sequential connection with adjacent calculation blocks, is required when a special calculation rule is to be applied.
- a connection to an arbitrary calculation block is supported by applying a predefined calculation rule to an output value from a specific calculation block using the aggregation block.
- the connection of arbitrary calculation blocks through the aggregation block can be made according to the user's (developer's) definition, but in one embodiment of the present invention, it is assumed that the system automatically supports the connection with arbitrary calculation blocks.
- when the block connection unit 10 connects arbitrary calculation blocks through an aggregation block, it connects to an arbitrary calculation block the calculation block of a layer that is at least a set number of layers (e.g., three layers) away from the layer of the specific calculation block and at which the weight change between layers begins to fall below the reference value.
- further, in order to connect, through a separate aggregation block, another arbitrary calculation block (hereinafter, the second calculation block) to the arbitrary calculation block connected to the specific calculation block via the aggregation block (hereinafter, the first calculation block), the block connection unit 10 checks, in the order of neighboring layers, the weight change between the layer of the first calculation block and layers at least the set number of layers (e.g., three layers) away, and connects the calculation block of the layer at which a weight change equal to or greater than the reference value is confirmed as the second calculation block.
- that is, starting from an interval of three or more layers between intermediate layers, the interval between intermediate layers is increased one layer at a time until the weight change relative to the reference value input by the user (developer) is sufficiently large, and the arbitrary calculation block is constructed in this way.
- the aggregation block may support simultaneous connection with a plurality of calculation blocks, as illustrated in FIG. 5; a predefined calculation rule can be applied to produce one or multiple outputs, and these results can in turn be transmitted to one or multiple calculation blocks through combined connections.
- in this way, the aggregation block according to an embodiment of the present invention can be connected to an adjacent calculation block or to a remote calculation block, and this block-to-block connection method can enable accurate learning even in a model having a complex network structure.
- the error check unit 20 checks whether an error occurs in the forward calculation result sequentially calculated from the input calculation block to the output calculation block based on a predefined operator according to the link connection between the calculation blocks (S15).
- checking whether an error occurs in the forward operation result may be understood as being performed so that, when an error occurs, the connection values between the calculation blocks can be updated through a backward operation in order to improve accuracy.
- when an error is confirmed in the forward operation result, the connection value update unit 30 updates the connection values between the calculation blocks so that the output value of the output calculation block corresponding to the forward calculation result becomes a predefined optimal value (S16-S17).
- the connection value update unit 30 repeatedly updates the connection values between the calculation blocks through a backward operation based on at least one of the curvature and the slope, so that the output value of the output calculation block can reach the predefined optimal value.
- specifically, when it is confirmed that an error occurs in the forward operation result, the connection value update unit 30 initially updates the connection values between the calculation blocks based on the curvature and, when the error reduction obtained using the curvature falls to or below the reference value, repeats the operation of updating the connection values based on the slope, thereby accelerating the convergence of the output value of the output calculation block and ensuring that the predefined optimal value is reached.
- thereafter, the recommendation unit 40 repeatedly executes the analysis algorithm a predefined number of times through each of the previously registered matrix operators and, according to the results of the repeated execution, recommends a specific matrix operator as the optimal operator for the forward operation (S18-S21).
- specifically, the recommendation unit 40 repeatedly executes the analysis algorithm a predefined number of times through each of the previously registered matrix operators and, as a result of this repeated execution, can recommend as the optimal operator for the forward operation a specific matrix operator whose average execution time in two adjacent execution steps is the shortest and whose difference between those average execution times is found to be less than a threshold.
- examples of such matrix operators include GEMM (General Matrix Multiply) and the Strassen, Pan, Winograd, Coppersmith-Winograd, Stothers, and Williams algorithms.
- to this end, the recommendation unit 40 calculates a probabilistic difference between the average execution time in the previous execution step (t−1) and the average execution time in the next execution step (t) for each of the matrix operators to be compared.
- this stochastic difference can be calculated as a value of the cumulative probability distribution indicating whether the average execution times differ within a specific probability, based on the random variable F computed, as in [Equation 4] mentioned above, from the values of two random variables V1 and V2 with k1 and k2 degrees of freedom.
- the cumulative probability serving as the optimal operator recommendation criterion can be changed by the user according to the situation.
- the default cumulative probability is recommended to be 0.9 and may be changed and input according to the situation.
- the repeated execution may continue until the random variable F shows little change and the conclusion about the difference in average execution times no longer changes.
- as described above, the calculation flow between calculation blocks in which independent calculation processing is performed in relation to the execution of the analysis algorithm is connected by links and, through the method of recommending the optimal operator for the execution of the analysis algorithm, the development (construction) of high-level machine learning models using large-scale computational resources can be effectively supported.
- Implementations of the subject matter described in this specification may be implemented as digital electronic circuits, or as computer software, firmware, or hardware including the structures disclosed in this specification and their structural equivalents, or as a combination of one or more of these.
- Implementations of the subject matter described herein may be implemented as one or more computer program products, that is, one or more modules of computer program instructions encoded on a tangible program storage medium for execution by, or for controlling the operation of, a processing system.
- the computer-readable medium may be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of these.
- the term "processing system" encompasses all apparatus, devices, and machines for processing data, including, for example, programmable processors, computers, or multiple processors or computers.
- a processing system may include, in addition to hardware, code that creates an execution environment for the computer program in question, such as code constituting processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of these.
- a computer program (also known as a program, software, software application, script, or code) may be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- a computer program does not necessarily correspond to a file in a file system.
- a program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored within a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
- the computer program may be deployed to be executed on a single computer or multiple computers located at one site or distributed over a plurality of sites and interconnected by a communication network.
- computer-readable media suitable for storing computer program instructions and data include all types of non-volatile memory, media, and memory devices, including, for example, semiconductor memory devices such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks or external disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
- the processor and memory may be supplemented by, or integrated into, special purpose logic circuitry.
- Implementations of the subject matter described herein may be implemented in a computing system that includes a back-end component such as a data server, a middleware component such as an application server, a front-end component such as a client computer having a web browser or graphical user interface through which a user can interact with an implementation of the subject matter described herein, or any combination of one or more such back-end, middleware, or front-end components. The components of the system may be interconnected by any form or medium of digital data communication, such as a communication network.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The present invention relates to a development apparatus for an analysis algorithm and an operating method thereof, the development apparatus being capable of effectively supporting the development (construction) of a high-level machine learning model using large-scale computational resources, without the constraints of a special development language or environment.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2019-0178384 | 2019-12-30 | ||
KR1020190178384A KR102113546B1 (ko) | 2019-12-30 | 2019-12-30 | 분석알고리즘개발장치 및 그 동작 방법 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021137420A1 true WO2021137420A1 (fr) | 2021-07-08 |
Family
ID=71090699
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/016107 WO2021137420A1 (fr) | 2019-12-30 | 2020-11-16 | Appareil de développement pour algorithme d'analyse et son procédé de fonctionnement |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102113546B1 (fr) |
WO (1) | WO2021137420A1 (fr) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102113546B1 (ko) * | 2019-12-30 | 2020-06-02 | 한국과학기술정보연구원 | 분석알고리즘개발장치 및 그 동작 방법 |
KR102677938B1 (ko) * | 2022-01-27 | 2024-06-25 | 주식회사 소이넷 | 데이터 저장장치의 작동방법 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20000061371A (ko) * | 1999-03-25 | 2000-10-16 | 김기삼 | 물체인식을 위한 다층 신경망 학습방법 |
US20150324690A1 (en) * | 2014-05-08 | 2015-11-12 | Microsoft Corporation | Deep Learning Training System |
US20170228646A1 (en) * | 2016-02-04 | 2017-08-10 | Qualcomm Incorporated | Spiking multi-layer perceptron |
US20190114511A1 (en) * | 2017-10-16 | 2019-04-18 | Illumina, Inc. | Deep Learning-Based Techniques for Training Deep Convolutional Neural Networks |
KR102113546B1 (ko) * | 2019-12-30 | 2020-06-02 | 한국과학기술정보연구원 | 분석알고리즘개발장치 및 그 동작 방법 |
-
2019
- 2019-12-30 KR KR1020190178384A patent/KR102113546B1/ko active IP Right Grant
-
2020
- 2020-11-16 WO PCT/KR2020/016107 patent/WO2021137420A1/fr active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20000061371A (ko) * | 1999-03-25 | 2000-10-16 | 김기삼 | 물체인식을 위한 다층 신경망 학습방법 |
US20150324690A1 (en) * | 2014-05-08 | 2015-11-12 | Microsoft Corporation | Deep Learning Training System |
US20170228646A1 (en) * | 2016-02-04 | 2017-08-10 | Qualcomm Incorporated | Spiking multi-layer perceptron |
US20190114511A1 (en) * | 2017-10-16 | 2019-04-18 | Illumina, Inc. | Deep Learning-Based Techniques for Training Deep Convolutional Neural Networks |
KR102113546B1 (ko) * | 2019-12-30 | 2020-06-02 | 한국과학기술정보연구원 | 분석알고리즘개발장치 및 그 동작 방법 |
Non-Patent Citations (1)
Title |
---|
MOON SANG-WOO, SEONG-GON KONG: "Pattern Classification using the Block-based Neural Network", JOURNAL OF KOREAN FUZZY LOGIC & INTELLIGENT SYSTEM SOCIETY, vol. 9, no. 4, 1 August 1999 (1999-08-01), pages 397 - 400, XP055826270 * |
Also Published As
Publication number | Publication date |
---|---|
KR102113546B1 (ko) | 2020-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021137420A1 (fr) | Appareil de développement pour algorithme d'analyse et son procédé de fonctionnement | |
EP3735662A1 (fr) | Procédé de réalisation d'apprentissage d'un réseau neuronal profond et appareil associé | |
WO2019209059A1 (fr) | Apprentissage machine sur une chaîne de blocs | |
US7257459B1 (en) | Method and apparatus for scheduling pilot lots | |
WO2018030747A1 (fr) | Appareil et procédé pour générer un plan de planification de distribution par apprentissage d'un trajet de distribution | |
WO2020180084A1 (fr) | Procédé permettant d'achever la coloration d'une image cible, et dispositif et programme informatique associés | |
WO2019132299A1 (fr) | Système, dispositif et procédé de mise à l'échelle de ressources basée sur la priorité dans un système infonuagique | |
WO2020091207A1 (fr) | Procédé, appareil et programme informatique pour compléter une peinture d'une image et procédé, appareil et programme informatique pour entraîner un réseau neuronal artificiel | |
WO2019031783A1 (fr) | Système de fourniture de fonction à la demande (faas), et procédé de fonctionnement du système | |
WO2020096282A1 (fr) | Système informatique en nuage sans serveur en fonction d'un service | |
WO2020114184A1 (fr) | Procédé, appareil et dispositif de modélisation conjointe, et support de stockage lisible par ordinateur | |
WO2020159016A1 (fr) | Procédé d'optimisation de paramètre de réseau neuronal approprié pour la mise en œuvre sur matériel, procédé de fonctionnement de réseau neuronal et appareil associé | |
WO2022085958A1 (fr) | Dispositif électronique et son procédé de fonctionnement | |
WO2022196945A1 (fr) | Appareil pour prévoir une répartition de la population sur la base d'un modèle de simulation de répartition de la population, et procédé de prévision de répartition de la population à l'aide de celui-ci | |
WO2023214624A1 (fr) | Système et procédé de conception de circuit intégré basé sur un apprentissage par renforcement profond utilisant un partitionnement | |
WO2023171981A1 (fr) | Dispositif de gestion de caméra de surveillance | |
WO2019088470A1 (fr) | Processeur et ses procédés de commande | |
WO2021040192A1 (fr) | Système et procédé d'apprentissage de modèle d'intelligence artificielle | |
WO2020159269A1 (fr) | Traitement de modèles de calcul en parallèle | |
WO2020222347A1 (fr) | Procédé d'agencement de machine virtuelle et dispositif d'agencement de machine virtuelle le mettant en œuvre | |
WO2024005562A1 (fr) | Incorporation de réseaux de neurones artificiels en tant que matrice pour un dispositif de réseau dans un réseau sans fil | |
WO2023022321A1 (fr) | Serveur d'apprentissage distribué et procédé d'apprentissage distribué | |
EP3918477A1 (fr) | Traitement de modèles de calcul en parallèle | |
WO2020213885A1 (fr) | Serveur et son procédé de commande | |
WO2022191668A1 (fr) | Réalisation d'une tâche de traitement ordonnée par une application |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20909424 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20909424 Country of ref document: EP Kind code of ref document: A1 |