CN110728371A - System, method and electronic device for executing automatic machine learning scheme - Google Patents


Info

Publication number
CN110728371A
Authority
CN
China
Prior art keywords
stage
machine learning
strategy
training
scheme
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910876293.2A
Other languages
Chinese (zh)
Inventor
乔胜传
王敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
4Paradigm Beijing Technology Co Ltd
Original Assignee
4Paradigm Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 4Paradigm Beijing Technology Co Ltd filed Critical 4Paradigm Beijing Technology Co Ltd
Priority to CN201910876293.2A priority Critical patent/CN110728371A/en
Publication of CN110728371A publication Critical patent/CN110728371A/en
Priority to PCT/CN2020/115913 priority patent/WO2021052422A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a system, a method, and an electronic device for executing an automatic machine learning scheme. The system comprises: a scheme editor for setting the stages included in automatic machine learning training and, for each stage, providing at least one configuration interface through which a user configures at least one strategy for the corresponding stage; and a scheme executor for executing in parallel, according to the execution order of the stages, a plurality of workflows each formed by serially connecting one strategy selected from each stage, obtaining an execution result corresponding to each workflow, and selecting one workflow as the final machine learning scheme according to the execution results; each strategy of a preceding stage is serially connected to each strategy of the following stage.

Description

System, method and electronic device for executing automatic machine learning scheme
Technical Field
The present invention relates to the field of data analysis, and more particularly, to a system for executing an automatic machine learning scheme, a method for executing an automatic machine learning scheme, an electronic device, and a computer-readable storage medium.
Background
In the field of automatic machine learning, the automatic machine learning process can be regarded as an automatic modeling program whose core is an automatic modeling strategy. The program comprises a number of automatic modeling methods, uses automatically controlled sample generation (i.e., data splicing), feature extraction, model training, and similar processes as its means, and takes the optimal modeling effect as its goal.
In the prior art, this program is an automatic modeling black box: after data is input, the black box automatically executes the machine learning process and directly outputs a model result. A system operator cannot intuitively observe the execution inside the black box; the operator can only wait for the whole program to finish and then inspect the model result, so visibility is poor.
Disclosure of Invention
It is an object of the present invention to provide a system for executing an automatic machine learning scheme, comprising:
a scheme editor for setting the stages included in automatic machine learning training and, for each stage, providing at least one configuration interface through which a user configures at least one strategy for the corresponding stage; and
a scheme executor for executing in parallel, according to the execution order of the stages, a plurality of workflows each formed by serially connecting one strategy selected from each stage, obtaining an execution result corresponding to each workflow, and selecting one workflow as the final machine learning scheme according to the execution results; wherein each strategy of a preceding stage is serially connected to each strategy of the following stage.
Optionally, the stages include a data splicing stage, a feature extraction stage, and a model training stage.
Optionally, the data splicing stage is configured to splice imported behavior data and feedback data into training data;
the feature extraction stage is configured to perform feature extraction on the training data to generate training samples; and
the model training stage is configured to train a machine learning model based on the training samples using a model training algorithm.
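As an illustration of how these three stages fit together, the sketch below chains them as plain functions. It is only a toy sketch: the field names, the join key, and the threshold "model" are assumptions for demonstration, not the patent's implementation.

```python
# Illustrative sketch of the three training stages as a sequential pipeline.
# All names and data shapes here are hypothetical; the patent does not
# prescribe an API.

def data_splicing(behavior_data, feedback_data):
    """Splice behavior records with their feedback (labels) by a shared key."""
    feedback_by_id = {r["id"]: r["label"] for r in feedback_data}
    return [
        {**r, "label": feedback_by_id[r["id"]]}
        for r in behavior_data
        if r["id"] in feedback_by_id
    ]

def feature_extraction(training_data):
    """Turn spliced records into (feature_vector, label) training samples."""
    return [([row["x1"], row["x2"]], row["label"]) for row in training_data]

def model_training(samples):
    """Train a trivial threshold 'model' on the samples (placeholder algorithm)."""
    positives = [f for f, y in samples if y == 1]
    threshold = sum(f[0] for f in positives) / max(len(positives), 1)
    return {"threshold": threshold}

behavior = [{"id": 1, "x1": 2.0, "x2": 0.5}, {"id": 2, "x1": 4.0, "x2": 1.5}]
feedback = [{"id": 1, "label": 1}, {"id": 2, "label": 0}]
model = model_training(feature_extraction(data_splicing(behavior, feedback)))
print(model)
```

In a real deployment each stage would be one of the user-configured strategies rather than a fixed function; the point is only the order of data flow between the stages.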
Optionally, the scheme editor is configured to provide, for each strategy, different generation manners for generating that strategy;
the generation manners comprise at least one of directed acyclic graph generation, scripting language generation, and programming language generation;
wherein scripting language generation comprises at least one of custom scripting language generation and mainstream scripting language generation.
Optionally, the scheme editor is configured to provide at least two configuration interfaces for the data splicing stage; and
one configuration interface is used to configure the data splicing stage to comprise an expert strategy, while another configuration interface is used to configure the data splicing stage to comprise an automatic splicing strategy.
Optionally, the scheme executor is configured to, while executing in parallel the plurality of workflows formed by serially connecting one strategy selected from each stage according to the execution order of the stages, introduce a judgment step after at least one stage, so that only the best-performing result of the current stage continues to the next stage.
Optionally, the scheme executor is configured to provide, for the judgment step, a configuration interface through which a user configures the judgment step.
Optionally, the scheme executor is configured to provide different generation manners to generate the judgment step;
the generation manners comprise at least one of scripting language generation and programming language generation;
wherein scripting language generation comprises at least one of custom scripting language generation and mainstream scripting language generation.
Optionally, the system further comprises an information presenter;
the information presenter is configured to present the execution information of each stage, stage by stage.
Optionally, the information presenter is configured to provide the execution information of each stage on the same interactive interface.
Optionally, a first part of the interactive interface is used to display overall operation information and an overall operation graph of the automatic machine learning training process, the overall operation graph being a directed acyclic graph that displays the current progress of each stage;
a second part of the interactive interface is used to display the current running state of each stage;
a third part of the interactive interface is used to display resource occupation and log information of the automatic machine learning training process; and
the second part of the interactive interface is also used to display the strategy content of each stage.
According to a second aspect of the present invention, there is also provided a method for performing an automatic machine learning scheme, comprising:
setting each stage included in automatic machine learning training;
respectively providing at least one configuration interface for each stage;
acquiring at least one strategy of the corresponding stage configured through the at least one configuration interface;
according to the execution order of the stages, executing in parallel a plurality of workflows each formed by serially connecting one strategy selected from each stage, to obtain an execution result corresponding to each workflow, wherein each strategy of a preceding stage is serially connected to each strategy of the following stage; and
selecting one workflow as the final machine learning scheme according to the execution results.
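Because each strategy of a preceding stage is serially connected to each strategy of the following stage, the candidate workflows are the Cartesian product of the per-stage strategy sets, which can then be executed in parallel. A minimal sketch, where the stage and strategy names and the toy arithmetic standing in for real strategies are illustrative assumptions:

```python
from itertools import product
from concurrent.futures import ThreadPoolExecutor

# Hypothetical strategies per stage: each is a function from input to output,
# and the final stage's output serves as the workflow's execution result.
stages = {
    "splicing": {"expert": lambda x: x + 1, "auto": lambda x: x + 2},
    "features": {"basic": lambda x: x * 2, "rich": lambda x: x * 3},
    "training": {"gbdt": lambda x: x % 7, "linear": lambda x: x % 5},
}

def run_workflow(choice):
    """Execute one strategy per stage in order; return (choice, result)."""
    value = 1  # stand-in for the imported data
    for stage_name, strategy_name in zip(stages, choice):
        value = stages[stage_name][strategy_name](value)
    return choice, value

# All candidate workflows = Cartesian product of per-stage strategy names.
workflows = list(product(*(s.keys() for s in stages.values())))

with ThreadPoolExecutor() as pool:
    results = dict(pool.map(run_workflow, workflows))

best = max(results, key=results.get)  # workflow with the best execution result
print(best, results[best])
```

In a real executor each strategy would launch a training job rather than a lambda, and the execution result would be a model quality metric rather than an integer.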
Optionally, the stages include a data splicing stage, a feature extraction stage, and a model training stage.
Optionally, the data splicing stage is configured to splice imported behavior data and feedback data into training data;
the feature extraction stage is configured to perform feature extraction on the training data to generate training samples; and
the model training stage is configured to train a machine learning model based on the training samples using a model training algorithm.
Optionally, the method further comprises:
for each strategy, providing different generation manners to generate that strategy;
the generation manners comprise at least one of directed acyclic graph generation, scripting language generation, and programming language generation;
wherein scripting language generation comprises at least one of custom scripting language generation and mainstream scripting language generation.
Optionally, the method further comprises:
providing at least two configuration interfaces for the data splicing stage; and
one configuration interface is used to configure the data splicing stage to comprise an expert strategy, while another configuration interface is used to configure the data splicing stage to comprise an automatic splicing strategy.
Optionally, the method further comprises:
while executing in parallel, according to the execution order of the stages, the plurality of workflows formed by serially connecting one strategy selected from each stage, introducing a judgment step after at least one stage, so that only the best-performing result of the current stage continues to the next stage.
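The judgment step can be pictured as a filter between stages: of all candidate outputs a stage produced, only the best-scoring one is carried into the next stage, pruning the combinatorial growth of workflows. A hedged sketch, where the scoring function and strategy names are assumptions:

```python
# Hypothetical per-stage strategies and a judgment step that keeps only
# the best intermediate result before the next stage runs.

def judge(candidates, score):
    """Judgment step: keep only the candidate with the best score."""
    return max(candidates, key=score)

# Stage 1: two splicing strategies produce two candidate datasets.
splicing_outputs = {"expert": [3, 1, 2], "auto": [5, 4]}

# Score a candidate dataset by, say, its size (purely illustrative).
best_name = judge(splicing_outputs, score=lambda k: len(splicing_outputs[k]))

# Only the winning branch continues into feature extraction.
surviving = {best_name: splicing_outputs[best_name]}
print(surviving)
```

With such a filter after stage 1, the following stages each run once per surviving branch instead of once per combination, which is the efficiency the judgment step buys.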
Optionally, the method further comprises:
providing a configuration interface for the judgment step; and
acquiring the judgment step configured through the configuration interface.
Optionally, the method further comprises:
providing different generation manners to generate the judgment step;
the generation manners comprise at least one of scripting language generation and programming language generation;
wherein scripting language generation comprises at least one of custom scripting language generation and mainstream scripting language generation.
Optionally, the method further comprises:
the execution information for each of the stages is presented in terms of stage.
Optionally, the method further comprises:
a first part of the interactive interface is used to display overall operation information and an overall operation graph of the automatic machine learning training process, the overall operation graph being a directed acyclic graph that displays the current progress of each stage;
a second part of the interactive interface is used to display the current running state of each stage;
a third part of the interactive interface is used to display resource occupation and log information of the automatic machine learning training process; and
the second part of the interactive interface is also used to display the strategy content of each stage.
According to a third aspect of the present invention, there is also provided an electronic device comprising the system for executing an automatic machine learning scheme according to the first aspect of the present invention; or, alternatively,
a processor and a memory, the memory storing instructions for controlling the processor to perform the method according to the second aspect of the present invention.
According to a fourth aspect of the present invention, there is also provided a computer-readable storage medium, wherein a computer program is stored thereon, which computer program, when being executed by a processor, carries out the method as set forth in the second aspect of the present invention.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
According to the system, method, and electronic device of the present embodiment, the automatic machine learning training process is divided into stages, and at least one configuration interface is provided for each stage so that the user can configure at least one strategy for the corresponding stage. According to the execution order of the stages, several workflows, each formed by serially connecting one strategy selected from each stage, are executed in parallel to obtain an execution result corresponding to each workflow, and one workflow is selected as the final machine learning scheme according to the execution results, wherein each strategy of a preceding stage is serially connected to each strategy of the following stage. The system thus supports running multiple strategies for multiple stages simultaneously, selects the best strategies, and combines them into a complete automatic machine learning scheme, thereby improving the modeling effect and performance of automatic modeling in various service scenarios.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic diagram of a hardware configuration of an electronic device according to an embodiment of the present invention;
FIG. 2 is a schematic flow diagram for performing an automatic machine learning scheme in accordance with an embodiment of the present invention;
FIGS. 3a, 3b, 4a, 4b, 5 and 6 are examples of executing an automatic machine learning scheme according to exemplary embodiments of the present invention;
FIG. 7a is a functional block diagram of a system for performing an automatic machine learning scheme in accordance with an embodiment of the present invention;
FIG. 7b is a functional block diagram of a system for performing an automatic machine learning scheme in accordance with another embodiment of the present invention;
FIG. 8 is a functional block diagram of an electronic device according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to another embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Various embodiments and examples according to embodiments of the present invention are described below with reference to the accompanying drawings.
< hardware configuration >
Fig. 1 is a block diagram showing a hardware configuration of an electronic apparatus 1000 that can implement an embodiment of the present invention.
The electronic device 1000 may be a laptop, desktop, cell phone, tablet, etc.
In one embodiment, as shown in fig. 1, the electronic device 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, a speaker 1700, a microphone 1800, and so forth. The processor 1100 may be a central processing unit (CPU), a microcontroller unit (MCU), or the like. The memory 1200 includes, for example, a ROM (read-only memory), a RAM (random access memory), and a nonvolatile memory such as a hard disk. The interface device 1300 includes, for example, a USB interface, a headphone interface, and the like. The communication device 1400 is capable of wired or wireless communication and may specifically include Wi-Fi communication, Bluetooth communication, 2G/3G/4G/5G communication, and the like. The display device 1500 is, for example, a liquid crystal display panel, a touch panel, or the like. The input device 1600 may include, for example, a touch screen, a keyboard, and a somatosensory input. A user can input and output voice information through the speaker 1700 and the microphone 1800.
The electronic device shown in fig. 1 is merely illustrative and is in no way meant to limit the invention, its application, or uses. In an embodiment of the invention, the memory 1200 of the electronic device 1000 is configured to store instructions for controlling the processor 1100 to perform any of the methods for executing an automatic machine learning scheme provided by the embodiments of the invention. It will be appreciated by those skilled in the art that although a plurality of devices are shown for the electronic device 1000 in fig. 1, the present invention may involve only some of them; for example, the electronic device 1000 may involve only the processor 1100 and the memory 1200. The skilled person can design the instructions according to the disclosed solution. How instructions control the operation of the processor is well known in the art and is not described in detail herein.
In another embodiment, as shown in FIG. 8, electronic device 1000 may include a system 7000 for executing an automatic machine learning scheme for implementing the method for executing an automatic machine learning scheme of any of the embodiments of the present invention.
< method examples >
In the present embodiment, a method for executing an automatic machine learning scheme is provided, which may be performed by the system 7000 for executing an automatic machine learning scheme.
As shown in fig. 2, the method for executing the automatic machine learning scheme of the present embodiment may include the following steps S2100 to S2500:
in step S2100, the stages included in the automatic machine learning training are set.
A stage is the minimum execution unit of automatic machine learning training. In this embodiment, for example, the automatic machine learning training may be divided into a data splicing stage, a feature extraction stage, and a model training stage.
The data splicing stage is used to splice the imported behavior data and feedback data into training data; the feature extraction stage is used to perform feature extraction on the training data to generate training samples; and the model training stage is used to train the machine learning model based on the training samples using a model training algorithm. The behavior data and the feedback data relate to the feature portion of the training samples and may be imported by the user in different manners, which is not described in detail herein.
It is understood that the data splicing stage, feature extraction stage, and model training stage above are only a typical division of automatic machine learning training, applicable to different business scenarios. For example, automatic machine learning training may instead be divided into a data template definition stage, a data splicing stage, a feature extraction stage, and a model training stage. Of course, automatic machine learning training may also be divided into other stages according to different business scenarios; this embodiment does not limit the specific number of stages or the specific manner of division.
As an example, the following method may be adopted in the model training phase: obtaining candidate model configurations sampled during automatic machine learning for a target dataset, wherein one candidate model configuration comprises a determined machine learning algorithm and a set of hyper-parameters; for each of at least some of the acquired candidate model configurations, obtaining a corrected evaluation value corresponding to that candidate model configuration by: evaluating the candidate model configuration by a first fidelity evaluation method to obtain a first evaluation value, predicting the difference between the first evaluation value and a second evaluation value of the candidate model configuration by an evaluation value residual predictor, and correcting the first evaluation value by the difference to obtain the corrected evaluation value, wherein the second evaluation value refers to the evaluation value obtained when the candidate model configuration is evaluated by a second fidelity evaluation method, the fidelity of the second fidelity evaluation method being higher than that of the first fidelity evaluation method; and selecting one candidate model configuration for the target dataset based on the evaluation value corresponding to each candidate model configuration.
Optionally, before the prediction is performed by using the evaluation value residual predictor, the method further includes a step of initializing the evaluation value residual predictor, and the step includes: obtaining at least one candidate model configuration randomly sampled when performing automatic machine learning for a target dataset; evaluating each candidate model configuration of the at least one candidate model configuration by using a first fidelity evaluation method and a second fidelity evaluation method respectively to obtain a first evaluation value and a second evaluation value corresponding to each candidate model configuration; and constructing a training sample for training the evaluation value residual predictor by taking each candidate model configuration in the at least one candidate model configuration as sample data and taking a difference value between a second evaluation value and a first evaluation value of the candidate model configuration as a mark of the sample data, and training the evaluation value residual predictor based on the constructed training sample.
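In other words, the corrected evaluation value is the cheap first evaluation plus a predicted residual toward the expensive second evaluation, with the residual predictor initialized from a few configurations evaluated at both fidelities. The sketch below substitutes a trivial constant (mean-residual) predictor and made-up evaluation functions for the real ones, purely to show the arithmetic:

```python
# Illustrative sketch: corrected_eval = first_eval + predicted_residual,
# where the residual predictor is initialized from configs evaluated at
# both fidelities. A constant (mean) predictor stands in for a real model;
# the evaluation functions and numbers are invented for demonstration.

def low_fidelity_eval(config):   # cheap, e.g. few training epochs
    return config["lr"] * 10.0

def high_fidelity_eval(config):  # expensive, e.g. full training
    return config["lr"] * 10.0 + 0.5

# Initialization: evaluate a few randomly sampled configs at both fidelities
# and "train" the residual predictor on (config, second_eval - first_eval).
init_configs = [{"lr": 0.1}, {"lr": 0.2}, {"lr": 0.3}]
residuals = [high_fidelity_eval(c) - low_fidelity_eval(c) for c in init_configs]
mean_residual = sum(residuals) / len(residuals)

def predict_residual(config):
    """Stand-in for the evaluation value residual predictor."""
    return mean_residual

def corrected_eval(config):
    first = low_fidelity_eval(config)
    return first + predict_residual(config)

new_config = {"lr": 0.25}
print(corrected_eval(new_config))  # approximates high_fidelity_eval cheaply
```

The gain is that only the small initialization set pays the high-fidelity cost; every subsequently sampled configuration is scored at low-fidelity cost but closer to high-fidelity accuracy.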
Optionally, the obtaining candidate model configurations sampled when performing automatic machine learning for the target data set, and for each of at least some of the obtained candidate model configurations, the obtaining of the corrected evaluation value corresponding to the candidate model configuration includes: each time a candidate model configuration is sampled, a first fidelity evaluation method is used for evaluating the current candidate model configuration to obtain a first evaluation value, an evaluation value residual predictor is used for predicting a difference value between the first evaluation value and a second evaluation value of the current candidate model configuration, and the difference value is used for correcting the first evaluation value to obtain a corrected evaluation value corresponding to the current candidate model configuration; the new candidate model configuration is sampled based on at least the modified evaluation value corresponding to the current candidate model configuration.
Optionally, the method further comprises: selecting one candidate model configuration from the candidate model configurations with the first preset number after acquiring the candidate model configurations with the first preset number and the corresponding corrected evaluation values; evaluating the selected candidate model configuration by using a second fidelity evaluation method to obtain a second evaluation value of the candidate model, taking the candidate model configuration as sample data, taking a difference value between the second evaluation value and the first evaluation value of the candidate model configuration as a mark of the sample data, and constructing a new training sample for training the evaluation value residual error predictor; training the evaluation value residual error predictor at least based on the constructed new training sample to obtain an updated evaluation value residual error predictor; predicting a difference value between a second evaluation value and a first evaluation value corresponding to the candidate model configuration sampled subsequently by using the updated evaluation value residual error predictor, and further obtaining a corresponding corrected evaluation value; and stopping sampling new candidate model configuration after the evaluation value residual error predictor is updated for the second preset value time.
Optionally, in the method, the selecting one candidate model configuration from the first preset number of candidate model configurations comprises: selecting, from the first preset number of candidate model configurations, the candidate model configuration with the highest/lowest corrected evaluation value; and the selecting one candidate model configuration for the target dataset based on the evaluation values corresponding to the candidate model configurations comprises: selecting, from the candidate model configurations that have been evaluated by the second fidelity evaluation method and have corresponding second evaluation values, the candidate model configuration with the highest/lowest second evaluation value.
Optionally, the method further comprises: initializing the merit value residual predictor prior to prediction with the merit value residual predictor, wherein the merit value residual predictor comprises a plurality of sub-predictors, wherein initializing the merit value residual predictor comprises: obtaining at least one candidate model configuration randomly sampled when performing automatic machine learning for a target dataset; evaluating each candidate model configuration of the at least one candidate model configuration by using a first fidelity evaluation method to obtain a first evaluation value corresponding to each candidate model configuration; for each sub-predictor, it is trained by: taking each candidate model configuration in the at least one candidate model configuration as sample data, taking a difference value between a third evaluation value of the candidate model configuration, which is obtained by evaluating the candidate model configuration by using a third fidelity evaluation method corresponding to the sub-predictor, and a first evaluation value of the candidate model configuration as a mark of the sample data, constructing a first training sample for training the sub-predictor, and training the sub-predictor based on the constructed first training sample, wherein the fidelity of the third fidelity evaluation method corresponding to each sub-predictor is different from that of the first fidelity evaluation method, and the fidelity of the third fidelity evaluation method is between that of the first fidelity evaluation method and that of the second fidelity evaluation method; setting respective weights of the plurality of sub predictors.
Optionally, the obtaining candidate model configurations sampled when performing automatic machine learning for the target data set, and for each of at least some of the obtained candidate model configurations, the obtaining of the corrected evaluation value corresponding to the candidate model configuration includes: each time one candidate model configuration is sampled, a corrected evaluation value corresponding to the current candidate model configuration is obtained by: evaluating the current candidate model configuration by using a first fidelity evaluation method to obtain a first evaluation value; predicting a difference value between a first evaluation value and a third evaluation value of the current candidate model configuration by using each of the plurality of sub-predictors, wherein the third evaluation value refers to an evaluation value which is obtained when the candidate model configuration is evaluated by using a third fidelity evaluation method corresponding to each sub-predictor, and the fidelity of the third fidelity evaluation method is between that of the first fidelity evaluation method and that of the second fidelity evaluation method; correcting the first evaluation value by multiplying the difference predicted by each sub-predictor by the weight of each sub-predictor and summing up the obtained results to obtain a corrected evaluation value corresponding to the current candidate model configuration; the new candidate model configuration is sampled based on at least the modified evaluation value corresponding to the current candidate model configuration.
Optionally, the method further comprises: selecting one candidate model configuration from the candidate model configurations with the first preset number after acquiring the candidate model configurations with the first preset number and the corresponding corrected evaluation values; evaluating the selected candidate model configuration by using a second fidelity evaluation method to obtain a second evaluation value of the candidate model configuration, taking a feature vector consisting of the difference values predicted by each sub-predictor as sample data, taking the second evaluation value and the first evaluation value of the candidate model configuration as marks of the sample data, and constructing a training sample for training a linear regression model; training the linear regression model at least based on the constructed training samples to obtain updated weights of all sub predictors, and then updating the evaluation value residual error predictor; performing prediction on the candidate model configuration of the subsequent sampling by using each sub-predictor, and correcting a first evaluation value by multiplying the prediction result of each sub-predictor by the updated weight of each sub-predictor and summing the obtained results so as to obtain a corresponding corrected evaluation value; and stopping sampling new candidate model configuration after the evaluation value residual error predictor is updated for the second preset value time.
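With several sub-predictors at intermediate fidelities, the correction becomes a weighted sum of their predicted differences added to the first evaluation value. The numbers and weights below are invented solely to illustrate that arithmetic; in the method above, the weights would come from the trained linear regression model:

```python
# Illustrative: corrected = first_eval + sum(weight_i * predicted_diff_i)
# over the sub-predictors. The weights would be fitted by linear regression
# on (per-sub-predictor diffs -> second_eval - first_eval) samples; here
# all values are made up to show the computation.

first_eval = 0.70                      # low-fidelity evaluation value
sub_predictions = [0.08, 0.05, 0.02]   # each sub-predictor's predicted diff
weights = [0.5, 0.3, 0.2]              # learned weight per sub-predictor

correction = sum(w * d for w, d in zip(weights, sub_predictions))
corrected_eval = first_eval + correction
print(round(corrected_eval, 4))
```

Each sub-predictor is anchored to a different intermediate fidelity, so the weighted sum lets cheaper, noisier predictors contribute without dominating the correction.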
In this embodiment, for example, the system 7000 for executing the automatic machine learning scheme may include a scheme editor 7100, and the automatic machine learning training may be divided into a data splicing stage, a feature extraction stage, and a model training stage by the scheme editor 7100.
After each stage included in the automatic machine learning training is set in step S2100, at least one policy may be set for each stage through at least one configuration interface in combination with subsequent steps, and according to the execution sequence of each stage, a plurality of workflows formed by serially connecting a policy selected from each stage are executed in parallel to obtain an execution result corresponding to each workflow, and then according to the execution result, a workflow with the best effect is selected as a final machine learning scheme.
After setting the stages included in the automatic machine learning training, the method proceeds to:
step S2200 is to provide at least one configuration interface for each stage.
The configuration interface may be, for example, an input box, a drop-down list, a voice input, etc., e.g., a system operator may input at least one policy for a corresponding phase through at least one input box provided; for another example, the system operator may select at least one policy for the corresponding stage via a provided drop-down list; as another example, the system operator may voice-input at least one policy for a corresponding phase.
In this embodiment, the scheme editor 7100 may provide at least one configuration interface for each phase, for example, the scheme editor 7100 provides at least one configuration interface for the data splicing phase, the feature extraction phase, and the model training phase.
After at least one configuration interface is provided for each stage through step S2200, at least one policy may be set for each stage through at least one configuration interface in combination with the subsequent steps, and a plurality of workflows formed by serially connecting one policy selected from each stage are executed in parallel according to the execution sequence of each stage to obtain the execution result corresponding to each workflow, and then a workflow with the best effect is selected as the final machine learning scheme according to the execution result.
After providing at least one configuration interface for each phase, entering:
step S2300, obtaining at least one policy of a corresponding stage configured through at least one configuration interface.
In this embodiment, the scheme editor 7100 may configure at least one policy for each stage through at least one configuration interface, for example, the scheme editor 7100 configures at least one policy included in the corresponding data splicing stage through at least one configuration interface corresponding to the data splicing stage, configures at least one policy corresponding to the feature extraction stage through at least one configuration interface corresponding to the feature extraction stage, and configures at least one policy corresponding to the model training stage through at least one configuration interface corresponding to the model training stage.
In this embodiment, for each policy, different generation manners are provided for generating the policy. The generation manner may be, for example, at least one of directed acyclic graph generation, scripting language generation, and programming language generation.
The above scripting language generation includes at least one of custom scripting language generation and mainstream scripting language generation. The mainstream scripting language may be, for example, Perl, Python, or Ruby. The custom scripting language may be, for example, a language implemented in Java, C++, or C.
Taking the data splicing stage as an example, at least two configuration interfaces are respectively provided for it through step S2200, where one configuration interface is used for configuring the data splicing stage to include an expert policy, and the other configuration interface is used for configuring the data splicing stage to include an automatic splicing policy; both the expert policy and the automatic splicing policy may be generated by at least one of custom scripting language generation and mainstream scripting language generation.
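A hypothetical sketch of how such a configuration interface might accept a policy supplied as a scripting-language snippet; the registry, the `configure_policy` helper, and the `splice` entry point are all assumptions for illustration, not APIs defined by the patent:

```python
stage_policies = {"data_splicing": {}}

def configure_policy(stage, name, script):
    # compile the submitted script and register its `splice` entry point
    namespace = {}
    exec(script, namespace)
    stage_policies[stage][name] = namespace["splice"]

EXPERT_SCRIPT = '''
def splice(tables):
    # expert policy: join the tables in the hand-picked order given
    return "|".join(tables)
'''

AUTO_SCRIPT = '''
def splice(tables):
    # automatic splicing policy: join the tables in a canonical sorted order
    return "|".join(sorted(tables))
'''

configure_policy("data_splicing", "expert", EXPERT_SCRIPT)
configure_policy("data_splicing", "auto_splice", AUTO_SCRIPT)
```

Here Python itself stands in for the "mainstream scripting language"; a custom scripting language would need its own interpreter in place of `exec`.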
After at least one policy is set for each stage through at least one configuration interface in step S2300, a plurality of workflows formed by serially connecting one policy selected from each stage may be executed in parallel according to the execution sequence of each stage in combination with the subsequent steps, so as to obtain the execution result corresponding to each workflow, and then according to the execution result, a workflow with the best effect is selected as the final machine learning scheme.
After acquiring at least one policy of a corresponding phase configured by at least one configuration interface, entering:
step S2400 is to execute a plurality of workflows formed by serially connecting one policy selected from each stage in parallel according to the execution sequence of each stage, and obtain an execution result corresponding to each workflow.
In this embodiment, each strategy of the previous stage is connected in series with each strategy of the next stage.
In this embodiment, for example, the scheme executor 7200 may execute a plurality of workflows formed by serially connecting one policy selected from each stage in parallel according to the execution order of each stage, and may obtain an execution result corresponding to each workflow.
Referring to fig. 3a and 3b, the automatic machine learning training includes a data splicing stage, a feature extraction stage, and a model training stage. The data splicing stage is provided with two different strategies, namely an expert strategy and an automatic splicing strategy, where the expert strategy may splice data table 1, data table 2 and data table 3 into spliced table A, and the automatic splicing strategy may splice data table 1, data table 2 and data table 3 into spliced table B; that is, spliced table A and spliced table B are obtained after the data splicing stage is executed. The feature extraction stage may be provided with two different strategies, an expert feature strategy and an automatic feature strategy, where the expert feature strategy may perform feature extraction on spliced table A and spliced table B respectively to generate a first training sample and a second training sample, and the automatic feature strategy may perform automatic feature extraction on spliced table A and spliced table B respectively to generate a third training sample and a fourth training sample; that is, four training samples are obtained after the feature extraction stage is executed. In the model training stage, only one automatic parameter adjusting strategy may be set, and the automatic parameter adjusting strategy performs automatic parameter adjustment on the four training samples respectively (in combination with a preset model training algorithm), thereby obtaining the machine learning models corresponding to the four training samples. It can be understood that in fig. 3a and 3b, the data splicing stage is provided with two strategies, the feature extraction stage with two strategies, and the model training stage with one strategy; the five strategies of the three stages in total are connected in series to form four workflows.
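The four workflows of this example can be sketched as the Cartesian product of the per-stage strategies, with every combination executed in parallel and the stages chained in their fixed order; the strategy names, toy strategy bodies, and AUC numbers below are illustrative assumptions, not values from the patent:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

STAGES = [
    # data splicing stage: expert strategy -> spliced table A, auto -> table B
    {"expert_splice": lambda _: "table_A", "auto_splice": lambda _: "table_B"},
    # feature extraction stage: two strategies, each yielding a training sample
    {"expert_feat": lambda t: t + "/expert", "auto_feat": lambda t: t + "/auto"},
    # model training stage: one auto-tuning strategy, returning a toy AUC
    {"auto_tune": lambda s: {"table_A/expert": 0.71, "table_A/auto": 0.74,
                             "table_B/expert": 0.69, "table_B/auto": 0.77}[s]},
]

def run_workflow(picks):
    result = "data tables 1-3"          # placeholder for the imported tables
    for stage, name in zip(STAGES, picks):
        result = stage[name](result)    # stages execute in their fixed order
    return picks, result                # final result: the model's AUC

# serially connecting one strategy per stage yields 2 * 2 * 1 = 4 workflows
workflows = list(product(*(stage.keys() for stage in STAGES)))
with ThreadPoolExecutor() as pool:      # workflows run in parallel
    results = list(pool.map(run_workflow, workflows))
best = max(results, key=lambda r: r[1]) # highest AUC becomes the final scheme
```

This mirrors step S2400 followed by step S2500: run every serial combination, then keep the best-performing workflow.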
In step S2400, a plurality of workflows formed by serially connecting one policy selected from each stage is executed in parallel according to the execution sequence of each stage, and after the execution result corresponding to each workflow is obtained, a workflow with the best effect can be selected as the final machine learning scheme according to the execution result.
According to the execution sequence of each stage, executing a plurality of workflows formed by connecting a strategy selected from each stage in series in parallel, and after obtaining the execution result corresponding to each workflow, entering:
Step S2500, selecting a workflow as the final machine learning scheme according to the execution results.
In this embodiment, for example, the scheme executor 7200 may select a workflow as the final machine learning scheme according to the execution results.
In this embodiment, for example, the workflow corresponding to the machine learning model having the highest AUC may be selected as the final machine learning scheme.
Continuing with the above example, as shown in fig. 3b, for example, after the machine learning models corresponding to the four workflows are obtained through the above step S2400, the workflow corresponding to the machine learning model with the highest AUC may be selected as the final machine learning scheme.
According to the method of the present embodiment, the automatic machine learning training process is divided into stages, and at least one configuration interface is provided for each stage, so that the user can configure at least one strategy of the corresponding stage through the at least one configuration interface. According to the execution sequence of the stages, a plurality of workflows formed by serially connecting one strategy selected from each stage are executed in parallel to obtain the execution result corresponding to each workflow, and a workflow is selected as the final machine learning scheme according to the execution results, where each strategy of the previous stage is respectively connected in series with each strategy of the next stage. The system thus supports running multiple strategies of multiple stages simultaneously and selects the best strategy combination to generate a complete automatic machine learning scheme, thereby improving the modeling effect and performance of automatic modeling in various service scenarios.
In one embodiment, the method for performing an automatic machine learning scheme of the present invention may further comprise:
according to the execution sequence of each stage, in the process of executing a plurality of workflows formed by serially connecting a strategy selected from each stage in parallel, a judgment step is introduced after at least one stage, so that only the optimal effect of the current stage is continued to the next stage.
The judgment step may be, for example, at least one of a pruning step, a deleting step, and a merging step.
In this embodiment, for example, the scheme executor 7200 may introduce a judgment step after at least one stage during the parallel execution of the plurality of workflows in which one policy selected for each stage is connected in series according to the execution order of each stage.
Referring to fig. 4a and 4b, the automatic machine learning training includes a data splicing stage, a feature extraction stage, and a model training stage. The data splicing stage is provided with two different strategies, namely an expert strategy and an automatic splicing strategy, where the expert strategy may splice data table 1, data table 2 and data table 3 into spliced table A, and the automatic splicing strategy may splice data table 1, data table 2 and data table 3 into spliced table B; that is, spliced table A and spliced table B are obtained after the data splicing stage is executed. It should be noted that in this example, after the data splicing stage is completed, the merging step shown in fig. 4a and 4b may be introduced to merge spliced table A and spliced table B into one spliced table. The feature extraction stage may be provided with two different strategies, namely an expert feature strategy and an automatic feature strategy, where the expert feature strategy may perform feature extraction on the merged spliced table to generate a first training sample, and the automatic feature strategy may perform automatic feature extraction on the merged spliced table to generate a second training sample; that is, two training samples are obtained after the feature extraction stage is executed. In the model training stage, only one automatic parameter adjusting strategy may be set, and the automatic parameter adjusting strategy performs automatic parameter adjustment on the two training samples respectively (in combination with a preset model training algorithm), thereby obtaining the machine learning models corresponding to the two training samples. It can be understood that in fig. 4b, two strategies are provided in the data splicing stage, a merging step (a judgment step) is introduced after the data splicing stage, two strategies are provided in the feature extraction stage, and one strategy is provided in the model training stage; the five strategies of the three stages are connected in series to form two workflows, that is, the machine learning models corresponding to the two training samples are the machine learning models corresponding to the two workflows, and the workflow corresponding to the machine learning model with the highest AUC is selected as the final machine learning scheme.
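A toy sketch of two possible judgment steps inserted after the data splicing stage, under the assumption that tables are simple lists of rows (the function names and the scoring rule are illustrative, not from the patent): a merging step fuses the outputs of all strategies into one, and a pruning step keeps only the best-scoring output, so either way a single result continues to the next stage and the downstream workflow count shrinks:

```python
def merge_step(stage_outputs):
    # merging judgment step: union of the rows produced by each strategy
    merged = []
    for rows in stage_outputs:
        for row in rows:
            if row not in merged:
                merged.append(row)
    return [merged]            # one merged output continues to the next stage

def prune_step(stage_outputs, score):
    # pruning judgment step: keep only the best-scoring output
    return [max(stage_outputs, key=score)]

# two spliced tables produced by the expert and automatic splicing strategies
table_a = [("u1", 3), ("u2", 5)]
table_b = [("u2", 5), ("u3", 1)]
merged_outputs = merge_step([table_a, table_b])
```

With the merge in place, the feature extraction strategies each consume the single merged table, giving the two workflows of fig. 4b instead of the four of fig. 3b.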
In this embodiment, the method for executing an automatic machine learning scheme of the present invention may further include:
For the judgment step, a configuration interface is provided so that a user can configure the judgment step through the configuration interface.
The judgment step may be generated in different generation manners, for example, at least one of directed acyclic graph generation, scripting language generation, and programming language generation, where the scripting language generation includes at least one of custom scripting language generation and mainstream scripting language generation. The custom scripting language and the mainstream scripting language have been described in detail in the above embodiments and are not described again here.
The configuration interface may be, for example, an input box, a drop-down list, a voice input, etc.; for example, a system operator may input the judgment step through an input box, select the judgment step from a drop-down list, or input the judgment step by voice.
According to the method of this embodiment, fusion and comparison among multiple strategies can be realized by introducing the judgment step, so that the effect and efficiency of the explored automatic machine learning scheme are improved through strategy fusion.
In one embodiment, the method for performing an automatic machine learning scheme of the present invention may further comprise:
the execution information of each stage is shown by stage.
In this embodiment, for example, the system 7000 for executing the automatic machine learning scheme may include an information presenter 7300, and the information presenter 7300 presents the execution information of each stage according to the stage.
In this embodiment, the information presenter 7300 may also provide the execution information of each phase on the same interactive interface.
Referring to fig. 5, the interactive interface may include a first portion (middle portion), a second portion (left portion), and a third portion (right portion).
The first part of the interactive interface is used for displaying the overall operation status information and an overall operation graph of the automatic machine learning training process, where the overall operation graph may be a directed acyclic graph (DAG graph), and the directed acyclic graph may display the current progress of each stage.
Specifically, the directed acyclic graph (DAG graph) shown in the middle portion of fig. 6 shows 6 nodes: a "feedback data" node, a "behavior data" node, a "sample generation" node, a "feature engineering" node, an "LR (logistic regression) algorithm" node, and a "GBDT (gradient boosting decision tree) algorithm" node. The "sample generation" node corresponds to the data splicing stage, the "feature engineering" node corresponds to the feature extraction stage, and the "LR (logistic regression) algorithm" and "GBDT (gradient boosting decision tree) algorithm" nodes correspond to the model training stage. Note that fig. 6 shows 2 specific preset algorithms, but this is only an exemplary illustration, and the present invention does not limit the number of preset algorithms or the specific algorithms.
The second portion of the interactive interface is used to display the current operating status of each stage and to display the policy content of each stage (not shown).
The third part of the interactive interface is used for displaying resource occupation and log information of the automatic machine learning training process.
In this embodiment, the execution information of each stage can be uniformly displayed stage by stage, so that system operators can conveniently follow the execution process of the automatic machine learning scheme in real time, improving the user experience.
< System embodiment >
In the present embodiment, a system 7000 for executing an automatic machine learning scheme is provided, as shown in fig. 7a, the system 7000 for executing an automatic machine learning scheme includes a scheme editor 7100 and a scheme executor 7200.
The scenario editor 7100 is configured to set each stage included in the automatic machine learning training, and provide at least one configuration interface for a user to configure at least one policy of a corresponding stage through the at least one configuration interface for each stage.
The scheme executor 7200 is configured to execute a plurality of workflows formed by serially connecting one policy selected from each of the stages in parallel according to the execution sequence of each of the stages, obtain an execution result corresponding to each of the workflows, and select a workflow as a final machine learning scheme according to the execution result; each strategy of the previous stage is respectively connected with each strategy of the next stage in series.
In one embodiment, the stages include a data stitching stage, a feature extraction stage, and a model training stage.
In one embodiment, the data stitching stage is configured to stitch the imported behavioral data and feedback data into training data.
The characteristic extraction stage is used for performing characteristic extraction on the training data to generate training samples; and the number of the first and second groups,
the model training phase is to train a machine learning model based on the training samples using a model training algorithm.
In one embodiment, the scenario editor 7100 is configured to provide different generation manners for each of the policies to generate the policy.
The generation mode comprises at least one of directed acyclic graph generation, script language generation and programming language generation.
Wherein the scripting language generation comprises at least one of custom scripting language generation and mainstream scripting language generation.
In one embodiment, the scheme editor 7100 is configured to provide at least two configuration interfaces for the data splicing phase, respectively; and
one configuration interface is used for configuring the data splicing stage to comprise an expert strategy, and the other configuration interface is used for configuring the data splicing stage to comprise an automatic splicing strategy.
In an embodiment, the scheme executor 7200 is configured to, in accordance with the execution order of the stages, introduce a judgment step after at least one of the stages during the parallel execution of the plurality of workflows formed by serially connecting one policy selected from each of the stages, so that only the optimal result of the current stage continues to the next stage.
In an embodiment, the scheme executor 7200 is configured to provide, for the judgment step, a configuration interface for a user to configure the judgment step through the configuration interface.
In one embodiment, the scheme executor 7200 is configured to provide different generation manners to generate the judgment step.
The generation mode comprises at least one of script language generation and programming language generation.
Wherein the scripting language generation comprises at least one of custom scripting language generation and mainstream scripting language generation.
In one embodiment, as shown in fig. 7b, the system 7000 further comprises an information presenter 7300.
The information presenter 7300 is configured to present execution information of each of the stages according to the stage.
In one embodiment, the information presenter 7300 is used for providing the execution information of the stages on the same interactive interface.
In one embodiment, the first portion of the interactive interface is configured to display overall operational situation information and an overall operational diagram of an automatic machine learning training process; and the whole operation graph is a directed acyclic graph, and the directed acyclic graph displays the current progress of each stage.
The second part of the interactive interface is used for displaying the current running state of each stage.
And the third part of the interactive interface is used for displaying resource occupation and log information of the automatic machine learning training process.
The second portion of the interactive interface is also for displaying the policy content for each of the stages.
< electronic device embodiment >
In this embodiment, an electronic device 1000 is also provided. The electronic device 1000 may be the electronic device shown in fig. 1.
In one aspect, as shown in fig. 8, the electronic device 1000 may include the aforementioned system 7000 for executing an automatic machine learning scheme, for implementing the method for executing an automatic machine learning scheme of any embodiment of the present invention.
In another aspect, as shown in fig. 9, the electronic device 1000 may further include a processor 1100 and a memory 1200, the memory 1200 for storing executable instructions; the processor 1100 is configured to operate the electronic device 1000 to perform a method for performing an automatic machine learning scheme according to any embodiment of the present invention, in accordance with the control of the instructions.
In this embodiment, the electronic device 1000 may be a mobile phone, a tablet computer, a palm computer, a desktop computer, a notebook computer, a workstation, a game console, or the like.
< computer-readable storage Medium >
In this embodiment, there is also provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for executing an automatic machine learning scheme according to any embodiment of the invention.
The present invention may be an apparatus, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. A system for executing an automatic machine learning scheme, comprising:
the scheme editor is used for setting each stage included in automatic machine learning training and providing at least one configuration interface for a user to configure at least one strategy of the corresponding stage through the at least one configuration interface aiming at each stage;
the scheme executor is used for parallelly executing a plurality of workflows formed by serially connecting a strategy selected from each stage according to the execution sequence of each stage to obtain an execution result corresponding to each workflow, and selecting a workflow as a final machine learning scheme according to the execution result; each strategy of the previous stage is respectively connected with each strategy of the next stage in series.
2. The system of claim 1, wherein,
the stages comprise a data splicing stage, a feature extraction stage and a model training stage.
3. The system of claim 2, wherein,
the data splicing stage is configured to splice imported behavior data and feedback data into training data;
the feature extraction stage is configured to perform feature extraction on the training data to generate training samples; and
the model training stage is configured to train a machine learning model based on the training samples using a model training algorithm.
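A minimal sketch of the three-stage pipeline of claims 2-3 (illustrative only; function names, the id-based join, and the trivial "model" are assumptions, not the patent's method):

```python
# Hypothetical three-stage pipeline: splice behavior data with feedback data,
# extract features from the training data, then train a model on the samples.

def splice(behavior, feedback):
    """Data-splicing stage: join behavior records with feedback labels by record id."""
    labels = {rec_id: label for rec_id, label in feedback}
    return [(fields, labels[rec_id]) for rec_id, fields in behavior if rec_id in labels]

def extract_features(training_data):
    """Feature-extraction stage: turn raw string fields into numeric feature vectors."""
    return [([float(v) for v in fields], label) for fields, label in training_data]

def train(samples):
    """Model-training stage: a trivial 'model' - the mean feature vector per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

behavior = [(1, ["2", "4"]), (2, ["6", "8"])]   # (record id, raw fields)
feedback = [(1, "click"), (2, "click")]          # (record id, label)
model = train(extract_features(splice(behavior, feedback)))
print(model)  # {'click': [4.0, 6.0]}
```

Each stage consumes only the previous stage's output, which is what lets the system swap in alternative strategies per stage independently.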
4. The system of claim 1, wherein,
the scheme editor is configured to provide, for each strategy, different generation modes for generating the strategy;
the generation modes comprise at least one of directed acyclic graph generation, scripting language generation and programming language generation;
wherein the scripting language generation comprises at least one of custom scripting language generation and mainstream scripting language generation.
5. The system of claim 2, wherein,
the scheme editor is configured to provide at least two configuration interfaces for the data splicing stage; wherein
one configuration interface is used to configure the data splicing stage to comprise an expert strategy, and another configuration interface is used to configure the data splicing stage to comprise an automatic splicing strategy.
6. The system of claim 1, wherein,
the scheme executor is configured to introduce a decision step after at least one stage while executing in parallel the plurality of workflows formed by serially connecting one strategy selected from each stage according to the execution order of the stages, so that only the strategy with the best result at the current stage continues to the next stage.
7. The system of claim 6, wherein,
the scheme executor is configured to provide a configuration interface for the decision step, so that a user can configure the decision step through the configuration interface.
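The pruning behavior of claims 6-7 could be sketched as follows (illustrative; the scoring function and greedy stage-by-stage pruning are assumptions made for the example):

```python
# Sketch of the decision (judgment) step: after all candidate strategies of a
# stage run, only the best intermediate result is carried to the next stage,
# so downstream stages branch from a single survivor instead of every combination.

def run_with_decision(stages, evaluate, seed):
    """stages: list of strategy lists; evaluate scores an intermediate result."""
    current = seed
    for strategies in stages:
        candidates = [strategy(current) for strategy in strategies]
        current = max(candidates, key=evaluate)   # decision step: prune to best
    return current

stages = [
    [lambda x: x + 1, lambda x: x + 5],   # stage 1: keeps 0 + 5 = 5
    [lambda x: x * 2, lambda x: x * 4],   # stage 2: keeps 5 * 4 = 20
]
result = run_with_decision(stages, evaluate=lambda r: r, seed=0)
print(result)  # 20
```

This reduces the number of executions from k^n (full cross-product) to roughly k * n, at the cost of a greedy search that may miss a globally better combination.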
8. A method for executing an automatic machine learning scheme, comprising:
setting the stages included in automatic machine learning training;
providing at least one configuration interface for each stage;
obtaining at least one strategy of the corresponding stage configured through the at least one configuration interface;
executing in parallel, according to the execution order of the stages, a plurality of workflows each formed by serially connecting one strategy selected from each stage, to obtain an execution result corresponding to each workflow, wherein each strategy of a previous stage is serially connected with each strategy of a next stage; and
selecting one workflow as a final machine learning scheme according to the execution results.
9. A computer-readable storage medium storing a computer program which, when executed by a processor, carries out the method of claim 8.
10. An electronic device, comprising:
the system for executing an automatic machine learning scheme of claim 1; or,
a processor and a memory for storing instructions for controlling the processor to perform the method of claim 8.
CN201910876293.2A 2019-09-17 2019-09-17 System, method and electronic device for executing automatic machine learning scheme Pending CN110728371A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910876293.2A CN110728371A (en) 2019-09-17 2019-09-17 System, method and electronic device for executing automatic machine learning scheme
PCT/CN2020/115913 WO2021052422A1 (en) 2019-09-17 2020-09-17 System and method for executing automated machine learning solution, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910876293.2A CN110728371A (en) 2019-09-17 2019-09-17 System, method and electronic device for executing automatic machine learning scheme

Publications (1)

Publication Number Publication Date
CN110728371A true CN110728371A (en) 2020-01-24

Family

ID=69219106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910876293.2A Pending CN110728371A (en) 2019-09-17 2019-09-17 System, method and electronic device for executing automatic machine learning scheme

Country Status (2)

Country Link
CN (1) CN110728371A (en)
WO (1) WO2021052422A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107844837A * 2017-10-31 2018-03-27 4Paradigm (Beijing) Technology Co Ltd Method and system for algorithm parameter tuning of a machine learning algorithm
CN108710949A * 2018-04-26 2018-10-26 4Paradigm (Beijing) Technology Co Ltd Method and system for creating a machine learning modeling template

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9939792B2 (en) * 2014-12-30 2018-04-10 Futurewei Technologies, Inc. Systems and methods to adaptively select execution modes
US20180373986A1 (en) * 2017-06-26 2018-12-27 QbitLogic, Inc. Machine learning using dynamic multilayer perceptrons
CN108897587B (en) * 2018-06-22 2021-11-12 北京优特捷信息技术有限公司 Pluggable machine learning algorithm operation method and device and readable storage medium
CN109241139B (en) * 2018-08-31 2023-05-26 联想(北京)有限公司 Data processing method, logic model system and data processing system
CN109376419B (en) * 2018-10-16 2023-12-22 北京字节跳动网络技术有限公司 Data model generation method and device, electronic equipment and readable medium
CN110728371A (en) * 2019-09-17 2020-01-24 第四范式(北京)技术有限公司 System, method and electronic device for executing automatic machine learning scheme

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021052422A1 (en) * 2019-09-17 2021-03-25 第四范式(北京)技术有限公司 System and method for executing automated machine learning solution, and electronic apparatus
CN112558938A (en) * 2020-12-16 2021-03-26 中国科学院空天信息创新研究院 Machine learning workflow scheduling method and system based on directed acyclic graph
CN112558938B (en) * 2020-12-16 2021-11-09 中国科学院空天信息创新研究院 Machine learning workflow scheduling method and system based on directed acyclic graph
CN113033816A (en) * 2021-03-08 2021-06-25 北京沃东天骏信息技术有限公司 Processing method and device of machine learning model, storage medium and electronic equipment

Also Published As

Publication number Publication date
WO2021052422A1 (en) 2021-03-25

Similar Documents

Publication Publication Date Title
US10296830B2 (en) Dynamic topic guidance in the context of multi-round conversation
US11710300B2 (en) Computing systems with modularized infrastructure for training generative adversarial networks
WO2021052422A1 (en) System and method for executing automated machine learning solution, and electronic apparatus
CN110807515A (en) Model generation method and device
US9424006B2 (en) Execution optimization of mobile applications
US8533691B2 (en) Managing non-common features for program code translation
US11556860B2 (en) Continuous learning system for models without pipelines
CN108710949A Method and system for creating a machine learning modeling template
EP3401803A1 (en) Interaction scenario display control program and information processing apparatus
US8621442B2 (en) Quicker translation of a computer program source code
CN111931057A (en) Sequence recommendation method and system for self-adaptive output
US10628287B2 (en) Identification and handling of nested breakpoints during debug session
CN111506575A (en) Method, device and system for training branch point traffic prediction model
CN107203425A (en) Switching method, equipment and the electronic equipment gently applied
Jalilova et al. Development and analysis of logical scenario design invirtual reality laboratories for higher education institutions
US20170213181A1 (en) Automatic solution to a scheduling problem
US20200184261A1 (en) Collaborative deep learning model authoring tool
US10289788B1 (en) System and method for suggesting components associated with an electronic design
CN113366510A (en) Performing multi-objective tasks via trained raw network and dual network
US20220101177A1 (en) Optimizing a machine for solving offline optimization problems
CN114548407A (en) Hierarchical target oriented cause and effect discovery method and device and electronic equipment
CN117203680A (en) Adaptive selection of data modalities for efficient video recognition
CN114077664A (en) Text processing method, device and equipment in machine learning platform
CN111626401A (en) Operation method and device
CN112948555A (en) Man-machine interaction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination