CN110659741A - AI model training system and method based on piece-splitting automatic learning - Google Patents
AI model training system and method based on piece-splitting automatic learning
- Publication number
- CN110659741A (application number CN201910827111.2A)
- Authority
- CN
- China
- Prior art keywords
- training
- model
- task
- model training
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
Abstract
The invention provides an AI model training system and method based on sharded ("fragment-type") automatic learning. The system comprises a code development module, a multi-scenario training module, a training task module, a training resource module, a training orchestration module, an execution engine, visual training process monitoring, and a visual training task monitoring portal. The method comprises the following steps: automatic hyperparameter tuning, Bayesian automatic parameter tuning based on a sharded computing engine, and pipelined AI model training. The invention provides a multi-scenario, extensible, visual, controllable, automated, and monitorable method and system for AI model training, which effectively lowers the threshold of AI model training, improves the usability and convenience of model training, and provides support for large-scale use of AI models.
Description
Technical Field
The invention belongs to the technical field of big data processing, and particularly relates to an AI model training system and method based on fragment type automatic learning.
Background Art
Artificial Intelligence (AI) technology is the new engine driving the next wave of the internet. Having passed through the stage of rapid mobile-internet development, the information technology field now faces difficulties such as a lack of innovation and increasingly fierce competition; business-model innovation driven by technology development has faded, industry growth has hit a ceiling, and a new round of technological revolution is urgently needed to drive a comprehensive upgrade of business models. As the most fundamental enabling technology in the era of the Internet of Everything, artificial intelligence can permeate all industries, help traditional industries achieve leapfrog upgrades, reshape every industry, and become the new engine of the next disruptive internet wave.
After years of rapid development, telecom operators have accumulated large amounts of data, including structured data such as industry-wide statistics, user interaction records, user consumption data, and device logs, as well as unstructured data such as text, audio, video, and images. The telecommunications industry can no longer sustain rapid growth from the demographic-dividend model and is turning its attention to the traffic and data dividends. Externally, AI can effectively improve operators' customer service and marketing effectiveness and broaden their service types and coverage; internally, AI can help operators advance network virtualization and cloud technologies, improving automation and reducing capital and operating expenses.
Although applying AI can help telecom operators reduce costs and improve efficiency, the lack of an efficient, convenient, and easy-to-use AI model training method and system causes the following problems in practice:
1) Low utilization of training resources. During training, practitioners can only run jobs on the existing physical hardware (CPU, GPU, memory, etc.) and cannot scale dynamically according to resource usage, so training efficiency is low; when no training is running, resources already allocated to trainers sit idle and cannot be fully utilized.
2) Inflexible use of training algorithms. Trainers must install different machine learning frameworks before training, tune parameters by experience during training, and manually package and publish models afterwards; frameworks cannot be freely selected, and parameter tuning, optimization, evaluation, and publishing cannot be completed automatically.
3) Unmanageable training process. Writing training scripts, starting and stopping training tasks, monitoring training-process logs, and inspecting training output parameters must all be done on the training server via the command line, which imposes a high barrier on model trainers.
Disclosure of Invention
To address these technical problems, the invention discloses an AI model training system and method based on sharded automatic learning, and provides a multi-scenario, extensible, visual, controllable, automated, and monitorable method and system for AI model training, thereby effectively lowering the threshold of AI model training, improving the usability and convenience of model training, and supporting large-scale use of AI models.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
An AI model training system based on sharded automatic learning comprises a code development module, a multi-scenario training module, a training task module, a training resource module, a training orchestration module, an execution engine, visual training process monitoring, and a visual training task monitoring portal.
An AI model training method based on sharded automatic learning comprises the following steps:
s1, automatic hyperparameter tuning;
s2, Bayesian automatic parameter tuning based on the sharded computing engine;
s3, pipelined AI model training.
A further improvement is that the automatic hyperparameter tuning in step S1 uses a Bayesian optimization algorithm with Spearmint (Gaussian-process surrogate), SMAC (random-forest regression), or Hyperopt (Tree Parzen Estimator, TPE) as the surrogate model, and iteratively finds the optimal hyperparameters of the objective function through the following five sub-steps:
s11: establishing the objective (prior) function of the surrogate model;
s12: finding the hyperparameters that perform best on the surrogate model (judged by the Expected Improvement (EI) value obtained from an acquisition function);
s13: applying the best hyperparameters found to the true objective function;
s14: updating the surrogate model with the new result;
s15: repeating steps S12 to S14 until the maximum number of iterations or the time limit is reached.
The automatic hyperparameter tuning algorithm builds up knowledge of the relationship between hyperparameter settings and model performance; in the process of searching for the optimal hyperparameters, the algorithm continually exploits this knowledge to select the next group of hyperparameters, so that the number of trials needed to find the optimum is reduced as much as possible.
In a further improvement, the Bayesian automatic parameter tuning based on the sharded computing engine in step S2 comprises the following sub-steps:
s21: after an AI model training task is submitted at the Master (web front end), the sharded computing engine creates a Driver service and one or more Call Node services for the task;
s22: the Driver shards and distributes tasks through a scheduling algorithm, executes the task shards in cooperation with each Call Node, collects the per-shard result models uploaded by the Call Nodes, compares and evaluates them, and finally returns the optimal model;
s23: each Call Node service receives and executes the task shards distributed by the Driver and returns the result model of each shard.
The Driver shards a task by generating hyperparameter search groups based on a seed tree and producing inherited and derived shards according to randomness and the network.
Automatic hyperparameter optimization based on traditional Bayesian optimization generally has to evaluate a large number of hyperparameter-combination surrogate models; if the quality of each combination is assessed serially on a single model, the efficiency of exploring the optimal combination suffers. To improve the efficiency with which the Bayesian automatic tuning algorithm searches hyperparameter combinations, the sharded computing engine constructed by the invention effectively accelerates the search.
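The Driver / Call Node division of labour in S21 to S23 can be illustrated with a thread pool standing in for the worker nodes. The scoring function and parameter grid below are hypothetical; the point is the shard-out, collect, compare-and-return-best pattern:

```python
from concurrent.futures import ThreadPoolExecutor

def train_shard(params):
    """Stand-in for one Call Node training a model for one hyperparameter
    combination (hypothetical scoring function, not from the patent)."""
    eta, depth = params["eta"], params["max_depth"]
    score = 1.0 - abs(eta - 0.3) - abs(depth - 6) * 0.05  # toy "accuracy"
    return params, score

def driver(search_group, n_nodes=4):
    """Driver role (S21-S23): shard the search group across worker nodes,
    collect the per-shard result models, and return the best one."""
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        results = list(pool.map(train_shard, search_group))
    return max(results, key=lambda r: r[1])

# One shard per hyperparameter combination, as in the automatic-training flow.
group = [{"eta": e, "max_depth": d} for e in (0.1, 0.3, 0.5) for d in (4, 6, 8)]
best_params, best_score = driver(group)
print(best_params)  # -> {'eta': 0.3, 'max_depth': 6}
```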
In a further improvement, the pipelined AI model training in step S3 comprises the following steps:
s31: developing the training script;
s32: creating the training task;
s33: setting the hyperparameter search;
s34: associating the labeled samples;
s35: setting the resource allocation;
s36: training the model;
s37: generating the optimal model;
s38: publishing the model.
The invention discloses a sharded automatic learning and training system that integrates the automatic hyperparameter tuning algorithm, Bayesian automatic tuning based on the sharded computing engine, and the pipelined AI model training method. It solves the problems of low training-resource utilization, inflexible algorithm use, and unfriendly process management in AI model training for government, enterprise, and other clients, and provides the following seven main functions:
1) Multi-scenario training support: supports model training for both computer vision and structured data, and for both deep learning and traditional machine learning.
2) Training task management: presents distribution views of code development, model training, and model versions of training tasks as dashboards and data tables, including monitoring of indicators such as the total number of training tasks and their completion status; clicking a summary indicator shows the detail list.
3) Training resource management: provides application for and allocation of virtualized cloud-platform training resources; the CPU cores, GPUs, video memory, and other resources a training task requires are requested and used on demand according to preset virtualization specifications.
4) Visual training orchestration: supports arranging the model training flow by drag-and-drop in a visual interface, custom training task types, free choice among machine learning frameworks such as TensorFlow, Keras, PyTorch, and Ali, and also a CLI command-line mode.
5) Automatic model training: automatically generates multiple hyperparameter combinations, splits the exploration task into subtasks (one per combination), distributes them to multiple computing nodes, selects the model with the best combination using the automatic tuning algorithm, and fully automatically completes tuning, selection, evaluation, publishing, and the rest of the modeling process.
6) Intelligent parallel training: based on the sharded intelligent parallel training method, the platform loads training tasks into multiple machine learning engines according to the currently available CPU, GPU, and memory resources and the number of hyperparameter combinations of the algorithm, achieving parallel training, and finally integrates and outputs the training results of each combination.
7) Training process control: the system collects the sequential task log of each training batch in real time and, combined with external requirements, provides control over the training process, including starting and stopping whole tasks and individual shards.
Compared with traditional AI model training, the method has the following advantages. It supports dynamic extension of machine learning frameworks: as machine learning technology develops, more and better frameworks will appear, and traditional AI model training must be continually adjusted to fit the underlying framework. In this scheme, frameworks such as TensorFlow, Keras, PyTorch, Caffe, MXNet, and Ali are preset, and new frameworks can be loaded dynamically as the technology develops, without modifying the system's basic functions. It can train AI models automatically: successfully training an AI model usually requires continual adjustment of the algorithm's parameters to reach a preset result; for example, training a classification model with the XGBoost algorithm usually requires tuning more than ten parameters such as eta, max_depth, and subsample to reach the target precision and recall. Done manually, only an experienced senior trainer can find the optimal parameters after many training runs; here, the whole process of parameter tuning, selection, evaluation, and publishing is completed fully automatically by the automatic hyperparameter optimization algorithm.
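As a concrete illustration of the search space such an algorithm explores, here is a sketch of sampling the three XGBoost parameters named above (eta, max_depth, subsample). The ranges are illustrative assumptions, not values prescribed by the patent:

```python
import random

# Hypothetical search space over the XGBoost parameters named in the text;
# the ranges below are common choices, not the patent's.
SPACE = {
    "eta":       lambda r: 10 ** r.uniform(-3, 0),  # log-uniform in [0.001, 1]
    "max_depth": lambda r: r.randint(3, 10),
    "subsample": lambda r: r.uniform(0.5, 1.0),
}

def sample_combinations(n, seed=42):
    """Draw n hyperparameter combinations, one future shard each."""
    r = random.Random(seed)
    return [{name: draw(r) for name, draw in SPACE.items()} for _ in range(n)]

combos = sample_combinations(5)
print(len(combos))  # -> 5
```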
It provides visual training orchestration capability: lowering the threshold of AI model training requires a visual interface for arranging the data, algorithms, and other components a model needs, shielding the complicated underlying training configuration; this scheme provides a personalized, highly usable visual interface that helps ordinary trainers complete even highly complex model training. It realizes sharded parallel model training, improving the efficiency of automatic AI model training: multiple models must be trained in parallel across multiple hyperparameter combinations and the optimal combination found among their results; through the intelligent sharded parallel-training scheduling algorithm, training tasks are loaded into multiple machine learning engines according to resource conditions and task requirements, realizing parallel sharded training.
Drawings
FIG. 1 is a block diagram of the sharded computing engine according to the present invention;
FIG. 2 is a flow chart of the AI model training method according to the present invention;
FIG. 3 is the AI model training system architecture according to the present invention.
Detailed Description
The technical scheme of the invention is further explained below with reference to the accompanying drawings.
Example one
An AI model training system based on sharded automatic learning comprises a code development module, a multi-scenario training module, a training task module, a training resource module, a training orchestration module, an execution engine, visual training process monitoring, and a visual training task monitoring portal.
First, an AI model training platform is built, and a customer recognition model is built on face data; specifically:
(1) the code development module provides the training task with scripts for framework invocation, data computation, model training, and so on;
(2) the multi-scenario training module creates a computer-vision model training scenario based on the operator's existing customer photos;
(3) the training task module creates the model training task and provides the data, scripts, and other information required for training;
(4) the training resource module creates the cloud-platform resources (CPU, GPU, memory, etc.) used by the training task;
(5) the training orchestration module creates each node of the training flow, including the data source, the machine learning framework, the machine learning algorithm, and so on;
(6) the execution engine runs the training flow according to the data and scripts configured in the training task and the orchestration configuration, completing the sharded automatic training of the AI model;
(7) the visual training process monitoring provided by the platform realizes visual monitoring of the training process;
(8) the visual training task monitoring portal shows the total number of code artifacts, models, and training runs, compute statistics, and other information, giving an overview of model training across the system.
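Steps (1) through (5) each contribute part of a training-task definition before the execution engine runs it. A minimal sketch of how those pieces might be assembled (all field names and values are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class TrainingTask:
    """Sketch of the information the modules in steps (1)-(5) contribute
    to one training task; field names are illustrative, not the patent's."""
    scripts: list = field(default_factory=list)    # (1) code development module
    scenario: str = "computer_vision"              # (2) multi-scenario module
    data: str = ""                                 # (3) training task module
    resources: dict = field(default_factory=dict)  # (4) training resource module
    flow: list = field(default_factory=list)       # (5) orchestration module

task = TrainingTask(
    scripts=["call_framework.py", "compute.py", "train.py"],
    data="customer_photos/",
    resources={"cpu": 8, "gpu": 1, "memory_gb": 32},
    flow=["data_source", "framework", "algorithm"],
)
print(task.scenario, len(task.scripts))  # -> computer_vision 3
```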
An AI model training method based on sharded automatic learning comprises the following steps:
s1, the automatic hyperparameter tuning algorithm;
s2, Bayesian automatic parameter tuning based on the sharded computing engine;
s3, pipelined AI model training.
the step S1 hyper-parameter automatic tuning algorithm uses a Spearmint (gaussian process agent), SMAC (random forest regression), or hyper (Tree park Estimator-TPE) bayes optimization algorithm as a proxy model, and iteratively finds the optimal hyper-parameter of the objective function according to the following 5 sub-steps:
s11: establishing an objective function (prior function) of the agent model;
s12: finding out the hyper-parameters which are best represented on the proxy model (the best represented judgment is to use an Expected Improvement (EI) value which is obtained according to an Acquisition Function);
s13: applying the found optimal hyper-parameter to the true objective function;
s14: updating the agent model containing the new result;
s15: repeating the above steps 2-4 until a maximum number of iterations or time is reached
The step S2 Bayesian automatic parameter tuning based on the sharded computing engine comprises the following sub-steps:
s21: after an AI model training task is submitted at the Master (web front end), the sharded computing engine creates a Driver service and one or more Call Node services for the task;
s22: the Driver shards and distributes tasks through a scheduling algorithm, executes the task shards in cooperation with each Call Node, collects the per-shard result models uploaded by the Call Nodes, compares and evaluates them, and finally returns the optimal model;
s23: each Call Node service receives and executes the task shards distributed by the Driver and returns the result model of each shard.
the step S3 of the streamlined AI model training includes the following steps:
s31: developing a training script;
s32: creating a training task;
s33: setting a super-parameter search;
s34: associating the labeled sample;
s35: setting resource allocation;
s36: training a model;
s37: generating an optimal model;
s38: and (6) model release.
First, the invention establishes an automatic hyperparameter tuning algorithm that generates an objective function based on Bayesian optimization. It can tune hyperparameters efficiently using prior knowledge: in the search for the optimal hyperparameters, the trials of parameter combinations are not independent, each result guiding the next selection, so the search is accelerated by reducing computation and does not depend on the large sample counts that manual guessing requires. Because the optimization is based on randomness and probability distributions, it is highly general and robust even when the objective function is unknown and computationally expensive. Second, the sharded computing engine of the invention supports personalized customization of training tasks and machine learning frameworks: the system automatically generates multiple hyperparameter combinations for the selected framework, automatically splits the exploration task into subtasks distributed to multiple computing nodes, and automatically selects the model with the best combination, fully automating the tuning, optimization, evaluation, and publishing of the modeling process. Finally, the invention provides pipelined training management for the AI model training process and establishes systematic, standardized training steps and schemes, achieving comprehensive and scientific AI model training management, effectively lowering the training threshold while guaranteeing training quality.
While particular embodiments of the present invention have been illustrated and described, it would be obvious that various other changes and modifications can be made without departing from the spirit and scope of the invention. It is therefore intended to cover in the appended claims all such changes and modifications that are within the scope of this invention.
Claims (5)
1. An AI model training system based on sharded automatic learning, characterized by comprising a code development module, a multi-scenario training module, a training task module, a training resource module, a training orchestration module, an execution engine, visual training process monitoring, and a visual training task monitoring portal.
2. An AI model training method based on sharded automatic learning, characterized by comprising the following steps:
s1: automatic hyperparameter tuning;
s2: Bayesian automatic parameter tuning based on the sharded computing engine;
s3: pipelined AI model training.
3. The AI model training method according to claim 2, characterized in that the step S1 automatic hyperparameter tuning algorithm uses a Bayesian optimization algorithm with Spearmint (Gaussian-process surrogate), SMAC (random-forest regression), or Hyperopt (Tree Parzen Estimator, TPE) as the surrogate model, and iteratively finds the optimal hyperparameters of the objective function through the following five sub-steps:
s11: establishing the objective (prior) function of the surrogate model;
s12: finding the hyperparameters that perform best on the surrogate model (judged by the Expected Improvement (EI) value obtained from an acquisition function);
s13: applying the best hyperparameters found to the true objective function;
s14: updating the surrogate model with the new result;
s15: repeating steps S12 to S14 until the maximum number of iterations or the time limit is reached.
4. The AI model training method based on sharded automatic learning according to claim 2, characterized in that the step S2 Bayesian automatic parameter tuning based on the sharded computing engine comprises the following sub-steps:
s21: after an AI model training task is submitted at the Master (web front end), the sharded computing engine creates a Driver service and one or more Call Node services for the task;
s22: the Driver shards and distributes tasks through a scheduling algorithm, executes the task shards in cooperation with each Call Node, collects the per-shard result models uploaded by the Call Nodes, compares and evaluates them, and finally returns the optimal model;
s23: each Call Node service receives and executes the task shards distributed by the Driver and returns the result model of each shard.
5. The AI model training method based on sharded automatic learning according to claim 2, characterized in that the step S3 pipelined AI model training comprises the following steps:
s31: developing the training script;
s32: creating the training task;
s33: setting the hyperparameter search;
s34: associating the labeled samples;
s35: setting the resource allocation;
s36: training the model;
s37: generating the optimal model;
s38: publishing the model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910827111.2A CN110659741A (en) | 2019-09-03 | 2019-09-03 | AI model training system and method based on piece-splitting automatic learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910827111.2A CN110659741A (en) | 2019-09-03 | 2019-09-03 | AI model training system and method based on piece-splitting automatic learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110659741A true CN110659741A (en) | 2020-01-07 |
Family
ID=69037750
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910827111.2A Pending CN110659741A (en) | 2019-09-03 | 2019-09-03 | AI model training system and method based on piece-splitting automatic learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110659741A (en) |
History
- 2019-09-03: Application CN201910827111.2A filed (CN); publication CN110659741A, status: Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108062572A (en) * | 2017-12-28 | 2018-05-22 | 华中科技大学 | Hydro-generating unit fault diagnosis method and system based on DdAE deep learning models |
CN108154430A (en) * | 2017-12-28 | 2018-06-12 | 上海氪信信息技术有限公司 | Credit scoring construction method based on machine learning and big data technology |
CN108881446A (en) * | 2018-06-22 | 2018-11-23 | 深源恒际科技有限公司 | Artificial intelligence platform system based on deep learning |
CN109447277A (en) * | 2018-10-19 | 2019-03-08 | 厦门渊亭信息科技有限公司 | General machine learning hyperparameter black-box optimization method and system |
CN109725531A (en) * | 2018-12-13 | 2019-05-07 | 中南大学 | Continual learning method based on a gating mechanism |
CN110119271A (en) * | 2018-12-19 | 2019-08-13 | 厦门渊亭信息科技有限公司 | Cross-platform machine learning model definition protocol and adaptation system |
CN109376869A (en) * | 2018-12-25 | 2019-02-22 | 中国科学院软件研究所 | Machine learning hyperparameter optimization system and method based on asynchronous Bayesian optimization |
CN109857804A (en) * | 2018-12-26 | 2019-06-07 | 同盾控股有限公司 | Distributed model parameter search method, device, and electronic device |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111242317A (en) * | 2020-01-09 | 2020-06-05 | 深圳供电局有限公司 | Method and device for managing applications, computer equipment, and storage medium |
CN111242317B (en) * | 2020-01-09 | 2023-11-24 | 深圳供电局有限公司 | Method and device for managing applications, computer equipment, and storage medium |
CN111553482B (en) * | 2020-04-09 | 2023-08-08 | 哈尔滨工业大学 | Hyperparameter tuning method for machine learning models |
CN111553482A (en) * | 2020-04-09 | 2020-08-18 | 哈尔滨工业大学 | Hyperparameter tuning method for machine learning models |
CN113726960A (en) * | 2020-05-26 | 2021-11-30 | 中国电信股份有限公司 | Multi-AI-capability-engine interfacing and content distribution apparatus, method, and medium |
CN113726960B (en) * | 2020-05-26 | 2022-09-30 | 中国电信股份有限公司 | Multi-AI-capability-engine interfacing and content distribution apparatus, method, and medium |
CN113780568B (en) * | 2020-06-09 | 2024-05-14 | 子长科技(北京)有限公司 | Automatic model training framework, device, and storage medium |
CN113780568A (en) * | 2020-06-09 | 2021-12-10 | 子长科技(北京)有限公司 | Automatic model training framework, device, and storage medium |
CN111950601A (en) * | 2020-07-20 | 2020-11-17 | 上海淇馥信息技术有限公司 | Method and device for constructing a resource return performance prediction model, and electronic equipment |
CN111950601B (en) * | 2020-07-20 | 2024-04-26 | 奇富数科(上海)科技有限公司 | Method and device for constructing a resource return performance prediction model, and electronic equipment |
CN112446110A (en) * | 2020-11-06 | 2021-03-05 | 电子科技大学 | Application of the EOASM algorithm to surrogate model construction for the drive arm base of a palletizing robot |
CN112446110B (en) * | 2020-11-06 | 2022-04-05 | 电子科技大学 | Application of the EOASM algorithm to surrogate model construction for the drive arm base of a palletizing robot |
CN112580820A (en) * | 2020-12-01 | 2021-03-30 | 遵义师范学院 | Intermittent machine learning training method |
CN112685457A (en) * | 2020-12-31 | 2021-04-20 | 北京思特奇信息技术股份有限公司 | Automatic training system and method for a package-recommendation machine learning model |
CN112966438A (en) * | 2021-03-05 | 2021-06-15 | 北京金山云网络技术有限公司 | Machine learning algorithm selection method and distributed computing system |
CN112801304A (en) * | 2021-03-17 | 2021-05-14 | 中奥智能工业研究院(南京)有限公司 | Automated data analysis and modeling process |
CN115952417A (en) * | 2022-12-23 | 2023-04-11 | 昆岳互联环境技术(江苏)有限公司 | Automatic hyperparameter tuning method based on a genetic algorithm |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110659741A (en) | AI model training system and method based on piece-splitting automatic learning | |
CN108536650B (en) | Method and device for generating a gradient boosting tree model | |
US11030521B2 (en) | Estimating cardinality selectivity utilizing artificial neural networks | |
CN110287029A (en) | Method for dynamic adjustment of Kubernetes container resources | |
CN110956202B (en) | Image training method, system, medium and intelligent device based on distributed learning | |
US11299177B2 (en) | Information processing method, electronic device, and storage medium | |
CN111126621B (en) | Online model training method and device | |
US10963232B2 (en) | Constructing and enhancing a deployment pattern | |
CN113010312B (en) | Super-parameter tuning method, device and storage medium | |
CN112257868A (en) | Method and device for constructing and training an ensemble prediction model for passenger flow prediction | |
CN113557534A (en) | Deep forest model development and training | |
CN113094116A (en) | Deep learning application cloud configuration recommendation method and system based on load characteristic analysis | |
CN108073582B (en) | Computing framework selection method and device | |
CN109558248A (en) | Method and system for determining resource allocation parameters for ocean model computation | |
US20210326761A1 (en) | Method and System for Uniform Execution of Feature Extraction | |
CN117992078A (en) | Automatic deployment method for an inference acceleration service based on TensorRT-LLM | |
CN104657422B (en) | Intelligent content publishing classification method based on a classification decision tree | |
CN112231299B (en) | Method and device for dynamically adjusting feature library | |
CN112925811A (en) | Data processing method, device, equipment, storage medium and program product | |
CN114661571B (en) | Model evaluation method, device, electronic equipment and storage medium | |
US20210311942A1 (en) | Dynamically altering a query access plan | |
CN114417980A (en) | Business model establishing method and device, electronic equipment and storage medium | |
CN114187259A (en) | Creation method of video quality analysis engine, video quality analysis method and equipment | |
CN113986222A (en) | API (application programming interface) translation system for cloud computing | |
CN113805850A (en) | Artificial intelligence management system based on multiple deep learning and machine learning frameworks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200107 ||