CN112883654A - Model training system based on data driving - Google Patents

Model training system based on data driving

Info

Publication number
CN112883654A
Authority
CN
China
Prior art keywords
data
model
training
parameters
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110311639.1A
Other languages
Chinese (zh)
Other versions
CN112883654B (en)
Inventor
康波
孟祥飞
孙华文
郭佳
李菲菲
高佑强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Supercomputer Center In Tianjin
Original Assignee
National Supercomputer Center In Tianjin
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Supercomputer Center In Tianjin
Priority to CN202110311639.1A
Publication of CN112883654A
Application granted
Publication of CN112883654B
Legal status: Active

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention relates to a data-driven model training system, which comprises: an initial model builder, which acquires a training data set from a user terminal browser and selects a corresponding network structure model from a preset network structure model library according to the training data set; a parameter extractor, which extracts preset parameters from the network structure model according to a preset parameter framework; a model mapper, which selects a target deep learning framework and maps the preset parameters into mapping data and a mapping model corresponding to the target deep learning framework; a code generator, which dynamically generates executable code from the mapping data and the mapping model; and a code executor, which runs the executable code and generates a run result. The invention simplifies the model training process, improves training efficiency, and reduces operating difficulty, so that non-specialists can use it.

Description

Model training system based on data driving
Technical Field
The invention relates to the technical field of model training, and in particular to a data-driven model training system.
Background
With the continuous development of machine learning technology, model training is widely applied, but it can usually be performed only by professionals familiar with machine learning. At the same time, more and more practitioners outside the machine learning field also need to train models. Supercomputers, with their enormous computing power and capacity for large-scale data fusion, are a natural platform for model training, so running the training process on a supercomputer is a development trend; yet operating a supercomputer likewise requires specialized expertise. How to train models on a supercomputer while improving training efficiency and simplifying the process, so that non-specialists can use it, is therefore an urgent technical problem to be solved.
Disclosure of Invention
The invention aims to provide a data-driven model training system that simplifies the model training process, improves training efficiency, and reduces operating difficulty, so that non-specialists can use it.
A data-driven model training system, comprising:
an initial model builder, arranged on a cloud server that is communicatively connected with a supercomputer and a user terminal, and configured to acquire a training data set from the user terminal browser and select a corresponding network structure model from a preset network structure model library according to the training data set;
a parameter extractor, arranged on the cloud server and configured to extract preset parameters from the network structure model according to a preset parameter framework;
a model mapper, arranged on the cloud server and configured to select a target deep learning framework and map the extracted preset parameters into mapping data and a mapping model corresponding to the target deep learning framework;
a code generator, arranged on the supercomputer and configured to dynamically generate executable code from the mapping data and the mapping model;
and a code executor, arranged on the supercomputer and configured to run the executable code and generate a run result.
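As an illustration of how these five components hand data to one another, the following Python sketch wires them together as plain functions. All function names, dictionary keys, and data shapes are assumptions made for illustration; they are not the patented system's actual interface.

```python
# A minimal, illustrative sketch of the five-stage pipeline described above.

def build_initial_model(training_set, model_library):
    """Initial model builder (cloud server): pick a network structure model for the data."""
    return model_library[training_set["task"]]

def extract_parameters(structure_model):
    """Parameter extractor: pull preset parameters out per the preset parameter framework."""
    return {"data": structure_model["data_layer"],
            "model": structure_model["compute_layer"],
            "resources": structure_model["training_layer"]}

def map_to_framework(params, target_framework):
    """Model mapper: produce mapping data and a mapping model for the target framework."""
    return {"framework": target_framework,
            "mapping_data": params["data"],
            "mapping_model": params["model"]}

def generate_code(mapping):
    """Code generator (supercomputer): render the mapping into an executable script."""
    return f"# auto-generated training script for {mapping['framework']}\n"

def execute(code):
    """Code executor (supercomputer): run the script and return the run result."""
    return {"status": "submitted", "script_length": len(code)}
```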
Further, the system further includes a network structure model library module, configured to integrate and package common network structure models together with their parameters, set the preset parameters to be configured for each network structure model in the library to a to-be-input state, and configure all other parameters with default values.
Further, the initial model builder comprises a network structure model recommending unit, configured to recommend a corresponding network structure model to the user terminal according to the number of samples in the training data set.
Further, the system also comprises a target deep learning framework determining module, configured to determine the target deep learning framework according to the deep learning frameworks configured on the supercomputer and their resource availability.
Further, the initial model builder is internally provided with a data layer, a calculation layer and a training layer: it sets the location of the training data in the data layer, sets the directed-graph structure of the network structure model in the calculation layer, and sets the resources required by the training computation in the training layer; these resources include the number of nodes, the number of training rounds, and the amount of data per round.
Further, the preset parameter framework includes data parameters, model parameters, and computing resources, and the parameter extractor is specifically configured to:
extract the data parameters from the data layer, the data parameters including a data source, a data format, and a data size;
extract the model parameters from the calculation layer, the model parameters including the node composition and connection relations of the graph and the parameters corresponding to each graph node in the calculation layer;
and extract the computing resources required by the training computation from the training layer.
Further, the mapping data is the training data, corresponding to the target deep learning framework, obtained by parsing the data parameters, and comprises a training data source, the data size of each training batch, augmentation options, and preprocessing options;
and the mapping model is graph-format data, corresponding to the target deep learning framework, obtained by mapping the model parameters, in JSON or XML form.
Further, the code generator is specifically configured with a preset conversion template and converts the mapping data and the mapping model into corresponding code in the order of dependent-library loading, data loading, model loading, and training configuration.
Further, the code executor is configured to convert the executable code into a run script corresponding to the supercomputer, run it, and generate the run result.
Further, the system further comprises:
a result recoverer, configured to acquire the run result from the code executor, store it to a network disk, mount it onto the cloud server, and have the cloud server send it to the result viewer;
and a result viewer, arranged on the browser of the user terminal and configured to display key parameters according to a preset display rule, the key parameters including loss, learning rate, and accuracy.
Compared with the prior art, the invention has obvious advantages and beneficial effects. By means of the above technical scheme, the data-driven model training system provided by the invention achieves considerable technical progress and practicality, has wide industrial utilization value, and offers at least the following advantages:
(1) It simplifies the model training process, improves training efficiency, and reduces operating difficulty, so that non-specialists can use it.
(2) It realizes algorithm reuse and framework migration, avoids repeated programming, simplifies the model training process, improves training efficiency, and has good fault tolerance.
(3) It guarantees effective and fast execution of the model, dynamically allocates resources, and adapts to different frameworks: once a user establishes a training task, the system automatically configures the corresponding computing environment and resources according to the currently installed frameworks and the available resources, so the task can be executed quickly.
(4) The system uses a unified model structure, which gives it good portability: code written once can be executed under different frameworks, facilitating comparative analysis and later improvement.
(5) It provides visualization, has high interaction and execution efficiency, and offers a good user experience.
The foregoing is only an overview of the technical solution of the invention. To make the technical means of the invention clearer, so that it can be implemented according to the description, and to make the above and other objects, features, and advantages of the invention easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic diagram of a model training system based on data driving according to an embodiment of the present invention.
[Reference numerals]
1: initial model builder    2: parameter extractor
3: model mapper             4: code generator
5: code executor            6: result recoverer
7: result viewer
Detailed Description
To further explain the technical means adopted by the invention to achieve the intended objects and their effects, a data-driven model training system according to the invention, together with its effects, is described in detail below with reference to the accompanying drawings and preferred embodiments.
To solve the technical problems described in the background, in the embodiment of the invention a corresponding network structure model can be selected according to the training data set input by the user, and the technical flow is generated by designing the machine learning model, thereby realizing dynamic mapping from a front-end visual model to a back-end machine learning framework. The model description file produced by drag-and-drop modeling is subjected to flow analysis and parameter analysis to form a network structure with a uniform description format; the training database is then loaded according to the network parameters' designation of the training data to form the target model. By specifying the computing resources required for training, the corresponding training framework and the target model are adapted to each other. Finally, training is carried out and operations such as model saving are performed.
Specifically, an embodiment of the invention provides a data-driven model training system, as shown in Fig. 1, comprising an initial model builder 1, a parameter extractor 2, a model mapper 3, a code generator 4, and a code executor 5. The initial model builder 1 is arranged on a cloud server that is communicatively connected with a supercomputer and a user terminal, and is configured to acquire a training data set from the user terminal browser and select a corresponding network structure model from a preset network structure model library according to the training data set. The parameter extractor 2 is arranged on the cloud server and configured to extract preset parameters from the network structure model according to a preset parameter framework. The model mapper 3 is arranged on the cloud server and configured to select a target deep learning framework and map the extracted preset parameters into mapping data and a mapping model corresponding to the target deep learning framework. The code generator 4 is arranged on the supercomputer and configured to dynamically generate executable code from the mapping data and the mapping model. The code executor 5 is arranged on the supercomputer and configured to run the executable code and generate a run result.
Data driving here means that massive data are collected by means of the mobile internet or other related software, organized into information, the related information is then integrated and refined, and on this basis an automated decision model is formed through training and fitting.
To enable the user to obtain execution results directly through a visual interface and improve the user experience, the system further comprises a result recoverer 6 and a result viewer 7, as shown in Fig. 1. The result recoverer 6 is configured to acquire the run result from the code executor 5, store it to a network disk, mount it onto the cloud server, and have the cloud server send it to the result viewer 7. The result viewer 7 is arranged on the browser of the user terminal and displays the key parameters of the run result according to a preset display rule; the key parameters include loss, learning rate, and accuracy, and also serve as references for the next round of model editing and adjustment.
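The following Python sketch illustrates, under assumed field names, how the result recoverer might persist a run result and reduce it to the key parameters the result viewer displays; the job identifier, the dictionary standing in for the network-disk mount, and the numeric values are all hypothetical.

```python
# Illustrative only: field names and values are assumptions, not the patent's data model.

KEY_PARAMETERS = ("loss", "learning_rate", "accuracy")

def recover_result(run_result: dict, network_disk: dict, job_id: str) -> dict:
    """Persist the full run result (network_disk stands in for the mounted disk), keep key fields."""
    network_disk[job_id] = run_result                       # full record kept for later inspection
    return {k: run_result[k] for k in KEY_PARAMETERS if k in run_result}

# Example: what the result viewer would render for one training run
disk = {}
view = recover_result({"loss": 0.41, "learning_rate": 1e-3, "accuracy": 0.87,
                       "raw_log": "..."}, disk, job_id="job-001")
print(view)   # {'loss': 0.41, 'learning_rate': 0.001, 'accuracy': 0.87}
```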
By providing the initial model builder 1 and the parameter extractor 2, the system effectively solves the problem that algorithms are difficult to reuse and modify when artificial intelligence jobs are executed on a supercomputer: when the structure or parameters of the network model change, the code does not need to be rewritten, which avoids the growing difficulty of managing model files and the corresponding data and log files. By providing the model mapper 3, the system solves the problem that migration across frameworks is difficult: the parameters can be converted directly into the mapping data and mapping model corresponding to the target deep learning framework without rewriting code, which improves fault tolerance. By providing the code executor 5, the result recoverer 6, and the result viewer 7, the computation runs on the supercomputer while the user submits tasks and queries results in the browser, which simplifies debugging, improves interaction efficiency and computing efficiency, and improves the user experience.
As an example, the system further includes a network structure model library module, configured to integrate and package common network structure models together with their parameters: the preset parameters to be configured for each network structure model in the library are set to a to-be-input state, and all other parameters are given default values in advance and packaged into the template. In this way the system exposes only the necessary parameters that the user must set; when using the model training system, the user only needs to enter these parameters to start training, which reduces the operating difficulty and makes the system convenient to use. Common network structure models have similar network structures and the same functional form and input data, that is, they share the same properties, which is what makes this uniform packaging possible.
As an example, the system according to the embodiment of the invention is applicable to application scenarios such as image classification and image target detection; for image classification, a variety of mature models such as AlexNet, LeNet, VGG, ResNet, Inception, MobileNet, and their various versions may be integrated into the network structure model library.
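A minimal sketch of what such a model library entry could look like, with only the user-facing parameters exposed and everything else packaged as defaults; the parameter names, default values, and the instantiate helper are assumptions for illustration, not the library's actual contents.

```python
# Hypothetical library entries: only "required" fields are exposed to the user,
# everything under "defaults" is pre-packaged.
MODEL_LIBRARY = {
    "LeNet":       {"required": ["num_classes"], "defaults": {"input_size": 32,  "optimizer": "sgd"}},
    "AlexNet":     {"required": ["num_classes"], "defaults": {"input_size": 227, "optimizer": "sgd"}},
    "VGG16":       {"required": ["num_classes"], "defaults": {"input_size": 224, "optimizer": "sgd"}},
    "ResNet50":    {"required": ["num_classes"], "defaults": {"input_size": 224, "optimizer": "adam"}},
    "MobileNetV2": {"required": ["num_classes"], "defaults": {"input_size": 224, "optimizer": "adam"}},
}

def instantiate(name: str, user_params: dict) -> dict:
    """Merge user-supplied values over packaged defaults; unfilled required fields stay 'to be input'."""
    entry = MODEL_LIBRARY[name]
    config = dict(entry["defaults"])
    config.update(user_params)
    missing = [p for p in entry["required"] if p not in config]
    return {"model": name, "config": config, "to_be_input": missing}

# Example: the user only supplies the number of classes
print(instantiate("ResNet50", {"num_classes": 10}))
```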
As an example, the initial model builder includes a network structure model recommending unit, configured to recommend a corresponding network structure model to the user terminal according to the number of samples in the training data set, which further simplifies the model training process and reduces the user's operating difficulty.
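A hypothetical recommendation rule keyed only to the sample count might look like the following; the thresholds and the recommended models are illustrative assumptions, not values disclosed in the patent.

```python
# Illustrative recommendation rule: thresholds and model choices are assumptions.
def recommend_model(num_samples: int) -> str:
    if num_samples < 10_000:
        return "LeNet"        # small sets: a compact model is easier to fit without overfitting
    if num_samples < 100_000:
        return "ResNet50"
    return "ResNet101"        # larger sets can support deeper networks
```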
The initial model builder 1 is internally provided with a data layer, a calculation layer and a training layer: it sets the location of the training data in the data layer, sets the directed-graph structure of the network structure model in the calculation layer, and sets the resources required by the training computation in the training layer; these resources include the number of nodes, the number of training rounds, and the amount of data per round.
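A possible shape for the three-layer model description assembled by the initial model builder 1 is sketched below; the concrete keys and values are assumptions chosen to mirror the data layer, calculation layer, and training layer named above.

```python
# Hypothetical three-layer model description (keys and values are assumptions).
model_description = {
    "data_layer": {                      # where the training data lives
        "path": "/share/datasets/cifar10",
    },
    "compute_layer": {                   # directed-graph structure of the network
        "nodes": [
            {"id": "conv1", "op": "conv2d",  "params": {"filters": 32, "kernel": 3}},
            {"id": "pool1", "op": "maxpool", "params": {"size": 2}},
            {"id": "fc1",   "op": "dense",   "params": {"units": 10}},
        ],
        "edges": [("conv1", "pool1"), ("pool1", "fc1")],
    },
    "training_layer": {                  # resources required by the training computation
        "nodes": 4,                      # number of compute nodes
        "epochs": 50,                    # number of training rounds
        "batch_size": 128,               # amount of data per round
    },
}
```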
As an example, the system further includes a target deep learning framework determining module, configured to determine the target deep learning framework according to the deep learning frameworks configured on the supercomputer and their resource usage. It is understood that the target deep learning framework can also be specified directly by the user, or selected by the system by default according to the task requirements.
As an example, the preset parameter framework includes data parameters, model parameters, and computing resources, and the parameter extractor 2 is specifically configured to: extract the data parameters from the data layer, the data parameters including a data source, a data format, and a data size; extract the model parameters from the calculation layer, the model parameters including the node composition and connection relations of the graph and the parameters corresponding to each graph node; and extract the computing resources required by the training computation from the training layer. It should be noted that the parameter framework may also reserve status parameters for recording the run parameters and results of each stage, which reflect the state of the whole run; the status parameters of a given stage can then be retrieved and displayed on demand.
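Continuing the sketch above, a parameter extractor over that three-layer description might look like the following; the field names, the optional fields, and the reserved status slot are assumptions for illustration.

```python
# Illustrative extraction under the preset parameter framework
# (data parameters / model parameters / computing resources); keys are assumptions.
def extract_parameters(desc: dict) -> dict:
    data_layer = desc["data_layer"]
    compute_layer = desc["compute_layer"]
    training_layer = desc["training_layer"]
    return {
        "data_parameters": {                      # data source, format, size
            "source": data_layer["path"],
            "format": data_layer.get("format", "images"),
            "size": data_layer.get("size"),
        },
        "model_parameters": {                     # graph nodes, edges, per-node parameters
            "nodes": compute_layer["nodes"],
            "edges": compute_layer["edges"],
        },
        "computing_resources": training_layer,    # nodes, training rounds, batch size
        "status": {},                             # reserved: per-stage run status (optional)
    }
```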
The mapping data may be the training data, corresponding to the target deep learning framework, obtained by parsing the data parameters, and comprises a training data source, the data size of each training batch, augmentation options, and preprocessing options; the mapping model may be graph-format data, corresponding to the target deep learning framework, obtained by mapping the model parameters, and may take JSON or XML form.
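A hypothetical JSON rendering of the mapping data and mapping model for one target framework is shown below; the schema, the framework name, and the option values are assumptions, and the same structure could equally be serialized as XML.

```python
import json

# Illustrative serialization of mapping data and a mapping model; schema is an assumption.
mapping = {
    "target_framework": "pytorch",
    "mapping_data": {
        "train_source": "/share/datasets/cifar10",
        "batch_size": 128,
        "augmentation": ["random_flip"],
        "preprocessing": ["normalize"],
    },
    "mapping_model": {
        "graph": {
            "nodes": [{"id": "conv1", "op": "conv2d"}, {"id": "fc1", "op": "dense"}],
            "edges": [["conv1", "fc1"]],
        }
    },
}
print(json.dumps(mapping, indent=2))    # the same structure could also be emitted as XML
```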
As an example, the code generator 4 is specifically configured with a preset conversion template and converts the mapping data and the mapping model into corresponding code in the order of dependent-library loading, data loading, model loading, and training configuration. This may be implemented in Python, a computer programming language that is object-oriented and dynamically typed.
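A minimal template-driven generator in Python, following the stated order of dependent-library loading, data loading, model loading, and training configuration, could look like this; the template text and the helper names it emits (load_dataset, build_model_from_graph, train) are assumptions rather than the system's actual templates.

```python
# Illustrative conversion template: sections are emitted in the order named in the text.
TEMPLATE_SECTIONS = [
    ("imports",  "import torch\nfrom torch.utils.data import DataLoader\n"),
    ("data",     "train_data = load_dataset('{train_source}', batch_size={batch_size})\n"),
    ("model",    "model = build_model_from_graph('{graph_file}')\n"),
    ("training", "train(model, train_data, epochs={epochs})\n"),
]

def generate_code(mapping_data: dict, training_layer: dict, graph_file: str) -> str:
    """Fill the template sections with values from the mapping and resource description."""
    context = {**mapping_data, **training_layer, "graph_file": graph_file}
    return "".join(section.format(**context) for _, section in TEMPLATE_SECTIONS)

# e.g. generate_code({"train_source": "/share/datasets/cifar10", "batch_size": 128},
#                    {"epochs": 50}, "graph.json")
```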
The code executor 5 is configured to convert the executable code into a run script corresponding to the supercomputer and run it to generate the run result. For example, on the Tianhe supercomputer, suppose the run script is named xxx.bat and its content is yhrun -N N -p P run.py, where N is the number of nodes specified by the computing resources, P is the resource partition on the supercomputer, and run.py is the code generated by the code generator 4; the script is then submitted by yhbatch -N N -p P xxx.bat. When the computing resources are specified as a core count, a conversion from cores to nodes is needed, that is: node count = INT(core count / cores per node); if the core count is not evenly divisible by the cores per node, node count = INT(core count / cores per node) + 1.
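The core-to-node conversion and the script and submission commands described above can be sketched as follows; the cores-per-node value, the partition name, and the helper names are assumptions for illustration, and the command strings simply mirror the yhrun/yhbatch example in the text.

```python
import math

def cores_to_nodes(total_cores: int, cores_per_node: int) -> int:
    # node count = INT(cores / cores_per_node), plus one when not evenly divisible
    return math.ceil(total_cores / cores_per_node)

def build_run_script(nodes: int, partition: str, generated_code: str = "run.py") -> str:
    # contents of the run script: yhrun -N N -p P run.py
    return f"yhrun -N {nodes} -p {partition} {generated_code}\n"

def build_submit_command(nodes: int, partition: str, script_name: str = "xxx.bat") -> str:
    # submission command: yhbatch -N N -p P xxx.bat
    return f"yhbatch -N {nodes} -p {partition} {script_name}"

# Example: 100 cores on nodes with 32 cores each -> ceil(100 / 32) = 4 nodes
print(cores_to_nodes(100, 32))                  # 4
print(build_run_script(4, "work"))              # yhrun -N 4 -p work run.py
print(build_submit_command(4, "work"))          # yhbatch -N 4 -p work xxx.bat
```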
In summary, the system of the embodiment of the invention simplifies the model training process, improves training efficiency, and reduces operating difficulty, so that non-specialists can use it. It realizes algorithm reuse and framework migration, avoids repeated programming, simplifies the model training process, improves training efficiency, and has good fault tolerance. It also guarantees effective and fast execution of the model, dynamically allocates resources, and adapts to different frameworks: once a user establishes a training task, the system automatically configures the corresponding computing environment and resources according to the currently installed frameworks and the available resources, so the task can be executed quickly. In addition, the system uses a unified model structure, which gives it good portability: code written once can be executed under different frameworks, facilitating comparative analysis and later improvement. The system also provides visualization, has high interaction and execution efficiency, and offers a good user experience.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A data-driven model training system, comprising:
an initial model builder, arranged on a cloud server that is communicatively connected with a supercomputer and a user terminal, the initial model builder being configured to acquire a training data set from the user terminal browser and select a corresponding network structure model from a preset network structure model library according to the training data set;
a parameter extractor, arranged on the cloud server and configured to extract preset parameters from the network structure model according to a preset parameter framework;
a model mapper, arranged on the cloud server and configured to select a target deep learning framework and map the extracted preset parameters into mapping data and a mapping model corresponding to the target deep learning framework;
a code generator, arranged on the supercomputer and configured to dynamically generate executable code from the mapping data and the mapping model;
and a code executor, arranged on the supercomputer and configured to run the executable code and generate a run result.
2. The data-driven model training system of claim 1, wherein
the system further comprises a network structure model library construction module, configured to integrate and package common network structure models together with their parameters, set the preset parameters to be configured for each network structure model in the library to a to-be-input state, and configure all other parameters with default values.
3. The data-driven model training system of claim 1, wherein
the initial model builder comprises a network structure model recommending unit, configured to recommend a corresponding network structure model to the user terminal according to the number of samples in the training data set.
4. The data-driven model training system of claim 1, wherein
the system further comprises a target deep learning framework determining module, configured to determine the target deep learning framework according to the deep learning frameworks configured on the supercomputer and their resource usage.
5. The data-driven model training system of claim 1, wherein
the initial model builder is internally provided with a data layer, a calculation layer and a training layer; the initial model builder sets the location of the training data in the data layer, sets the directed-graph structure of the network structure model in the calculation layer, and sets the resources required by the training computation in the training layer, the resources required by the training computation including the number of nodes, the number of training rounds, and the amount of data per round.
6. The data-driven model training system of claim 5, wherein
the preset parameter framework includes data parameters, model parameters, and computing resources, and the parameter extractor is specifically configured to:
extract the data parameters from the data layer, the data parameters including a data source, a data format, and a data size;
extract the model parameters from the calculation layer, the model parameters including the node composition and connection relations of the graph and the parameters corresponding to each graph node in the calculation layer;
and extract the computing resources required by the training computation from the training layer.
7. The data-driven model training system of claim 6, wherein
the mapping data is the training data, corresponding to the target deep learning framework, obtained by parsing the data parameters, and comprises a training data source, the data size of each training batch, augmentation options and preprocessing options;
and the mapping model is graph-format data, corresponding to the target deep learning framework, obtained by mapping the model parameters, in JSON or XML form.
8. The data-driven model training system of claim 1, wherein
the code generator is specifically configured with a preset conversion template and converts the mapping data and the mapping model into corresponding code in the order of dependent-library loading, data loading, model loading, and training configuration.
9. The data-driven model training system of claim 1, wherein
the code executor is configured to convert the executable code into a run script corresponding to the supercomputer, run it, and generate the run result.
10. The data-driven model training system according to any one of claims 1-9, wherein
the system further comprises:
a result recoverer, configured to acquire the run result from the code executor, store it to a network disk, mount it onto the cloud server, and have the cloud server send it to the result viewer;
and a result viewer, arranged on the browser of the user terminal and configured to display key parameters according to a preset display rule, the key parameters including loss, learning rate, and accuracy.
CN202110311639.1A 2021-03-24 2021-03-24 Model training system based on data driving Active CN112883654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110311639.1A CN112883654B (en) 2021-03-24 2021-03-24 Model training system based on data driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110311639.1A CN112883654B (en) 2021-03-24 2021-03-24 Model training system based on data driving

Publications (2)

Publication Number Publication Date
CN112883654A true CN112883654A (en) 2021-06-01
CN112883654B CN112883654B (en) 2023-01-31

Family

ID=76042149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110311639.1A Active CN112883654B (en) 2021-03-24 2021-03-24 Model training system based on data driving

Country Status (1)

Country Link
CN (1) CN112883654B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665072A (en) * 2018-05-23 2018-10-16 中国电力科学研究院有限公司 A kind of machine learning algorithm overall process training method and system based on cloud framework
CN109376844A (en) * 2018-10-30 2019-02-22 银河水滴科技(北京)有限公司 The automatic training method of neural network and device recommended based on cloud platform and model
CN109412829A (en) * 2018-08-30 2019-03-01 华为技术有限公司 A kind of prediction technique and equipment of resource distribution
US20200027210A1 (en) * 2018-07-18 2020-01-23 Nvidia Corporation Virtualized computing platform for inferencing, advanced processing, and machine learning applications
CN110852449A (en) * 2019-11-25 2020-02-28 北京百度网讯科技有限公司 Model migration method and electronic device
CN111310934A (en) * 2020-02-14 2020-06-19 北京百度网讯科技有限公司 Model generation method and device, electronic equipment and storage medium
US20200222949A1 (en) * 2017-09-19 2020-07-16 Intuitive Robotics, Inc. Systems and methods for waste item detection and recognition
US20200250585A1 (en) * 2019-01-31 2020-08-06 EMC IP Holding Company LLC Method, device and computer program product for deploying a machine learning model
CN111696661A (en) * 2020-05-13 2020-09-22 平安科技(深圳)有限公司 Patient clustering model construction method, patient clustering method and related equipment
CN112162734A (en) * 2020-10-23 2021-01-01 福州大学 Integrated machine learning algorithm library and unified programming framework (for deep learning)
CN112328325A (en) * 2020-11-06 2021-02-05 深圳壹账通智能科技有限公司 Execution method and device of model file, terminal equipment and storage medium
CN112541577A (en) * 2020-12-16 2021-03-23 上海商汤智能科技有限公司 Neural network generation method and device, electronic device and storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200222949A1 (en) * 2017-09-19 2020-07-16 Intuitive Robotics, Inc. Systems and methods for waste item detection and recognition
CN108665072A (en) * 2018-05-23 2018-10-16 中国电力科学研究院有限公司 A kind of machine learning algorithm overall process training method and system based on cloud framework
US20200027210A1 (en) * 2018-07-18 2020-01-23 Nvidia Corporation Virtualized computing platform for inferencing, advanced processing, and machine learning applications
CN109412829A (en) * 2018-08-30 2019-03-01 华为技术有限公司 A kind of prediction technique and equipment of resource distribution
CN109376844A (en) * 2018-10-30 2019-02-22 银河水滴科技(北京)有限公司 The automatic training method of neural network and device recommended based on cloud platform and model
US20200250585A1 (en) * 2019-01-31 2020-08-06 EMC IP Holding Company LLC Method, device and computer program product for deploying a machine learning model
CN110852449A (en) * 2019-11-25 2020-02-28 北京百度网讯科技有限公司 Model migration method and electronic device
CN111310934A (en) * 2020-02-14 2020-06-19 北京百度网讯科技有限公司 Model generation method and device, electronic equipment and storage medium
CN111696661A (en) * 2020-05-13 2020-09-22 平安科技(深圳)有限公司 Patient clustering model construction method, patient clustering method and related equipment
CN112162734A (en) * 2020-10-23 2021-01-01 福州大学 Integrated machine learning algorithm library and unified programming framework (for deep learning)
CN112328325A (en) * 2020-11-06 2021-02-05 深圳壹账通智能科技有限公司 Execution method and device of model file, terminal equipment and storage medium
CN112541577A (en) * 2020-12-16 2021-03-23 上海商汤智能科技有限公司 Neural network generation method and device, electronic device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Zifeng (王子枫): "Development and Algorithms of an Artificial Intelligence Chip Software Stack" (人工智能芯片软件栈的开发及算法), China Excellent Master's Theses Full-text Database, Information Science and Technology series *

Also Published As

Publication number Publication date
CN112883654B (en) 2023-01-31

Similar Documents

Publication Publication Date Title
Granchelli et al. Towards recovering the software architecture of microservice-based systems
CN103198009B (en) A kind of universal testing method, system and related device
US8417798B2 (en) Deploying artifacts for packaged software application in cloud computing environment
CN112199086B (en) Automatic programming control system, method, device, electronic equipment and storage medium
Bünder Decoupling Language and Editor-The Impact of the Language Server Protocol on Textual Domain-Specific Languages.
WO2004021207A1 (en) Systems and methods for improving service delivery
CN113467771B (en) Model-based industrial edge cloud collaboration system and method
CN112130812B (en) Analysis model construction method and system based on data stream mixed arrangement
CN115794106A (en) Method and system for analyzing configuration of binary protocol data of rail transit
CN110780856A (en) Electricity consumption data publishing platform based on micro-service
CN113987398A (en) Software self-defined form content web development system and method
CN111125451A (en) Data production processing method and device, electronic equipment and storage medium
CN111104181A (en) Webpage data filling system for visually editing task flow
CN113505054B (en) Network data static test system and test method for unmanned aerial vehicle control station
CN114138274A (en) High-level intermediate representation conversion method and related device of deep learning compiler
CN114048188A (en) Cross-database data migration system and method
CN112882696B (en) Full-element model training system based on supercomputer
CN112883654B (en) Model training system based on data driving
WO2023160402A1 (en) Data modeling method and apparatus, and device and storage medium
US20230041718A1 (en) Automated code generation based on pseudo-code
CN111309378A (en) Machine learning model life cycle management system and method
Shah et al. DesignSystemsJS-Building a Design Systems API for aiding standardization and AI integration
CN115525321A (en) Distributed task generation method, device, equipment and storage medium
CN113885844A (en) Business service arranging method and related device
CN113204866A (en) Computing middleware method and system suitable for cloud computing of power system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant