CN109669973A - A distributed dynamic training system - Google Patents

A distributed dynamic training system

Info

Publication number
CN109669973A
CN109669973A (application CN201811606377.6A)
Authority
CN
China
Prior art keywords
data
neural network
loss function
deep learning
data source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811606377.6A
Other languages
Chinese (zh)
Inventor
蒋健
兰毅
谭涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deep Computing Technology (Chongqing) Co Ltd
Original Assignee
Deep Computing Technology (Chongqing) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deep Computing Technology (Chongqing) Co Ltd
Priority to CN201811606377.6A priority Critical patent/CN109669973A/en
Publication of CN109669973A publication Critical patent/CN109669973A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention provides foundational computing power for the artificial intelligence industry by offering remote computation, and discloses a distributed dynamic training system whose calculation method is as follows. Step 1: establish a deep learning model. Step 2: obtain the remote data source. Step 3: process the obtained data source. Step 4: train in real time. The distributed dynamic training system of the invention has simple steps, is easy to implement, supports real-time dynamic training, and greatly improves the machine's deep learning and distributed computing capability.

Description

A distributed dynamic training system
Technical field
The present invention relates to training systems, and in particular to a distributed dynamic training system.
Background technique
In recent years, deep learning and distributed computing have been among the most active research topics in machine learning, and they are now widely used in the development of artificial intelligence applications. The bottlenecks constraining the artificial intelligence industry fall into three parts: algorithms, computing power, and data. With the rapid development of big data technology, the key technical problems of data acquisition, processing, transmission, and storage, once open challenges, now have relatively mature solutions, providing massive data sources for all industries; the benefits produced by analyzing and processing these data have entered society and reached the commercialization stage. Meanwhile, with the enormous progress of modern computer science and the rapid rise of deep learning algorithms, neural networks occupy a leading position in machine learning; efficient deep learning networks such as CNNs and RNNs have matured, and the cloud computing and cloud supercomputing clusters and nodes built on top of them provide powerful computing support for the artificial intelligence field. However, although the computing power offered by cloud computing is elastic and scalable, moving large data sources from the local side to the computing side incurs a huge time cost. To reduce the time cost of transferring data and models between the local side and the computing side, a dynamic training mode based on each neural network is therefore proposed. This mode reduces transmission cost for most users.
Summary of the invention
To solve the above problems, the present invention provides foundational computing power for the artificial intelligence industry by offering remote computation, and discloses a distributed dynamic training system.
The technical scheme of the present invention is a distributed dynamic training system, characterized in that its specific calculation method is as follows:
Step 1: establish the deep learning model. Obtain the characteristic information of the target initialization neural network; analyze that characteristic information against a preset neural-network big-data set or database to obtain an analysis result; determine the target initialization neural network according to the analysis result.
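Step 1's selection of an initial network from characteristic information might be sketched as below. The database contents, the keys, and the helper name `select_initial_network` are illustrative assumptions; the patent does not specify the lookup format.

```python
# A toy "preset neural network database": characteristic information of the
# task maps to a preset initial-network configuration. Contents are invented
# for illustration only.
TASK_DB = {
    ("image", "classification"): {"type": "cnn", "layers": [32, 64, 128]},
    ("sequence", "prediction"):  {"type": "rnn", "hidden": 256},
}

def select_initial_network(modality, task):
    """Analyze the characteristic information and return an initial network config."""
    key = (modality, task)
    if key not in TASK_DB:
        raise KeyError(f"no preset network for characteristics {key}")
    return TASK_DB[key]
```

In practice the "analysis" would query a real model registry rather than a dictionary; the point is only that the analysis result determines which initialization is used.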
Step 2: obtain the remote data source. The data source is the data required for the user's neural-network computation and is provided by the user. The system sequentially cuts the data according to its prescribed format, then transmits the processed data over the network to the cloud storage end of the computing platform, where the data source and the trained neural network are stored.
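The sequential cutting of the user's data before transmission could look like the following sketch; the fixed chunk size and the helper name `cut_data` are assumptions for illustration, since the patent does not specify the prescribed format.

```python
def cut_data(data: bytes, chunk_size: int):
    """Sequentially cut the user's raw data into chunks of at most chunk_size bytes.

    The last chunk may be shorter; joining the chunks recovers the original data.
    """
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
```

Each chunk would then be transmitted to the cloud storage end of the computing platform.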
A remote-data-source acquisition module obtains the characteristic information of the target initialization neural network and analyzes it against the preset neural-network big-data set or database to obtain the analysis result; a target-initialization-network acquisition module determines the target initialization neural network according to the analysis result; and a neural-network deep-learning module trains the target initialization neural network with the training data to obtain the deep-learning target neural network.
Step 3: process the obtained data source. First, the system constructs, in the platform's data layer, a partial data pool matching the size of the data source. After the cutting process of the preceding step, the data source is stored in the cloud; the encrypted HASH values, in one-to-one correspondence with the data source, are sent to the platform through the system scheduler, and the platform uses the encrypted HASH codes to extract over the network the data source uploaded from the client.
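The one-to-one "encrypted HASH values" could plausibly be cryptographic digests of each chunk; the sketch below uses SHA-256, which is an assumption, as the patent does not name the hash or encryption scheme. The names `build_hash_index` and `fetch_by_hash` are likewise illustrative.

```python
import hashlib

def build_hash_index(chunks):
    """Build a one-to-one mapping from each chunk's digest to the chunk itself.

    SHA-256 stands in for the patent's unspecified "encrypted HASH value".
    """
    return {hashlib.sha256(c).hexdigest(): c for c in chunks}

def fetch_by_hash(index, digest):
    """Platform side: extract the chunk whose HASH code matches the digest."""
    return index[digest]
```

The scheduler would send only the digests to the platform, which then pulls the matching chunks over the network.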
The data-source calculation method works in rounds. In each round, one of the multiple loss functions computes the loss value between the prediction target and the real target, and the parameters of the deep learning model are then adjusted by back-propagating that round's loss value. Specifically, round j uses the i-th of the multiple loss functions to compute the loss value between the prediction target and the real target, then back-propagates that loss value to adjust the parameters of the deep learning model, where i successively takes 1, 2, ..., N (N being the number of loss functions) and j = 1, 2, ..., M (M being the preset number of iterations). If the loss value computed by the loss function used in round j+1 is greater than the loss value computed by the loss function used in round j, the parameter adjustment made by back-propagating round j's loss value is restored, and back-propagation parameter adjustment of the deep learning model continues with the loss function used in round j.
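A minimal numeric sketch of the round-robin loss scheme, using a linear model, two loss functions (MSE and MAE), and plain gradient descent standing in for full back-propagation. The rollback rule follows one reading of the paragraph above: when a round's loss exceeds the previous round's, the parameters are restored and the previous loss function is reused for that round. All of this is an interpretation, not the patent's reference implementation.

```python
import numpy as np

def train_with_cycled_losses(x, y, n_rounds=1000, lr=0.02):
    """Fit y ~ w*x + b while cycling through N loss functions round-robin,
    rolling back any round whose loss exceeds the previous round's."""
    w, b = 0.0, 0.0

    def mse(w, b):
        # loss, dL/dw, dL/db for mean squared error
        e = (w * x + b) - y
        return np.mean(e ** 2), 2 * np.mean(e * x), 2 * np.mean(e)

    def mae(w, b):
        # loss, dL/dw, dL/db for mean absolute error
        e = (w * x + b) - y
        s = np.sign(e)
        return np.mean(np.abs(e)), np.mean(s * x), np.mean(s)

    losses = [mse, mae]                      # N = 2 loss functions
    prev_loss, snapshot, i = None, (w, b), 0
    for j in range(n_rounds):
        loss, gw, gb = losses[i](w, b)
        advance = True
        if prev_loss is not None and loss > prev_loss:
            # Round j+1's loss exceeds round j's: restore the parameters and
            # fall back to round j's loss function for this round.
            w, b = snapshot
            i = (i - 1) % len(losses)
            loss, gw, gb = losses[i](w, b)
            advance = False
        snapshot, prev_loss = (w, b), loss
        w, b = w - lr * gw, b - lr * gb      # gradient ("back-propagation") update
        if advance:
            i = (i + 1) % len(losses)        # move to the next loss function
    return w, b
```

Note that comparing raw loss values across different loss functions (as the patent's wording implies) mixes scales; the sketch keeps the same loss function for the round after a rollback so that progress is still made.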
Step 4: train in real time. The cloud-storage service system exposes the mapping address of the data pool constructed at the platform end; the computing framework in the computing platform can fetch data from that pool, extracting and computing data according to the correspondence between the data volume required for computation and the data volume held in the pool, thereby achieving real-time training.
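The data pool from which the computing framework pulls according to available data volume might be sketched as below; the class name `DataPool` and its interface are assumptions, since the patent describes the behavior but not an API.

```python
class DataPool:
    """Sketch of the platform-side data pool exposed through a mapping address."""

    def __init__(self, chunks):
        self.chunks = list(chunks)

    def fetch(self, needed):
        """Hand the computing framework min(needed, available) chunks.

        This models the correspondence between the data volume required for
        computation and the data volume currently held in the pool.
        """
        take = min(needed, len(self.chunks))
        batch, self.chunks = self.chunks[:take], self.chunks[take:]
        return batch
```

The framework would call `fetch` repeatedly as training proceeds, so computation starts as soon as any data are available rather than after the full transfer.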
Further, the training system comprises:
a first acquisition unit for obtaining training sample data;
a second acquisition unit for obtaining, based on the pre-established deep learning model and the training sample data, the prediction target of the training sample data;
a third acquisition unit for obtaining the multiple loss functions; and
an adjustment unit for computing, with each of the multiple loss functions in turn, the loss value between the prediction target and the real target of the training sample data, and then back-propagating the computed loss value to adjust the parameters of the deep learning model.
Further, the data of the obtained remote data source are cut and processed as follows: the complete data are cut into equal parts according to a dynamically determined size, and the cut pieces are then unified through a formatted binary data stream into formatted data usable for dynamic training.
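The "formatted binary data stream" that unifies the equal-size pieces could be a simple length-prefixed framing, as in this sketch; the 4-byte big-endian header is an assumption, since the patent does not specify the format.

```python
import struct

def format_chunks(chunks):
    """Unify cut pieces into one formatted binary stream.

    Each chunk is framed as a 4-byte big-endian length header plus its payload
    (an assumed framing; the patent leaves the format unspecified).
    """
    return b"".join(struct.pack(">I", len(c)) + c for c in chunks)

def parse_stream(stream):
    """Recover the original chunks from the formatted binary stream."""
    chunks, pos = [], 0
    while pos < len(stream):
        (n,) = struct.unpack_from(">I", stream, pos)
        chunks.append(stream[pos + 4 : pos + 4 + n])
        pos += 4 + n
    return chunks
```

Length-prefixed framing keeps the stream self-describing, so the receiving side needs no out-of-band chunk-size metadata.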
Beneficial effect
The distributed dynamic training system of the invention has simple steps, is easy to implement, supports real-time dynamic training, and greatly improves the machine's deep learning and distributed computing capability.
Specific embodiment
The distributed dynamic training system of the present invention is described in detail below.
A distributed dynamic training system, characterized in that its specific calculation method is as follows:
Step 1: establish the deep learning model. Obtain the characteristic information of the target initialization neural network; analyze that characteristic information against a preset neural-network big-data set or database to obtain an analysis result; determine the target initialization neural network according to the analysis result.
Step 2: obtain the remote data source. The data source is the data required for the user's neural-network computation and is provided by the user. The system sequentially cuts the data according to its prescribed format, then transmits the processed data over the network to the cloud storage end of the computing platform, where the data source and the trained neural network are stored.
A remote-data-source acquisition module obtains the characteristic information of the target initialization neural network and analyzes it against the preset neural-network big-data set or database to obtain the analysis result; a target-initialization-network acquisition module determines the target initialization neural network according to the analysis result; and a neural-network deep-learning module trains the target initialization neural network with the training data to obtain the deep-learning target neural network.
Step 3: process the obtained data source. First, the system constructs, in the platform's data layer, a partial data pool matching the size of the data source. After the cutting process of the preceding step, the data source is stored in the cloud; the encrypted HASH values, in one-to-one correspondence with the data source, are sent to the platform through the system scheduler, and the platform uses the encrypted HASH codes to extract over the network the data source uploaded from the client.
The data-source calculation method works in rounds. In each round, one of the multiple loss functions computes the loss value between the prediction target and the real target, and the parameters of the deep learning model are then adjusted by back-propagating that round's loss value. Specifically, round j uses the i-th of the multiple loss functions to compute the loss value between the prediction target and the real target, then back-propagates that loss value to adjust the parameters of the deep learning model, where i successively takes 1, 2, ..., N (N being the number of loss functions) and j = 1, 2, ..., M (M being the preset number of iterations). If the loss value computed by the loss function used in round j+1 is greater than the loss value computed by the loss function used in round j, the parameter adjustment made by back-propagating round j's loss value is restored, and back-propagation parameter adjustment of the deep learning model continues with the loss function used in round j.
Step 4: train in real time. The cloud-storage service system exposes the mapping address of the data pool constructed at the platform end; the computing framework in the computing platform can fetch data from that pool, extracting and computing data according to the correspondence between the data volume required for computation and the data volume held in the pool, thereby achieving real-time training.
The training system comprises:
a first acquisition unit for obtaining training sample data;
a second acquisition unit for obtaining, based on the pre-established deep learning model and the training sample data, the prediction target of the training sample data;
a third acquisition unit for obtaining the multiple loss functions; and
an adjustment unit for computing, with each of the multiple loss functions in turn, the loss value between the prediction target and the real target of the training sample data, and then back-propagating the computed loss value to adjust the parameters of the deep learning model.
The data of the obtained remote data source are cut and processed as follows: the complete data are cut into equal parts according to a dynamically determined size, and the cut pieces are then unified through a formatted binary data stream into formatted data usable for dynamic training.
The above is only a preferred embodiment of the present invention and is not intended to limit the present invention in any form. Although the present invention has been disclosed by way of a preferred embodiment, any person skilled in the art may, without departing from the scope of the technical scheme of the present invention, use the technical content disclosed above to make minor changes or modifications amounting to equivalent embodiments of equivalent variation. Any simple modification, equivalent change, or refinement made to the above embodiment according to the technical essence of the present invention, without departing from the content of the technical scheme, still falls within the scope of the technical scheme of the present invention.

Claims (3)

1. one kind is based on distributed dynamic training system, it is characterised in that: its circular is as follows:
Step 1: deep learning model is established;Obtain the characteristic information of object initialization neural network;By the object initialization The characteristic information of neural network is analyzed in default neural network big data or database, obtains analysis result;According to institute It states analysis result and determines the object initialization neural network;
Step 2: remote data source is obtained;Data source is data needed for user's neural computing, is provided by user, this System carries out data according to system prescribed form to sequentially cut processing, then arrives the data that processing is completed by network transmission The cloud storage end of computing platform is stored in data source and has trained neural network;
The characteristic information that remote data source is used to obtain object initialization neural network is obtained, for the object initialization is refreshing Characteristic information through network is analyzed in default neural network big data or database, obtains analysis result;Target is initial Change network and obtain module, for determining the object initialization neural network according to the analysis result;Neural network depth Module is practised, for obtaining the target nerve network of deep learning by the training data training object initialization neural network;
Step 3: the data source got is handled;Firstly, a corresponding data can be constructed by system in the data Layer of platform The partial data pond of source size, data source are stored in cloud after the cutting process Jing Guo the first step, can will be in data source one by one HASH value after corresponding encryption is sent to platform by System Scheduler, in platform by encryption after HASH code from The data source transmitted in client is extracted in network;
In the way of circulation, every wheel is calculated data source calculation method using a loss function in the multiple loss function Loss value between the prediction target and the real goal, and then adjusted according to every calculated loss value backpropagation of wheel Parameter in the deep learning model, comprising: jth wheel calculates institute using the i-th loss function in the multiple loss function State prediction target and the real goal between loss value, and then according to the calculated loss value of i-th loss function into Row backpropagation adjusts the parameter in the deep learning model, and i successively takes 1,2 ... ..., and N, N are of multiple loss functions Number, j=1,2 ... ..., M, M are default the number of iterations;If the calculated loss value of loss function used in the wheel of jth+1 is greater than The calculated loss value of the loss function that jth wheel uses then restores the calculated loss value of loss function used according to jth wheel Parameter value and the loss function that is used according to jth wheel in the deep learning model of backpropagation adjustment is to the depth It practises model and carries out backpropagation adjusting parameter;
Step 4: training in real time;It is provided, is counted by cloud storage service system in the mapping address for the data pool that platform end is constructed The Computational frame calculated in platform can call data from data pool, the data having according to data volume and data pool needed for calculating Extraction and calculating that corresponding relationship carries out data are measured, real-time purpose is reached.
2. The distributed dynamic training system according to claim 1, characterized in that the training system comprises:
a first acquisition unit for obtaining training sample data;
a second acquisition unit for obtaining, based on the pre-established deep learning model and the training sample data, the prediction target of the training sample data;
a third acquisition unit for obtaining the multiple loss functions; and
an adjustment unit for computing, with each of the multiple loss functions in turn, the loss value between the prediction target and the real target of the training sample data, and then back-propagating the computed loss value to adjust the parameters of the deep learning model.
3. The distributed dynamic training system according to claim 1, characterized in that the data of the obtained remote data source are cut and processed as follows: the complete data are cut into equal parts according to a dynamically determined size, and the cut pieces are then unified through a formatted binary data stream into formatted data usable for dynamic training.
CN201811606377.6A 2018-12-27 2018-12-27 A distributed dynamic training system Pending CN109669973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811606377.6A CN109669973A (en) 2018-12-27 2018-12-27 A distributed dynamic training system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811606377.6A CN109669973A (en) 2018-12-27 2018-12-27 A distributed dynamic training system

Publications (1)

Publication Number Publication Date
CN109669973A (en) 2019-04-23

Family

ID=66146307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811606377.6A Pending A distributed dynamic training system

Country Status (1)

Country Link
CN (1) CN109669973A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368336A (en) * 2020-05-27 2020-07-03 支付宝(杭州)信息技术有限公司 Secret sharing-based training method and device, electronic equipment and storage medium
CN112562069A (en) * 2020-12-24 2021-03-26 北京百度网讯科技有限公司 Three-dimensional model construction method, device, equipment and storage medium
CN112562069B (en) * 2020-12-24 2023-10-27 北京百度网讯科技有限公司 Method, device, equipment and storage medium for constructing a three-dimensional model
JP2022547722A (en) * 2019-10-14 2022-11-15 ベンタナ メディカル システムズ, インコーポレイテッド Weakly Supervised Multitask Learning for Cell Detection and Segmentation
JP7427080B2 (en) 2019-10-14 2024-02-02 ベンタナ メディカル システムズ, インコーポレイテッド Weakly Supervised Multitask Learning for Cell Detection and Segmentation

Similar Documents

Publication Publication Date Title
TWI794157B Automatic multi-threshold feature filtering method and device
CN109669973A A distributed dynamic training system
CN109711544A Model compression method and apparatus, electronic device, and computer storage medium
CN107871164B Personalized deep learning method for a fog computing environment
GB2588523A Epilepsy seizure detection and prediction using techniques such as deep learning methods
CN107729322A Word segmentation method and device, and method and device for building a sentence-vector generation model
CN108898219A Blockchain-based neural network training method, device, and medium
CN115116109B Method, device, equipment, and storage medium for synthesizing a virtual character's speaking video
CN107797867A Method and device for strengthening edge-side intelligent computing capability
CN109120936A Encoding/decoding method and device for video images
GB2611719A Forecasting multivariate time series data
CN112541529A Bimodal teaching evaluation method fusing expression and posture, device, and storage medium
CN108280207A Method for constructing a perfect hash
CN109381329A Intelligent blind-guiding helmet and operation method thereof
CN113598759B Lower-limb action recognition method and system based on myoelectric feature optimization
CN112258557B Visual tracking method based on spatial attention feature aggregation
CN112036564B Picture recognition method, device, equipment, and storage medium
CN107909003A Gesture recognition method for a large vocabulary
CN106682729A BP neural network MapReduce training method based on local-convergence weight matrix evolution
CN109544530B Method and system for automatically locating structural feature points in X-ray cephalometric images
CN116668068A Industrial-control abnormal traffic detection method based on joint federated learning
CN1556522A Telephone-channel speaker voiceprint identification system
WO2023219647A3 NLP-based identification of cyberattack classifications
CN110008880A Model compression method and device
CN109408853A Power station water characteristic analysis method and system

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190423