CN109754090A - Distributed system and method supporting the execution of multiple machine learning model prediction services - Google Patents
- Publication number
- CN109754090A (Application CN201811613738.XA)
- Authority
- CN
- China
- Prior art keywords
- machine learning
- learning model
- data
- module
- prediction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
A distributed system supporting the execution of multiple machine learning model prediction services, and an operating method thereof, are disclosed. The system includes a master control module and multiple worker modules. The master control module is configured to: send prediction-service execution instructions for multiple machine learning models to the multiple worker modules, and obtain the prediction results of the machine learning models corresponding to the multiple worker modules. Each worker module is configured to: load, according to the received execution instruction, the data required by at least one machine learning model corresponding to the worker module; construct, from the loaded data, the features required by the at least one machine learning model; and execute, based on the constructed features, the prediction service of the at least one machine learning model to generate a prediction result. Through the centralized management of the master control module and the distributed operation of the worker modules, multiple models can thus be deployed and run online simultaneously in a distributed manner. The above distributed system may be used in a wind power generation system.
Description
Technical field
The present invention relates to the field of distributed systems, and in particular to a distributed system supporting the execution of multiple machine learning model prediction services and a corresponding implementation method.
Background technique
In recent years, with the development of artificial intelligence (AI) technology, machine learning has been increasingly used to automatically analyze data, derive rules from it, and use the derived rules to make predictions about unknown data.
A complete machine learning development and application platform includes not only offline research stages such as data collection, data processing, feature engineering, and model training, but also online production stages such as model deployment, grayscale release, and A/B testing. Existing model deployment mainly covers a single model serving a single group of data, or a single model serving multiple groups of data in scheduled batches. For application scenarios in which multiple models are deployed simultaneously and model services are executed over multiple groups of data in real-time batches, the prior art lacks corresponding support.
For this reason, a scheme that can support the online operation of multiple groups of machine learning models is needed.
Summary of the invention
To solve at least one of the above problems, the present invention proposes a distributed architecture that supports deploying multiple models simultaneously and executing model services over multiple groups of data in real-time batches. Through the centralized management of a master control module and the distributed operation of worker modules, multiple models can be deployed and run online simultaneously in a distributed manner. Further, loose coupling and configurability among the components of the framework are achieved, for example, through a unified API and the scalability of shared configuration and feature storage.
According to one aspect of the present invention, a distributed system supporting the execution of multiple machine learning model prediction services is proposed, including a master control module and multiple worker modules. The master control module is configured to: send prediction-service execution instructions for multiple machine learning models to the multiple worker modules, and obtain the prediction results of the machine learning models corresponding to the multiple worker modules. Each worker module is configured to: load, according to the received execution instruction, the data required by at least one machine learning model corresponding to the worker module; construct, from the loaded data, the features required by the at least one machine learning model; and execute, based on the constructed features, the prediction service of the at least one machine learning model to generate a prediction result. Through the centralized management of the master control module and the distributed operation of the worker modules, multiple models can thus be deployed and run online simultaneously in a distributed manner. Preferably, feature construction may involve processing the loaded data within a predetermined time range to obtain the temporal features required by the at least one machine learning model.
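The master/worker flow described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: all names (`Master`, `Worker`, the instruction fields) are hypothetical, and real deployments would dispatch over a network rather than in-process.

```python
# Illustrative sketch of the master/worker flow: the master dispatches one
# prediction-service execution instruction per worker; each worker loads the
# data the instruction names, constructs features, and runs its models.
# All class, method, and field names are hypothetical.

class Worker:
    def __init__(self, models):
        self.models = models  # model name -> callable(features) -> prediction

    def execute(self, instruction):
        # "Load" the raw data named in the instruction (here supplied inline).
        raw = instruction["data"]
        # Construct features from the loaded data (toy scaling step).
        features = [x * instruction.get("scale", 1.0) for x in raw]
        # Run every model the instruction targets and collect the results.
        return {name: self.models[name](features)
                for name in instruction["models"]}

class Master:
    def __init__(self, workers):
        self.workers = workers

    def run(self, instructions):
        # Dispatch one instruction per worker, then aggregate the results.
        return {wid: self.workers[wid].execute(ins)
                for wid, ins in instructions.items()}

workers = {
    "w1": Worker({"mean_model": lambda f: sum(f) / len(f)}),
    "w2": Worker({"max_model": max}),
}
master = Master(workers)
results = master.run({
    "w1": {"models": ["mean_model"], "data": [1.0, 2.0, 3.0]},
    "w2": {"models": ["max_model"], "data": [4.0, 5.0], "scale": 2.0},
})
print(results)  # {'w1': {'mean_model': 2.0}, 'w2': {'max_model': 10.0}}
```

Note how the master only manages dispatch and aggregation; all data loading, feature construction, and prediction happen inside the workers, which is what allows the distributed, simultaneous deployment the patent describes.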
Preferably, the master control module may be configured to: aggregate the obtained prediction results; and either output the aggregated prediction results to a user in a predetermined manner, or supply the aggregated prediction results to an interface service module, the interface service module being included in the system and configured to output the aggregated prediction results to the user in a predetermined manner. This makes it convenient for users to obtain prediction results in a predetermined manner. For example, the interface service module may be a Web server that provides data in a specific format to users through a unified interface, so that users can obtain prediction results in a unified format through conventional Web access (for example, via a URI).
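A minimal sketch of such an interface service module, using only the Python standard library, might look as follows. The endpoint path `/predictions` and the payload shape are illustrative assumptions; the patent only requires that aggregated results be served in a unified format via conventional Web access.

```python
# Sketch of the interface service module: a Web server exposing the
# aggregated prediction results as JSON at a fixed URI.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for results aggregated by the master control module.
AGGREGATED_RESULTS = {"w1": {"mean_model": 2.0}, "w2": {"max_model": 10.0}}

class ResultHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/predictions":
            # Serve every worker's results in one unified JSON document.
            body = json.dumps(AGGREGATED_RESULTS).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

def serve(port=8080):
    # Blocking call; a user can then fetch http://host:8080/predictions
    HTTPServer(("", port), ResultHandler).serve_forever()
```

A user would then retrieve results with an ordinary HTTP GET, regardless of which worker or model produced them.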
Preferably, the distributed system may include a shared configuration storage module. The shared configuration storage module is configured to store configuration information, and each worker module is configured to obtain, according to the configuration information, the related data of at least one machine learning model corresponding to the worker module. Unified storage and sharing of configuration information thus facilitates the addition, deletion, management, and distribution of multiple machine learning models. Specifically, the shared configuration storage module may include an interface unit with various interfaces for docking with multiple machine learning models, thereby facilitating the addition of models of various kinds. The related data may include: the related data of the at least one machine learning model itself, and the related data for processing the input data of the at least one machine learning model, thereby facilitating the worker modules' flexible use of the machine learning models. Preferably, the related data may further include self-starting data, obtained from the configuration information, corresponding to the at least one machine learning model, whereby a worker module can run the machine learning models it obtains in a bootstrap manner.
Preferably, the distributed system may include a shared tool storage module. The shared tool storage module is configured to store a tool set, facilitating each worker module's use of the tool set.
The distributed system of the present invention may also include a shared feature matrix storage module. The shared feature matrix storage module includes multiple feature storage units, each storage unit being implemented as a circular list. Each worker module is configured to load, according to the received execution instruction, the data required by at least one machine learning model corresponding to the worker module into the corresponding feature storage unit in the shared feature storage module. Alternatively or additionally, the distributed system of the present invention may include multiple distributed feature matrix storage modules, each including at least one feature storage unit; each worker module is configured to load, according to the received execution instruction, the data required by at least one machine learning model corresponding to the worker module into the corresponding feature storage unit in the corresponding feature matrix storage module. Since the feature storage modules store the features required by the worker modules to execute the machine learning models, in different application scenarios they can be implemented either as centralized storage shared by all worker modules, or as distributed storage used by a single worker module or a subset of the worker modules.
Preferably, each storage unit is implemented as a circular list, and each worker module can load different types of data into the corresponding feature storage units in a unified format. The circular structure thus prevents the data from occupying excessive storage space, and the unified format makes it convenient to store features of all kinds. The feature matrix storage module may use a map data structure to store the feature matrices, where in each feature matrix the key corresponds to a feature name and the value corresponds to a feature storage unit, thereby simplifying the management of large volumes of data.
In addition, depending on the implementation, each feature matrix, or each feature storage unit it contains, may be provided with a timestamp item for recording the timestamp of the latest data in the circular list of the feature storage unit. The combination of circular lists and timestamps thus facilitates reading the data while saving unnecessary storage space.
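The circular-list storage unit with a timestamp item, and the map from feature names to storage units, can be sketched as follows. The class and field names are illustrative assumptions; the patent does not prescribe an API.

```python
# Sketch of a feature storage unit: a fixed-capacity circular list plus a
# timestamp item recording the newest data point. A feature matrix is then
# a map whose key is a feature name and whose value is a storage unit.

class FeatureStorageUnit:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = [None] * capacity  # circular list
        self.head = 0                    # next write position
        self.size = 0
        self.latest_timestamp = None     # timestamp item

    def append(self, timestamp, value):
        # Overwrite the oldest slot once full, so the unit never
        # occupies more than `capacity` slots.
        self.buffer[self.head] = (timestamp, value)
        self.head = (self.head + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)
        self.latest_timestamp = timestamp

    def values(self):
        # Return the retained entries in chronological order.
        start = (self.head - self.size) % self.capacity
        return [self.buffer[(start + i) % self.capacity]
                for i in range(self.size)]

# Feature matrix: key = feature name, value = feature storage unit.
matrix = {"wind_speed": FeatureStorageUnit(capacity=3)}
for t, v in [(1, 5.0), (2, 5.5), (3, 6.0), (4, 6.5)]:
    matrix["wind_speed"].append(t, v)

print(matrix["wind_speed"].values())          # [(2, 5.5), (3, 6.0), (4, 6.5)]
print(matrix["wind_speed"].latest_timestamp)  # 4
```

The oldest sample `(1, 5.0)` is silently overwritten once the buffer is full, which is exactly the bounded-storage behavior the circular list is meant to provide, and the timestamp item lets a reader check data freshness without scanning the buffer.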
Since different data are generated at different intervals, each worker module may be further configured to select, at a required time interval, the data to be loaded or filled into the locally generated feature storage units.
In addition, each worker module may also be configured to identify data holes within the loading period and load the corresponding values in the feature storage unit as "none". This facilitates each machine learning module's handling of "none" data in its own way.
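Hole identification and "none"-filling can be sketched as follows, assuming samples are expected at a fixed interval within the loading period. The function name and data shapes are illustrative assumptions.

```python
# Sketch of data-hole handling: samples are expected at every tick of a
# fixed interval; any missing tick within the loading period is identified
# as a hole and loaded as "none".

def load_with_holes(samples, start, end, interval):
    """samples: dict timestamp -> value; returns one entry per expected tick."""
    loaded = []
    t = start
    while t <= end:
        # A missing tick is a data hole, stored explicitly as "none" so
        # that downstream models can decide how to treat it.
        loaded.append((t, samples.get(t, "none")))
        t += interval
    return loaded

samples = {0: 1.2, 10: 1.4, 30: 1.9}  # the tick at t=20 is missing
print(load_with_holes(samples, 0, 30, 10))
# [(0, 1.2), (10, 1.4), (20, 'none'), (30, 1.9)]
```

Loading holes explicitly, rather than skipping them, keeps the feature storage unit aligned with the expected time grid, which matters when models consume fixed-length windows of temporal features.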
Preferably, the prediction result generated by a worker module can be used to adjust the operating state of the local device corresponding to that worker module.
According to another aspect of the present invention, an operating method of a distributed system supporting the execution of multiple machine learning model prediction services is proposed. The distributed system includes a master control module and multiple worker modules, and the method includes: the master control module sending prediction-service execution instructions for multiple machine learning models to the multiple worker modules; each worker module loading, according to the received execution instruction, the data required by at least one machine learning model corresponding to the worker module, constructing, from the loaded data, the features required by the at least one machine learning model, and executing, based on the constructed features, the prediction service of the at least one machine learning model to generate a prediction result; and the master control module obtaining the prediction results of the machine learning models corresponding to the multiple worker modules.
Preferably, the method may further include: the master control module aggregating the obtained prediction results and outputting the aggregated prediction results to a user in a predetermined manner.
Preferably, the distributed system further includes an interface service module, and the method may further include: the interface service module obtaining the aggregated prediction results and outputting them to the user in a predetermined manner.
Preferably, the distributed system further includes a shared configuration storage module, and the method may further include: each worker module obtaining, according to the configuration information from the shared configuration storage module, the related data of at least one machine learning model corresponding to the worker module.
Preferably, the distributed system further includes a shared and/or distributed feature matrix storage module, the feature matrix storage module including feature matrices each corresponding to a worker module, each feature matrix including multiple feature storage units, and each storage unit being implemented as a circular list; the method may further include: each worker module loading, according to the received execution instruction, the data required by at least one machine learning model corresponding to the worker module into the corresponding feature storage unit in the shared feature storage module.
Preferably, the method may further include: using the prediction result generated by a worker module to adjust the operating state of the local device corresponding to that worker module.
According to yet another aspect of the present invention, a wind power generation system is proposed, including a master control server, multiple worker servers, and multiple wind turbines, where each worker server corresponds to one wind turbine. The master control server is configured to: send prediction-service execution instructions for multiple machine learning models to the multiple worker servers, and obtain the prediction results of the machine learning models corresponding to the multiple worker servers. Each worker server is configured to: load, according to the received execution instruction, the data obtained from the corresponding wind turbine and required by at least one machine learning model corresponding to the worker server; construct, from the loaded data, the features required by the at least one machine learning model; and execute, based on the constructed features, the prediction service of the at least one machine learning model to generate a prediction result. Each wind turbine is configured to collect its own operation and status signals, the collected operation and status signals including at least the operation and status signals required for the prediction service of the at least one machine learning model loaded on the corresponding worker module. Simultaneous prediction for widely distributed turbines of various kinds is thereby realized.
Preferably, the above wind power system may further include a Web server configured to: obtain the prediction results of the worker servers, and present the prediction results to the user in a unified format.
Preferably, the master control server may be configured to determine, based at least in part on the operation and status information collected by the wind turbines, the prediction-service execution instructions for at least one machine learning model to be sent to the corresponding worker servers, thereby improving the flexibility of turbine prediction.
In addition, each wind turbine may adjust its own mode of operation based at least in part on the prediction result generated by the corresponding worker server, so that machine learning improves the efficiency of the wind power system as a whole.
According to still another aspect of the present invention, an operating method of a distributed system supporting the execution of multiple machine learning model prediction services is provided. The distributed system includes a master control module and multiple worker modules, and the method includes: sending, by the master control module, prediction-service execution instructions for multiple machine learning models to the multiple worker modules; loading, by each worker module according to the received execution instruction, the data required by at least one machine learning model corresponding to the worker module, constructing, from the loaded data, the features required by the at least one machine learning model, and executing, based on the constructed features, the prediction service of the at least one machine learning model to generate a prediction result; and obtaining, by the master control module, the prediction results of the machine learning models corresponding to the multiple worker modules.
Preferably, the method further includes: aggregating, by the master control module, the obtained prediction results, and outputting the aggregated prediction results to a user in a predetermined manner; alternatively, the distributed system further includes an interface service module, the aggregated prediction results are supplied by the master control module to the interface service module, and the interface service module then outputs the aggregated prediction results to the user in a predetermined manner.
Preferably, outputting the aggregated prediction results to the user in a predetermined manner by the interface service module includes: providing, by the interface service module, the prediction result data in a specific format to the user through a unified Web interface.
Preferably, the distributed system further includes a shared configuration storage module, and the method further includes: storing configuration information by the shared configuration storage module; and obtaining, by each worker module according to the configuration information from the shared configuration storage module, the related data of at least one machine learning model corresponding to the worker module.
Preferably, the shared configuration storage module includes an interface unit with various interfaces for docking with multiple machine learning models, and the related data includes: the related data of the at least one machine learning model itself, and the related data on methods for processing the input data of the at least one machine learning model.
Preferably, the related data further includes: self-starting data, obtained from the configuration information, corresponding to the at least one machine learning model.
Preferably, the distributed system further includes a shared tool storage module, and the method further includes: storing a tool set by the shared tool storage module.
Preferably, the distributed system further includes a shared feature matrix storage module, the shared feature matrix storage module including multiple feature storage units, each implemented as a circular list, and the method includes: loading, by each worker module according to the received execution instruction, the data required by at least one machine learning model corresponding to the worker module into the corresponding feature storage unit in the shared feature storage module.
Preferably, the distributed system includes multiple distributed feature matrix storage modules, each including at least one feature storage unit implemented as a circular list, and the method includes: loading, by each worker module according to the received execution instruction, the data required by at least one machine learning model corresponding to the worker module into the corresponding feature storage unit in the corresponding feature matrix storage module.
Preferably, the different types of data required by the multiple machine learning models corresponding to each worker module are loaded by the worker module, according to the received execution instruction, into the corresponding feature storage units in a unified format.
Preferably, the feature matrix storage module stores the feature matrices using a map data structure, where in each feature matrix the key corresponds to a feature name and the value corresponds to a feature storage unit.
Preferably, the method further includes: providing, for each feature matrix or for each feature storage unit it contains, a timestamp item for recording the timestamp of the latest data in the circular list of the feature storage unit.
Preferably, the method further includes: selecting, by each worker module at a required time interval, the data to be loaded or filled into the locally generated feature storage units.
Preferably, the method further includes: identifying, by each worker module, data holes within the loading period and loading the corresponding values in the feature storage unit as "none".
Preferably, the method further includes: processing, by each worker module, the loaded data within a predetermined time range to obtain the temporal features required by the at least one machine learning model.
Preferably, the method further includes: adjusting, according to the prediction result generated by a worker module, the operating state of the local device corresponding to that worker module.
According to one aspect of the present invention, an operating method of a wind power generation system is also provided, where the wind power generation system includes a master control server, multiple worker servers, and multiple wind turbines, each worker server corresponding to one wind turbine. The operating method includes:
sending, by the master control server, prediction-service execution instructions for multiple machine learning models to the multiple worker servers, and obtaining the prediction results of the machine learning models corresponding to the multiple worker servers; loading, by each worker server according to the received execution instruction, the data obtained from the corresponding wind turbine and required by at least one machine learning model corresponding to the worker server, constructing, from the loaded data, the features required by the at least one machine learning model, and executing, based on the constructed features, the prediction service of the at least one machine learning model to generate a prediction result; and collecting, by each wind turbine, its own operation and status signals, the collected operation and status signals including at least the operation and status signals required for the prediction service of the at least one machine learning model loaded on the corresponding worker module.
Preferably, the system further includes a Web server, and the method further includes: obtaining, by the Web server from the master control server, the prediction results of the worker servers, and presenting the prediction results to the user in a unified format.
Preferably, the method includes: determining, by the master control server based at least in part on the operation and status information collected by the wind turbines, the prediction-service execution instructions for at least one machine learning model to be sent to the corresponding worker servers.
Preferably, each wind turbine adjusts its own mode of operation based at least in part on the prediction result generated by the corresponding worker server.
The present invention also provides a computer-readable storage medium storing instructions, where the instructions, when run by at least one computing device, cause the at least one computing device to execute any of the methods described above.
The present invention also provides a system including at least one computing device and at least one storage device storing instructions, where the instructions, when run by the at least one computing device, cause the at least one computing device to execute any of the methods described above.
By combining centralized control with separately loaded multi-model support, the present invention realizes distributed batch deployment of multiple models; through the cooperation among the components, it realizes free addition, deletion, and local loading of machine learning models; and it copes with various problems in data collection and processing while facilitating user access to and use of the result data.
Brief description of the drawings
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following more detailed description of exemplary embodiments of the present disclosure in conjunction with the accompanying drawings, in which the same reference numerals generally represent the same components.
Fig. 1 shows a schematic diagram of a distributed system supporting the execution of multiple machine learning model prediction services according to an embodiment of the present invention.
Fig. 2 shows an example of loading data into a feature storage unit according to an embodiment of the present invention.
Fig. 3 shows four situations in which data holes are encountered.
Fig. 4 shows a flowchart of the corresponding processing methods for these four data-hole situations.
Fig. 5 shows an example of the distributed system of the present invention realized via communication between different hosts.
Fig. 6 shows a flowchart of an operating method of a distributed system supporting the execution of multiple machine learning model prediction services according to an embodiment of the present invention.
Fig. 7 shows an implementation on the master server side according to an embodiment of the present invention.
Fig. 8 shows an implementation on the worker server side according to an embodiment of the present invention.
Detailed description of the embodiments
Preferred embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show preferred embodiments of the present disclosure, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to those skilled in the art.
Machine learning can automatically analyze data to derive rules and use the derived rules to make predictions about unknown data. Generally, to improve the overall efficiency of a large-scale system, a complete machine learning development and application platform needs to be designed.
A complete machine learning development and application platform includes not only offline research stages such as data collection, data processing, feature engineering, and model training, but also online production stages such as model deployment, grayscale release, and A/B testing. Existing model deployment mainly covers a single model serving a single group of data, or a single model serving multiple groups of data in scheduled batches. For application scenarios in which multiple models are deployed simultaneously and model services are executed over multiple groups of data in real-time batches, the prior art lacks corresponding support.
To this end, the present invention proposes a distributed framework supporting the execution of multiple machine learning model prediction services. The framework enables multiple machine learning models to be deployed simultaneously, allows model services to be executed over multiple groups of data in real-time batches, and can solve problems encountered when deploying model groups trained by machine learning (for example, deep learning) tools, such as real-time batch estimation, multiple sampling frequencies of time-series data, data holes, multiple data sources, and the unification of external service interfaces.
Fig. 1 shows a schematic diagram of a distributed system supporting the execution of multiple machine learning model prediction services according to an embodiment of the present invention. As shown in Fig. 1, the distributed system 100 includes a master control module 120 and multiple worker modules 130. Here, the master control module 120 is responsible for controlling the overall flow of the distributed system, while each worker module is used, for example, for local data acquisition and for loading machine learning models and making predictions. Each worker module may correspond to one entity device, or one group of entity devices, served by the above machine prediction. It should be noted that Fig. 1 only illustrates the interaction between the master control module 120 and one worker module 130; the master control module 120 has the same interaction with the other worker modules.
Herein, a "machine learning model" may refer to a data set obtained by a machine learning (for example, deep learning) algorithm through a specific model training process. Before these machine learning models are trained, the collected data need to be processed to obtain the features required for model training (a process that may be referred to as "feature engineering"). The model obtained by training on the features can then make predictions of relevant results for input features. Multiple machine learning models can form a "model group" to cooperate in accomplishing some task. It should be understood that the machine learning models used for prediction services in the present invention are trained models (at least models that have undergone initial training), although in some embodiments the above prediction data can be fed back for retraining the models.
Specifically, the master control module 120 may be used to send prediction-service execution instructions for multiple machine learning models to the multiple worker modules 130, and to obtain the prediction results of the machine learning models corresponding to these worker modules 130. Each worker module 130 may then be used to load, according to the execution instruction received from the master control module 120, the data required by at least one machine learning model corresponding to the worker module, to construct, from the loaded data, the features required by the at least one machine learning model, and to execute, based on the constructed features, the prediction service of the at least one machine learning model to generate a prediction result. Through the centralized management of the master control module 120 and the distributed loading of data and execution of model predictions by the worker modules 130, multiple models can be deployed and run online simultaneously in a distributed manner.
Here, the master control module 120 may determine, based on certain rules, the machine learning models to be loaded by each worker module. In one embodiment, the machine learning models to be loaded by each worker module may be fixed. For example, in a system with, say, 500 worker modules, the same four models, which jointly perform the same business task, may be loaded for each module. In another embodiment, worker modules located in different zones or groups may load different model groups. For example, the 100 worker modules in zone A may load a model group a including three models, while the 70 worker modules in zone B may load a model group b including two models, and so on. In other embodiments, the machine learning models, or some of them, to be loaded by each worker module may be determined based at least in part on input state or operating reference values. For example, the machine learning models that a worker module should currently load may be determined based on the state or operation information the worker module obtains from its corresponding entity device. Alternatively, the master control module 120 may determine the machine learning models to be loaded by each worker module 130 based on other variables (for example, season, weather conditions, etc.).
After receiving the corresponding prediction service instruction from the main control module 120, an operational module 130 may load, according to the instruction, the data required by its corresponding at least one machine learning model (for example, three machine learning models). Based on these data, an instance of the specific machine learning model or model group can be established in the particular operational module 130 for subsequent feature-value calculation and result prediction. The data loaded by the operational module 130 may be operation or status data obtained from its corresponding physical device or set of physical devices; these data can be processed into the input features required for model prediction and fed into the model instance to generate a prediction result.
The input feature data that each operational module 130 uses for model prediction can be stored in a feature matrix storage module. In the distributed system of the present invention, the feature matrix storage module can be implemented in various ways. For example, the feature matrices of all operational modules 130 may be stored in a single shared feature matrix storage module. Such a module may be implemented at the main control module as a global common component, or may have corresponding replicas at all or some of the operational modules. Alternatively or additionally, the present invention may use distributed feature matrix storage modules. For example, one feature matrix storage module may be implemented for each operational module, or a common feature matrix storage module may be shared by the operational modules belonging to the same region, thereby realizing a mix of centralized and distributed storage.
Whether a shared or a distributed feature matrix storage module is used, one feature matrix may correspond to one operational module, or to one or more of the group of physical devices served by that operational module. One feature matrix may contain multiple feature storage units, each of which stores one specific feature. An operational module may load the data required by its corresponding at least one machine learning model into the corresponding feature storage units of the shared feature storage module according to the instruction received from the main control module. For example, if an operational module A needs to load 4 different machine learning models according to the instruction, and these models together require 20 input features, then operational module A can load the data corresponding to these 20 features, obtained from the physical devices, into 20 feature storage units of its corresponding feature matrix. It should be understood that different machine learning models loaded on the same operational module may have entirely different, partly identical, or even completely identical input features; the present invention imposes no restriction in this regard.
In one embodiment, each feature storage unit can be implemented as a circular list. The circular lists within the same feature matrix may have identical or different lengths, and likewise the feature matrices of different operational modules may use circular lists of identical or different lengths. In a simplified implementation, the circular lists of the feature storage units of all operational modules' feature matrices may be unified to one specific length, for example 86400, so as to cyclically store one day of data at one-second intervals.
To facilitate storage, each operational module may, according to the received instruction, load the different types of data required by its corresponding multiple machine learning models into the corresponding feature storage units in a unified format. In one embodiment, a feature storage unit implemented as a circular list may comprise three fields, dataArray, head, and len, where dataArray is the array storing the data, head is the index of the current latest datum, and len is the length of the circular list. When new data arrive, head = (head - 1) % len and dataArray[head] = <current data point> are executed, which both satisfies the needs of feature calculation and enables fast data updates.
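The update rule above can be sketched in Python as follows. Only the three fields dataArray, head, and len and the update formula come from the description; the class and method names are illustrative assumptions.

```python
class FeatureStorageUnit:
    """Minimal sketch of a circular-list feature storage unit."""

    def __init__(self, length):
        self.len = length                  # circular-list length
        self.dataArray = [None] * length   # backing array
        self.head = 0                      # index of the current latest datum

    def add(self, value):
        # New datum: head = (head - 1) % len; dataArray[head] = value
        self.head = (self.head - 1) % self.len
        self.dataArray[self.head] = value

unit = FeatureStorageUnit(5)
for v in [1, 2, 3]:
    unit.add(v)
print(unit.dataArray[unit.head])  # latest value -> 3
```

Because each insertion only decrements an index and overwrites one slot, updates stay O(1) regardless of the list length (e.g. 86400 for one day at one-second intervals).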
A feature matrix may then comprise multiple feature storage units (i.e., multiple circular lists). In one embodiment of the present invention, the feature matrix has a Map data structure in which each key is a feature name and each value is the corresponding feature storage unit. Where all feature storage units of the entire feature matrix use the same timestamps, an additional key named baseTimeStamp can be added to the feature matrix, whose value is the timestamp of the current latest data. This structure is particularly convenient for the dynamic addition and deletion of features: for data from different data sources, the generated feature storage units can easily be added to the feature matrix, which also facilitates decoupling from subsequent processing such as temporal feature computation.
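The Map structure described above can be sketched as a plain dictionary. The key "baseTimeStamp" is fixed by the description; the feature names and values here are illustrative assumptions (in practice each value would be a circular-list storage unit rather than a plain list).

```python
# Feature matrix as a Map: feature name -> storage unit, plus one
# shared "baseTimeStamp" entry for the latest-data timestamp.
feature_matrix = {
    "baseTimeStamp": 1536544800,        # shared latest-data timestamp
    "wind_speed": [7.2, 7.0, 6.9],      # each value stands in for a unit
    "rotor_rpm": [14.1, 14.0, 13.8],
}

# Dynamic addition/deletion of features is a plain dict update:
feature_matrix["nacelle_temp"] = [32.5]
del feature_matrix["rotor_rpm"]
print(sorted(k for k in feature_matrix if k != "baseTimeStamp"))
# -> ['nacelle_temp', 'wind_speed']
```

This is what makes adding a unit generated for a new data source cheap: it is one key insertion, with no change to the other units.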
Because only the latest timestamp shared by all feature storage units is stored in the feature matrix, data for a historical moment or period can be obtained via the position relative to head. For example, with storage at one-second intervals, the datum from 10 seconds earlier can be obtained by executing dataArray[(head + 10) % len].
When an operational module loads, for example, operation or status data from a physical device into the feature matrix, the start time of the loaded data may be the time of the last previously loaded datum, and the end time may be the current latest local time. Since real-time data may arrive with delay, baseTimeStamp can be set after each load according to the timestamp of the latest data actually obtained.
During data loading, the operational module can sample or pad the data at the required time interval before writing it into the feature storage units. In other words, when the data source produces data more frequently than the storage time interval, the source data can be sampled down to the storage interval; conversely, when the data source produces data less frequently than the storage interval, the source data can be replicated to fill the storage slots.
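Both cases can be handled by one resampling pass, sketched below under the assumption that timestamps are integer seconds; the function name and the list-of-pairs input format are illustrative assumptions.

```python
def resample(points, interval):
    """points: [(ts_seconds, value)] sorted ascending.
    Emits one value per `interval` tick from the first to the last
    timestamp: samples when the source is denser than the interval,
    and replicates the previous value when it is sparser."""
    out = []
    ts0, last_ts = points[0][0], points[-1][0]
    i = 0
    for t in range(ts0, last_ts + 1, interval):
        # advance to the latest source point at or before tick t
        while i + 1 < len(points) and points[i + 1][0] <= t:
            i += 1
        out.append(points[i][1])
    return out

print(resample([(0, "a"), (5, "b")], 1))         # fill -> ['a', 'a', 'a', 'a', 'a', 'b']
print(resample([(0, 10), (1, 11), (2, 12)], 2))  # sample -> [10, 12]
```

The first call mirrors the replication case: a 5-second gap is filled by copying the preceding value; the second call drops every other datum of a too-frequent source.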
Fig. 2 shows an example of loading data into a feature storage unit according to an embodiment of the present invention. Specifically, the timestamp of a given time point and its corresponding value can be obtained from the data source, and the operation performed is determined by the relationship between the predetermined loading frequency (Interval), the timestamp of the first datum obtained from the data source (itemTs), and the start time of the loaded data (baseTime). Here it may be assumed that the storage interval is one second (i.e., the loading frequency is once per second). If the timestamps of two consecutive data differ by more than 1 second, the datum of the preceding second is replicated as the current datum. For example, if the first datum is stamped 2018-09-10 10:00:00 and the second datum 2018-09-10 10:00:05, then the 10:00:00 datum is replicated for 10:00:01, the 10:00:01 datum for 10:00:02, and so on up to 10:00:04, after which the datum for 10:00:05 is taken from the data source.
Different data sources usually provide data at different frequencies, so in a preferred embodiment the time interval between two consecutive data of a feature storage unit is configurable on demand. For example, with the frequency configured as 20, the first datum of the storage unit corresponds to second 0, the second datum to second 20, the third to second 40, and so on. In addition, although each feature matrix has been illustrated with a single shared timestamp, in other embodiments separate timestamp entries may instead be set for individual feature storage units or groups thereof.
Since, for certain physical devices, the state over a certain period of time reflects the current operating condition better than the instantaneous state, the operational module of the present invention usually needs to process the loaded data within a predetermined time range to obtain the temporal features required by the at least one machine learning model. For example, the operational module may obtain the feature values within a specific time period according to the timestamps and process them into the temporal features required as model input. Here, a "temporal feature" may refer to a feature characterizing the behavior and state of an object within a delimited time window.
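A temporal feature of this kind can be sketched as a reduction over the last few seconds of a circular list, using the dataArray[(head + k) % len] access rule (k seconds before the latest datum). The helper names and the choice of a mean are illustrative assumptions.

```python
def window_values(dataArray, head, length, window):
    """Values covering the last `window` seconds, latest first:
    dataArray[(head + k) % length] is the datum k seconds ago."""
    return [dataArray[(head + k) % length] for k in range(window)]

def mean_feature(dataArray, head, length, window):
    """One possible temporal feature: mean over the time window."""
    vals = window_values(dataArray, head, length, window)
    return sum(vals) / len(vals)

# latest value 6.0 sits at index head=2; older values wrap around
data = [4.0, 5.0, 6.0]
print(mean_feature(data, head=2, length=3, window=3))  # -> 5.0
```

Other reductions (min, max, variance, trend) over the same window follow the identical access pattern.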
In addition, an operational module may encounter "data hollows" while loading data into the feature matrix. Here, a "data hollow" refers to the absence of data during a certain period due to objective causes (for example, a power outage at the data collection end). To this end, the distributed system of the present invention can identify data hollows within the loading period and store the corresponding values in the feature storage units as "none". Specifically, when data hollows occur, the hollow periods are first recorded in the configuration file; then, when loading data into the feature matrix, the requested data period is compared against the hollow periods one by one. If the loading period contains multiple data hollows, only the first hollow needs to be handled directly: baseTimeStamp is set to the end time of the first hollow, and the same procedure is repeated for each subsequent hollow. Fig. 3 shows four situations in which data hollows are encountered, and Fig. 4 shows a flowchart of the corresponding handling method for these four situations.
As shown in Fig. 4, by comparing the relationships among the load start time (startTime), the load end time (endTime), the corresponding load duration (interval), and the current hollow start time (hollowStartTime) and hollow end time (hollowEndTime), the various overlaps between the loading period and the hollow period shown in Fig. 3 can all be handled, and "none" values can be stored accurately in the corresponding entries of the feature storage units. The stored "none" values can subsequently be fed into the machine learning model as features, and the model can make a corresponding judgment according to the input value.
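The net effect of the overlap comparisons can be sketched with a simplified per-second loader: every slot falling inside a configured hollow period is stored as "none", whatever the overlap case. The function name is an assumption, and `fetch` stands in for the real data-source read.

```python
def load_with_hollows(start, end, hollows, fetch):
    """start/end: second timestamps of the load window [start, end);
    hollows: list of (hollow_start, hollow_end) periods.
    Slots inside any hollow are stored as "none"."""
    out = []
    for t in range(start, end):
        in_hollow = any(h0 <= t < h1 for h0, h1 in hollows)
        out.append("none" if in_hollow else fetch(t))
    return out

vals = load_with_hollows(0, 6, hollows=[(2, 4)], fetch=lambda t: t * 10)
print(vals)  # -> [0, 10, 'none', 'none', 40, 50]
```

The real implementation avoids the per-second membership test by comparing only the window boundaries, but the stored result is the same.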
In a preferred embodiment, the distributed system of the present invention may also include a shared configuration storage module, a shared tool storage module, and/or a log storage module. Unlike the feature storage modules, which store different feature data for each operational module, these configuration and tool modules are preferably implemented as centralized shared modules, and in certain embodiments can be implemented as global common components. The log storage module may be responsible for managing program logs, such as the logs uploaded by each operational module.
The shared configuration storage module can be used to store configuration information, for example all the configuration information of the management scheme for the distributed system. In one embodiment, the shared configuration storage module may store basic data and configuration information for the various machine learning models and model groups. The basic data may include the algorithm and model parameter data of a given machine learning model itself, data on the processing of the model's input data (for example, feature extraction and construction), interaction data between the models of a model group, and so on. The configuration information may include, for example, data enabling an operational module to configure a machine learning model or model group by itself. An operational module can obtain the relevant data of its corresponding at least one machine learning model according to the configuration information. For example, based on the prediction service instruction obtained from the main control module, the operational module can locate the corresponding configuration information in the shared configuration storage module and obtain the relevant data of its corresponding at least one machine learning model. The relevant data may include, for example, data on the at least one machine learning model itself and data on the processing of that model's input.
In one embodiment, the shared configuration storage module may include an interface unit for docking with machine learning models that have various kinds of interfaces, so that models with different interfaces can conveniently be added to the configuration file. Moreover, when a new model is added, self-starting data for that model can also be added to the configuration file. An operational module can thus obtain relevant data including the self-starting data, making it convenient to self-start the corresponding machine learning model locally.
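The registration pattern can be sketched as follows: the configuration maps a model name to a start entry point, and each start function wraps the model's own logic while always returning JSON. All names here are illustrative assumptions; only the idea of a per-model start method returning a uniform JSON result comes from the description.

```python
import json

model_registry = {}  # stands in for the shared configuration store

def register_model(name, start_fn):
    """Adding a new model is one registry entry; no framework change."""
    model_registry[name] = {"name": name, "start": start_fn}

def start_model_x(features):
    # Each model implements its own service logic inside start(),
    # but uniformly returns a JSON string to the master.
    return json.dumps({"model": "model_x", "prediction": sum(features)})

register_model("model_x", start_model_x)
result = model_registry["model_x"]["start"]([1.0, 2.0, 3.0])
print(result)  # -> {"model": "model_x", "prediction": 6.0}
```

Because callers only know the name and the start entry point, the interface complexity of the underlying model stays hidden behind the registry.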
The shared tool storage module can be used to hold a tool set providing the common tools required by the main control module and the operational modules, such as methods for accessing Solr, Oracle, and the like (i.e., method libraries).
In order to provide users with a means of obtaining the prediction results, the main control module 120 of Fig. 1 can also be used to: aggregate the obtained prediction results; and either export the aggregated prediction results to the user in a predetermined manner, or supply the aggregated prediction results to an interface service module, which is included in the system and is used to export the aggregated prediction results to the user in a predetermined manner. In different embodiments, different mechanisms can be used to serve the prediction results.
In one embodiment, the main control module 120 may itself include an API interface for user access. In another embodiment, the above interface service module is a Web server (which may also be an API server) that provides users with data in a specific format through a unified interface. The main control module posts the aggregated prediction results to the API server, which can expose a unified interface according to user demand and return prediction result data in the specific format according to the user's input. For example, the API server can map the prediction results from the operational modules to corresponding URIs, and its Web service module can return result data in JSON format to the user. The user can then conveniently read and extract these data for subsequent use. For example, the prediction result generated by an operational module can subsequently be used to adjust the operating state of the local devices corresponding to that operational module (for example, the above-mentioned physical device or set of physical devices).
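The post-then-serve pattern can be sketched with an in-memory store: the master posts aggregated results keyed by URI, and clients get them back as JSON. A real deployment would sit behind a web framework; the function and URI names here are illustrative assumptions.

```python
import json

_results = {}  # URI -> latest aggregated prediction result

def post_result(uri, data):
    """Called with the master's aggregated prediction results."""
    _results[uri] = data

def get_result(uri):
    """Returns the stored result for `uri` as a JSON string."""
    return json.dumps(_results.get(uri, {}))

post_result("/predictions/worker-1", {"turbine": "T1", "risk": 0.12})
print(get_result("/predictions/worker-1"))
# -> {"turbine": "T1", "risk": 0.12}
```

Serving one JSON document per URI is what lets the API server present a unified interface regardless of how heterogeneous the underlying models are.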
The above main control module, multiple operational modules, and interface service module are mutually independent processes. In one embodiment, these modules are implemented on different physical hosts and can communicate between processes through sockets. Fig. 5 shows an example of the distributed system of the present invention realized through communication between different hosts. As shown in Fig. 5, the main control module 510, the multiple operational modules (Worker execution modules) 520, and the Web service module 530 included in system 500 each run on a dedicated host, named Master (the primary server), Worker (a workspace server), and API server, respectively. These modules can be implemented on a Linux system installed on each host, and each includes the global common components: tool set, configuration center, feature storage units, and log center.
With the system shown in Fig. 5 (and Fig. 1), an operating method can also be realized. Fig. 6 shows a flowchart of an operating method of a distributed system supporting the execution of multi-machine-learning-model prediction services according to an embodiment of the present invention. The distributed system includes at least a main control module and multiple operational modules as shown in Fig. 1.
First, in step S610, the main control module sends prediction service execution instructions for multiple machine learning models to the multiple operational modules respectively (corresponding to "send order" in Fig. 5).
In step S620, each operational module loads, according to the received instruction, the data required by its corresponding at least one machine learning model, constructs the features required by the at least one machine learning model based on the loaded data, and executes the prediction service of the at least one machine learning model based on the constructed features to generate a prediction result. Then, in step S630, the main control module obtains the prediction results of the machine learning models corresponding to the multiple operational modules (corresponding to "send result" in Fig. 5).
In a preferred embodiment, the method may also include a step in which the main control module aggregates the obtained prediction results and exports the aggregated prediction results to the user in a predetermined manner (corresponding to "Post data" in Fig. 5). In this case, the distributed system further includes an interface service module (for example, the Web service module implemented on the API server shown in Fig. 5), and the method may also include a step in which the interface service module obtains the aggregated prediction results and exports them to the user in a predetermined manner.
In embodiments where the distributed system further includes, for example, the shared configuration storage module shown in Fig. 5, the operating method may also include a step in which each operational module obtains the relevant data of its corresponding at least one machine learning model according to configuration information from the shared configuration storage module. Additionally or alternatively, as shown in Fig. 5, the distributed system may also include shared and/or distributed feature matrix storage modules. A feature matrix storage module contains feature matrices, each feature matrix corresponding to one operational module or to one or more of the physical devices related to that operational module; each feature matrix contains multiple feature storage units, and each storage unit is implemented as a circular list. The operating method then further includes a step in which each operational module loads, according to the received instruction, the data required by its corresponding at least one machine learning model into the corresponding feature storage units in the shared feature storage module. Furthermore, the operating method may also include a step of using the prediction result generated by an operational module to adjust the operating state of the local device corresponding to that operational module.
Turning to the host level at which each module resides, Fig. 7 shows an implementation on the primary-server side according to an embodiment of the present invention. As shown in Fig. 7, the main control module on the primary server uses multiple concurrent threads to create socket connections to the operational modules located on the individual workspace servers, and sends a command to each operational module once connected. After each operational module has received the command and either executed the real-time model service and completed the prediction, or during continuous prediction, the main control module can receive the computation results of the operational modules in one pass or continuously, and send the result data to the API server where the Web service module resides.
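The command/result exchange over one such connection can be sketched as follows; a `socketpair` stands in for the real cross-host TCP connection, and the command and result strings are illustrative assumptions.

```python
import socket
import threading

def worker(conn):
    cmd = conn.recv(1024).decode()       # worker blocks waiting for the command
    if cmd == "predict":
        conn.sendall(b'{"result": 42}')  # send the prediction result back
    conn.close()

def dispatch(conn):
    conn.sendall(b"predict")             # master sends the command...
    result = conn.recv(1024).decode()    # ...and receives the result
    conn.close()
    return result

master_end, worker_end = socket.socketpair()
t = threading.Thread(target=worker, args=(worker_end,))
t.start()                                # one thread per worker connection
reply = dispatch(master_end)
t.join()
print(reply)  # -> {"result": 42}
```

In the real system the master would run one `dispatch` per workspace server concurrently, each over its own TCP socket, which is what the multi-threaded connection creation in Fig. 7 provides.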
The API server can provide services externally through an API (for example, by exposing a RESTful API), creating a Post method to receive and store the latest data sent by the main control module, and, according to the different needs of clients, creating corresponding URIs and providing the corresponding data when they are accessed.
Fig. 8 shows an implementation on the workspace-server side according to an embodiment of the present invention. As shown in Fig. 8, the operational module on each workspace server can first start an initialization process, for example under the control of the main control module and according to the configuration information obtained from the configuration center, and then block on the socket to wait for the subsequent prediction service command from the main control module. After the command is received, data loading is performed. This loading includes obtaining data from various data sources (for example, data sources A and B in the figure), loading them into the feature storage matrix, and performing the corresponding processing when data hollows occur. Then, in the data processing stage, the operational module can read the names of all models involved in the prediction service, construct the required data from the configuration center or from local models, generate instances of the at least one machine learning model (for example, models X and Y in the figure), calculate feature values based on the data stored in the feature storage matrix, and invoke the models to generate prediction results. These prediction results can then be aggregated and sent to the main control module.
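One pass of the worker flow just described can be sketched end to end: load raw data, compute a feature, invoke every configured model, and aggregate the results for the master. The models and the single mean feature are stand-in assumptions.

```python
def run_prediction(raw_points, model_names, models):
    """One worker pass: (ts, value) points -> per-model results."""
    # 1. data loading: raw (ts, value) points into a feature column
    column = [v for _, v in raw_points]
    # 2. feature computation: here, mean over the loaded window
    feature = sum(column) / len(column)
    # 3. model invocation: call every configured model on the feature
    results = {name: models[name](feature) for name in model_names}
    # 4. aggregation: one result dict to send back to the master
    return results

models = {
    "model_x": lambda f: f * 2,   # stand-in prediction logic
    "model_y": lambda f: f + 1,
}
out = run_prediction([(0, 1.0), (1, 2.0), (2, 3.0)], ["model_x", "model_y"], models)
print(out)  # -> {'model_x': 4.0, 'model_y': 3.0}
```

The same structure scales to any number of models on one worker, since step 3 only iterates over the configured model names.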
The distributed system supporting multi-machine-learning-model prediction services according to the present invention and its operating method have been described above in conjunction with Figs. 1-8. When bringing multiple machine learning models online, various problems must be faced: differing model interface formats (RESTful API, RPC, command line), differing data sources (Hadoop, Oracle, files, etc.), differing sampling frequencies, data hollows, distributed scalability, and providing a unified external interface according to the client's demand. To this end, the distributed architecture of the present invention realizes support for different models by introducing the configuration center and the extensible feature matrix; models can be dynamically added and deleted without changing the overall architecture, and the feature calculation and online service invocation of a new model can be realized simply by adding the data related to that model. When a new model needs to be added, its name can be added directly in the configuration file, a corresponding source file created according to the model name, and a start method implemented in the source file. The start method can realize each model's own service logic in its own way and then uniformly return data in JSON format to the main control module for the user to access and view. The complexity of model interfaces and feature calculation is thus shielded: a model only needs the corresponding interface to be dynamically added into the overall framework. At the same time, loose coupling and configurability can be achieved between the components of the framework, so as to respond quickly to changes in demand.
This scheme is applicable to all kinds of distributed AI model prediction scenarios in which multiple workspace servers perform online prediction. In one embodiment, the scheme can be implemented as a wind power generation system including a main control server, multiple workspace servers, and multiple wind turbines. Each workspace server can correspond to one wind turbine or one set of wind turbines, for example, identically configured wind turbines in the same geographic section or in the same grouping. The main control server is used to: send prediction service execution instructions for multiple machine learning models to the multiple workspace servers respectively, and obtain the prediction results of the machine learning models corresponding to the multiple workspace servers. Each workspace server is used to: load, according to the received instruction, the data obtained from the corresponding wind turbines and required by its corresponding at least one machine learning model, construct the features required by the at least one machine learning model based on the loaded data, and execute the prediction service of the at least one machine learning model based on the constructed features to generate a prediction result. And each wind turbine is used to: collect its own operation and status signals, the collected operation and status signals including at least the operation and status signals required by the at least one machine learning model loaded on the corresponding operational module to carry out the prediction service.
The wind power generation system can, as described above in connection with the distributed system, be equipped with feature matrix storage units, a shared tool set, a configuration center, and/or a log center, the details of which are not repeated here. It should be emphasized, however, that in the wind power generation system of the present invention each wind turbine can correspond to one feature matrix, and this feature matrix is used to load the relevant data of that wind turbine and for the subsequent generation of features.
Preferably, this system likewise includes a module for providing users with a unified access interface, such as a Web server, which is used to: obtain the prediction results of the workspace servers, and present the prediction results to the user in a unified format.
In one embodiment, the execution of the prediction service can be related to the specific environment in which each wind turbine currently finds itself. To this end, the main control server can be used to determine, based at least in part on the operation and status information collected by a wind turbine itself, the prediction service execution instruction for the at least one machine learning model to be sent to the corresponding workspace server.
In addition, each wind turbine can be used to: adjust its own operating mode based at least in part on the prediction result generated by the corresponding workspace server.
Correspondingly, an embodiment of the present invention also provides an operating method of a wind power generation system, wherein the wind power generation system includes a main control server, multiple workspace servers, and multiple wind turbines, each workspace server corresponding to one wind turbine. The operating method includes: sending, from the main control server to the multiple workspace servers respectively, prediction service execution instructions for multiple machine learning models, and obtaining the prediction results of the machine learning models corresponding to the multiple workspace servers; loading, by each workspace server according to the received instruction, the data obtained from the corresponding wind turbine and required by its corresponding at least one machine learning model, constructing the features required by the at least one machine learning model based on the loaded data, and executing the prediction service of the at least one machine learning model based on the constructed features to generate a prediction result; and collecting, by each wind turbine, its own operation and status signals, the collected operation and status signals including at least the operation and status signals required by the at least one machine learning model loaded on the corresponding operational module to carry out the prediction service.
Wherein, the system may also include a Web server, and the method may also include: obtaining, by the Web server from the main control server, the prediction results of the workspace servers, and presenting the prediction results to the user in a unified format.
Wherein, the method may include: determining, by the main control server based at least in part on the operation and status information collected by a wind turbine itself, the prediction service execution instruction for the at least one machine learning model to be sent to the corresponding workspace server.
Wherein, each wind turbine adjusts its own operating mode based at least in part on the prediction result generated by the corresponding workspace server.
Furthermore, the method according to the present invention may also be implemented as a computer program or computer program product comprising computer program code instructions for executing the steps defined in the above method of the present invention.
Alternatively, the present invention may also be implemented as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) on which executable code (or a computer program, or computer instruction code) is stored, which, when executed by a processor of at least one computing device (an electronic device, computing device, server, or the like), causes the processor to execute the steps of the above method according to the present invention.
The solution of the present invention may also be implemented as a system comprising at least one computing device and at least one storage device storing instructions, wherein the instructions, when run by the at least one computing device, cause the at least one computing device to execute the steps of the above method of the present invention.
Those skilled in the art will also understand that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or a combination of both.
The flowcharts and block diagrams in the drawings show possible architectures, functions, and operations of the systems and methods according to multiple embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be realized by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
Various embodiments of the present invention have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and changes will be obvious to those of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical application, or their improvement over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. A distributed system supporting execution of multiple machine learning model prediction services, comprising a main control module and a plurality of operational modules, wherein:
the main control module is configured to send prediction service execution instructions for a plurality of machine learning models to the plurality of operational modules, respectively, and to obtain the prediction results of the machine learning models corresponding to the plurality of operational modules; and
each operational module is configured to load, according to a received execution instruction, the data required by at least one machine learning model corresponding to that operational module, to construct, based on the loaded data, the features required by the at least one machine learning model, and to execute, based on the constructed features, the prediction service of the at least one machine learning model to generate a prediction result.
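As an illustration only, the master/worker pattern of claim 1 can be sketched in Python. The class names, the doubling "feature construction", and the toy models (`sum`, `max`) are assumptions for demonstration, not part of the disclosure:

```python
from concurrent.futures import ThreadPoolExecutor

class WorkerModule:
    """Loads data, builds features, and runs one model's prediction service."""
    def __init__(self, name, model):
        self.name = name
        self.model = model          # callable: features -> prediction

    def load_data(self, instruction):
        # Stand-in for loading the data the assigned model needs.
        return instruction["data"]

    def build_features(self, data):
        # Stand-in for feature construction from the loaded data.
        return [x * 2.0 for x in data]

    def execute(self, instruction):
        features = self.build_features(self.load_data(instruction))
        return self.name, self.model(features)

class MasterModule:
    """Sends execution instructions to workers and collects their results."""
    def __init__(self, workers):
        self.workers = workers

    def run_predictions(self, instructions):
        with ThreadPoolExecutor(max_workers=len(self.workers)) as pool:
            futures = [pool.submit(w.execute, ins)
                       for w, ins in zip(self.workers, instructions)]
            return dict(f.result() for f in futures)

workers = [WorkerModule("temp_model", sum),
           WorkerModule("vibration_model", max)]
master = MasterModule(workers)
results = master.run_predictions([{"data": [1.0, 2.0]}, {"data": [3.0, 4.0]}])
print(results)  # {'temp_model': 6.0, 'vibration_model': 8.0}
```

The thread pool stands in for the distributed dispatch; in the claimed system each worker would run on its own node and the main control module would communicate over the network.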
2. The system of claim 1, wherein the main control module is further configured to:
aggregate the obtained prediction results; and
output the aggregated prediction results to a user in a predetermined manner, or provide the aggregated prediction results to an interface service module, wherein the interface service module is included in the system and is configured to output the aggregated prediction results to the user in the predetermined manner.
3. The system of claim 2, wherein the interface service module is a Web server configured to provide users, through a unified interface, with prediction result data in a specific format.
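A minimal sketch of the aggregation and unified-format output described in claims 2-3, with JSON chosen as the "specific format" purely as an assumption (the patent does not name one):

```python
import json

def aggregate(results):
    """Merge per-worker prediction results into one summary payload."""
    return {"predictions": results, "count": len(results)}

def render_unified_response(summary):
    """What a unified Web interface might return for any model's results."""
    return json.dumps(summary, sort_keys=True)

summary = aggregate({"temp_model": 6.0, "vibration_model": 8.0})
print(render_unified_response(summary))
# {"count": 2, "predictions": {"temp_model": 6.0, "vibration_model": 8.0}}
```

In a real deployment the rendered payload would be served over HTTP by the interface service module rather than printed.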
4. The system of claim 1, further comprising a shared configuration storage module, wherein:
the shared configuration storage module is configured to store configuration information; and
each operational module is configured to obtain, according to the configuration information, the related data of the at least one machine learning model corresponding to that operational module.
5. The system of claim 4, wherein the shared configuration storage module comprises an interface unit for interfacing with multiple machine learning models that expose various kinds of interfaces, and wherein the related data comprises: related data of the at least one machine learning model itself, and related data of the method for processing the input data of the at least one machine learning model.
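One way to picture the shared configuration store of claims 4-5 is a keyed registry from which each worker looks up its model's related data. The keys and adapter names below are illustrative assumptions, not from the patent:

```python
# Each entry bundles the model's own related data, its input-processing
# method, and which interface adapter the interface unit should use.
SHARED_CONFIG = {
    "temp_model": {
        "model_uri": "/models/temp_model.bin",   # model's own related data
        "input_processing": {"scale": 2.0},      # input-data processing method
        "interface": "sklearn",                  # adapter used by the interface unit
    },
}

class ConfigStore:
    def __init__(self, entries):
        self._entries = entries

    def related_data(self, model_name):
        """Return the related data a worker needs for its assigned model."""
        return self._entries[model_name]

store = ConfigStore(SHARED_CONFIG)
entry = store.related_data("temp_model")
print(entry["interface"])  # sklearn
```

In practice such a store would be a shared service (e.g. a distributed key-value store) so every operational module reads a consistent configuration.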
6. A wind power generation system, comprising a main control server, a plurality of worker servers, and a plurality of wind turbines, wherein each worker server corresponds to one wind turbine, and wherein:
the main control server is configured to send prediction service execution instructions for a plurality of machine learning models to the plurality of worker servers, respectively, and to obtain the prediction results of the machine learning models corresponding to the plurality of worker servers;
each worker server is configured to load, according to a received execution instruction, the data required by at least one machine learning model corresponding to that worker server, the data being obtained from the corresponding wind turbine, to construct, based on the loaded data, the features required by the at least one machine learning model, and to execute, based on the constructed features, the prediction service of the at least one machine learning model to generate a prediction result; and
each wind turbine is configured to collect its own operation and status signals, the collected operation and status signals including at least the operation and status signals required by the at least one machine learning model loaded on the corresponding worker server to perform the prediction service.
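The per-turbine pipeline of claim 6 can be sketched as one worker paired with one turbine, pulling its signals and running its model. The signal names (`rotor_speed`, `gearbox_temp`), the feature selection, and the toy threshold model are assumptions for illustration only:

```python
class WindTurbine:
    """Collects its own operation and status signals (stubbed with fixed readings)."""
    def __init__(self, turbine_id, signals):
        self.turbine_id = turbine_id
        self._signals = signals

    def collect_signals(self):
        return dict(self._signals)

class WorkerServer:
    """Paired with one turbine: loads its data, builds features, runs the model."""
    def __init__(self, turbine, model):
        self.turbine = turbine
        self.model = model

    def run_prediction(self):
        data = self.turbine.collect_signals()                    # load turbine data
        features = [data["rotor_speed"], data["gearbox_temp"]]   # feature construction
        return self.turbine.turbine_id, self.model(features)

turbine = WindTurbine("wt-01", {"rotor_speed": 12.5, "gearbox_temp": 64.0})
worker = WorkerServer(turbine, model=lambda f: max(f) > 60.0)  # toy fault flag
print(worker.run_prediction())  # ('wt-01', True)
```

The one-to-one pairing keeps each model's data local to its turbine's server, while the main control server (not shown) fans instructions out and gathers the per-turbine results.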
7. A method of operating a distributed system supporting execution of multiple machine learning model prediction services, the distributed system comprising a main control module and a plurality of operational modules, the method comprising:
sending, from the main control module to the plurality of operational modules respectively, prediction service execution instructions for a plurality of machine learning models;
loading, by each operational module according to a received execution instruction, the data required by at least one machine learning model corresponding to that operational module, constructing, based on the loaded data, the features required by the at least one machine learning model, and executing, based on the constructed features, the prediction service of the at least one machine learning model to generate a prediction result; and
obtaining, by the main control module, the prediction results of the machine learning models corresponding to the plurality of operational modules.
8. A method of operating a wind power generation system, wherein the wind power generation system comprises a main control server, a plurality of worker servers, and a plurality of wind turbines, each worker server corresponding to one wind turbine, the method comprising:
sending, from the main control server to the plurality of worker servers respectively, prediction service execution instructions for a plurality of machine learning models, and obtaining the prediction results of the machine learning models corresponding to the plurality of worker servers;
loading, by each worker server according to a received execution instruction, the data required by at least one machine learning model corresponding to that worker server, the data being obtained from the corresponding wind turbine, constructing, based on the loaded data, the features required by the at least one machine learning model, and executing, based on the constructed features, the prediction service of the at least one machine learning model to generate a prediction result; and
collecting, by each wind turbine, its own operation and status signals, the collected operation and status signals including at least the operation and status signals required by the at least one machine learning model loaded on the corresponding worker server to perform the prediction service.
9. A computer-readable storage medium storing instructions, wherein the instructions, when executed by at least one computing device, cause the at least one computing device to perform the method of claim 7 or 8.
10. A system comprising at least one computing device and at least one storage device storing instructions, wherein the instructions, when executed by the at least one computing device, cause the at least one computing device to perform the method of claim 7 or 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811613738.XA CN109754090A (en) | 2018-12-27 | 2018-12-27 | Distributed system and method supporting execution of multiple machine learning model prediction services |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109754090A true CN109754090A (en) | 2019-05-14 |
Family
ID=66404066
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811613738.XA Pending CN109754090A (en) Distributed system and method supporting execution of multiple machine learning model prediction services
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109754090A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102434387A (en) * | 2011-11-16 | 2012-05-02 | 三一电气有限责任公司 | Draught fan detection and diagnosis system |
CN107609652A (en) * | 2017-08-30 | 2018-01-19 | 第四范式(北京)技术有限公司 | Perform the distributed system and its method of machine learning |
CN107622310A (en) * | 2017-08-30 | 2018-01-23 | 第四范式(北京)技术有限公司 | For performing the distributed system and its method of machine learning |
CN107924334A (en) * | 2015-08-05 | 2018-04-17 | 华为技术有限公司 | The rebalancing and elastic storage scheme of the distributed cyclic buffer of elasticity name |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110414187A (en) * | 2019-07-03 | 2019-11-05 | 北京百度网讯科技有限公司 | Model safety delivers the system and method for automation |
CN110414187B (en) * | 2019-07-03 | 2021-09-17 | 北京百度网讯科技有限公司 | System and method for model safety delivery automation |
CN110555550A (en) * | 2019-08-22 | 2019-12-10 | 阿里巴巴集团控股有限公司 | Online prediction service deployment method, device and equipment |
CN110555550B (en) * | 2019-08-22 | 2023-06-23 | 创新先进技术有限公司 | Online prediction service deployment method, device and equipment |
CN110808881A (en) * | 2019-11-05 | 2020-02-18 | 广州虎牙科技有限公司 | Model deployment method and device, target monitoring method and device, equipment and system |
CN110808881B (en) * | 2019-11-05 | 2021-10-15 | 广州虎牙科技有限公司 | Model deployment method and device, target monitoring method and device, equipment and system |
CN113841371A (en) * | 2020-02-25 | 2021-12-24 | 华为技术有限公司 | Methods, systems, and computer readable media for integrating back-end as-a-service with online services |
CN113841371B (en) * | 2020-02-25 | 2024-01-09 | 华为云计算技术有限公司 | Methods, systems, and computer readable media for integrating backend, instant services with online services |
CN111523676A (en) * | 2020-04-17 | 2020-08-11 | 第四范式(北京)技术有限公司 | Method and device for assisting machine learning model to be online |
CN111523676B (en) * | 2020-04-17 | 2024-04-12 | 第四范式(北京)技术有限公司 | Method and device for assisting machine learning model to be online |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109754090A (en) | Distributed system and method supporting execution of multiple machine learning model prediction services | |
CN108229686A (en) | Model training, Forecasting Methodology, device, electronic equipment and machine learning platform | |
CN104541247B (en) | System and method for adjusting cloud computing system | |
CN107077385B (en) | For reducing system, method and the storage medium of calculated examples starting time | |
CN111444019B (en) | Cloud collaborative deep learning model distributed training method and system | |
CN101387953A (en) | Collaboration software development system and method | |
CN116127899B (en) | Chip design system, method, electronic device, and storage medium | |
CN111800468A (en) | Cloud-based multi-cluster management method, device, medium and electronic equipment | |
CN111143039A (en) | Virtual machine scheduling method and device and computer storage medium | |
CN108984496A (en) | The method and apparatus for generating report | |
CN109284227A (en) | A kind of automation method for testing pressure and device calculate equipment and storage medium | |
CN109949054A (en) | Key code determines method, apparatus, equipment and storage medium | |
CN115860143A (en) | Operator model generation method, device and equipment | |
CN109144846B (en) | Test method and device for testing server | |
CN112044061B (en) | Game picture processing method and device, electronic equipment and storage medium | |
CN110175171B (en) | System for IT equipment intelligent recommendation of on-shelf position | |
CN112131010A (en) | Server layout method and device, computer equipment and storage medium | |
CN111090401A (en) | Storage device performance prediction method and device | |
CN102981461A (en) | Information processing apparatus and method, server apparatus, server apparatus control method, and program | |
CN115037665A (en) | Equipment testing method and device | |
CN114862098A (en) | Resource allocation method and device | |
CN113934710A (en) | Data acquisition method and device | |
CN114610443A (en) | Multi-service deployment method and device based on k8s container cluster and electronic equipment | |
CN109151007B (en) | Data processing method, core server and transmission server for application scheduling | |
CN110225077A (en) | Synchronous method, device, computer equipment and the computer storage medium of change supply data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||